Richard Timberlake and the Case for Monetary Rules

Renee Haltom interviewed Richard Timberlake, perhaps best known as a staunch supporter of fixed rules rather than government discretion for monetary policy, in Econ Focus, a publication of the Federal Reserve Bank of Richmond (First Quarter 2014, pp. 24-29). Here’s a sample of Timberlake’s views:

He argues that the Fed is inevitably subject to political influence.

Until maybe 10 or 20 years ago, economists who studied money felt that they could prescribe some logical policy for the Federal Reserve, and ultimately the Fed would see the light and follow it. That proved illusory. A central bank is essentially a government agency, no matter who “owns” it. The Fed’s titular owners are the member banks, but the national government has all the controls over the Fed’s policies and profits. And as with all government agencies, the Fed is subject to public choice pressures and motives.

If the Federal Reserve followed a firm rule, he argues, asset bubbles would be unlikely.

The Fed shouldn’t pay any heed at all to asset bubbles. If it followed rigorously a constrained price level, or quantity-of-money rule, I don’t think there would be bubbles. Markets would anticipate stability. Markets today, however, anticipate, with good reason, all the government interventions that lead to bubbles. If we had a stable price level policy and everybody understood it and believed it would continue, there wouldn’t be any serious bubbles. We don’t even know whether the 1929 “bubble” was even a bubble, because after the Fed’s unwitting destruction of bank credit, no one could distinguish in the rubble what was sound from what might have been unsound.

If lender of last resort services are needed, he argues, the private sector could provide them.

Private institutions will always furnish lender of last resort services if markets are free to operate and if there are no government policies in place that cause destabilization. In the last half of the 19th century, the private clearinghouse system was a lender of last resort that worked perfectly. Its activities demonstrated that private markets handle the lender of last resort function better than any government-sponsored institution.

The overall impression from the interview is that Timberlake is open to a variety of monetary rules, as long as the rules are written in stone. He offers positive remarks about a gold standard, about a monetary policy focused solely on the price level, and about a monetary policy that would involve a fixed rate of growth in the money supply. As one example, he describes his reaction to the rule Milton Friedman proposed in the 1970s for a fixed rate of growth in the money supply.

Friedman recommended a steadily increasing quantity of money — that is, bank checking deposits and currency — between 2 and 5 percent per year. Prices might rise or fall a little, but everybody would know that things were going to get better or be restrained simply because the Fed had to follow a quantity-of-money rule. I wrote him a letter at the time and remarked, “I agree with your idea of a stable rate of increase in the quantity of money, and I suggest a rate of 3.65 percent per year, and 3.66 percent for leap years — 1/100 of 1 percent per day.”
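Timberlake's suggested rate is simple daily arithmetic rather than compounding: 0.01 percent per day summed over 365 (or 366) days gives exactly the annual figures he quotes. A quick sketch confirms this, and shows that daily compounding would come out slightly higher:

```python
# Timberlake's rule: 1/100 of 1 percent per day, accumulated simply.
daily_rate = 0.0001  # 1/100 of 1 percent, as a decimal fraction

annual_simple = daily_rate * 365  # = 0.0365, i.e. 3.65% in an ordinary year
annual_leap = daily_rate * 366    # = 0.0366, i.e. 3.66% in a leap year

# Compounding daily instead would slightly overshoot his 3.65% figure:
annual_compounded = (1 + daily_rate) ** 365 - 1  # roughly 3.72%
```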

I can feel the pull of Timberlake’s view, swirling around my ankles, but I am not persuaded. When you lash yourself to the mast, as Odysseus did to resist the call of the Sirens, you are indeed constrained from giving in to temptation. But if an unforeseen problem arises while you are lashed to the mast, you are incapacitated from dealing with it. As Timberlake readily concedes, having the Federal Reserve surrender all discretion is not at all likely. Thus, the pragmatic questions are about what kinds of constraints on the Fed, including a continual process of transparency and self-explanation, are most useful.

As a coda, Timberlake has a nice story about Milton Friedman offering him some key advice when he was a graduate student.

I recall the time when I presented a potential Ph.D. thesis proposal at Chicago to the economics department. The audience included professors and many able graduate students. I could feel that my presentation was not going over very well. After the ordeal was over, Friedman said to me, “Come back up to my office.” When we were there, he said, “The committee and the department think that your thesis proposal has less than a 0.5 probability of acceptance.” I knew that was coming, and I despondently replied that I had had a very frustrating time “finding a thesis.” My words suggested that a thesis was a bauble that one found in a desert of intellect that no one else had discovered. It was then that Milton Friedman turned me around and started me on the road to being an economist. “Dick,” he said, “theses are formed, not found.” It was the single most important event in my professional life. I finally could grasp what economic research was supposed to be.

The Secular Stagnation Controversy

For economists, the word “secular” isn’t about a lack of religious belief. Instead, it refers to whether a condition is expected to last for a long and indefinite period–and in particular, a period not related to whether the economy is entering or exiting a recession. Thus, the concept of “secular stagnation” is the idea that the U.S. economy is not just suffering through the aftereffects of the Great Recession, but is for some reason entering a longer-term period of stagnant growth. Coen Teulings and Richard Baldwin have edited a useful e-book of 13 short essays with a variety of perspectives, Secular Stagnation: Facts, Causes and Cures. In the overview, they write: “Secular stagnation, we have learned, is an economist’s Rorschach Test. It means different things to different people.”

I’ve taken a couple of previous cracks at secular stagnation on this blog. I discussed the original theory of secular stagnation as put forward in 1938 in “Secular Stagnation: Back to Alvin Hansen” (December 12, 2013). Hansen was concerned that in the depressed economy of his time, with lower birthrates and a lack of discoveries of new resources and territories, the push of new inventions would not be enough to keep investment levels high and the economy growing. I have also discussed “Sluggish U.S. Investment” (June 27, 2014) in the context of a discussion of secular stagnation by Larry Summers. Here, let me give a sense of how a range of economists are looking at different aspects of the “secular stagnation” issue by quoting (without prejudice against the other essays!) a few sentences from six of the essays.

Larry Summers: “This chapter explains why a decline in the full-employment real interest rate (FERIR) coupled with low inflation could indefinitely prevent the attainment of full employment. . . . Broadly, to the extent that secular stagnation is a problem, there are two possible strategies for addressing its pernicious impacts. … The first is to find ways to further reduce real interest rates. These might include operating with a higher inflation rate target so that a zero nominal rate corresponds to a lower real rate. Or it might include finding ways such as quantitative easing that operate to reduce credit or term premiums. These strategies have the difficulty of course that even if they increase the level of output, they are also likely to increase financial stability risks, which in turn may have output consequences. … The alternative is to raise demand by increasing investment and reducing saving. … Appropriate strategies will vary from country to country and situation to situation. But they should include increased public investment, reductions in structural barriers to private investment and measures to promote business confidence, a commitment to maintain basic social protections so as to maintain spending power, and measures to reduce inequality and so redistribute income towards those with a higher propensity to spend.”

Barry Eichengreen: “Pessimists have been predicting slowing rates of invention and innovation for centuries, and they have been consistently wrong. This chapter argues that if the US does experience secular stagnation over the next decade or two, it will be self-inflicted. The US must address its infrastructure, education, and training needs. Moreover, it must support aggregate demand to repair the damage caused by the Great Recession and bring the long-term unemployed back into the labour market.”

Robert J Gordon: “US real GDP has grown at a turtle-like pace of only 2.1% per year in the last four years, despite a rapid decline in the unemployment rate from 10% to 6%. This column argues that US economic growth will continue to be slow for the next 25 to 40 years – not because of a slowdown in technological growth, but rather because of four ‘headwinds’: demographics, education, inequality, and government debt.”

Paul Krugman: “Larry Summers’ speech at the IMF’s 2013 Annual Research Conference raised the spectre of secular stagnation. This chapter outlines three reasons to take this possibility seriously: recent experience suggests the zero lower bound matters more than previously thought; there had been a secular decline in real interest rates even before the Global Crisis; and deleveraging and demographic trends will weaken future demand. Since even unconventional policies may struggle to deal with secular stagnation, a major rethinking of macroeconomic policy is required.”

Edward L Glaeser: “US investment and innovation – the most standard ingredients in long-run economic growth – are not declining. The technological world that surrounds us is anything but stagnant. Yet we can have little confidence that the continuing flow of new ideas will solve the US’s most worrying social trend: the 40-year secular rise in the number and share of jobless adults. … The massive secular trend in joblessness is a terrible social problem for the US, and one that the country must try to address. I do not believe that this is a macroeconomic problem that can be solved with more investment or tax cuts alone. . . . Alongside targeted investments in education and training, radical structural reforms to America’s safety net are needed to ensure it does less to discourage employment.”

Gauti B. Eggertsson and Neil Mehrotra: “Japan’s two-decade-long malaise and the Great Recession have renewed interest in the secular stagnation hypothesis, but until recently this theory has not been explicitly formalised. This chapter explains the core logic of a new model that does just that. In the model, an increase in inequality, a slowdown in population growth, and a tightening of borrowing limits all reduce the equilibrium real interest rate. Unlike in other recent models, a period of deleveraging puts even more downward pressure on the real interest rate so that it becomes permanently negative.”

Richard C. Koo: “The Great Recession is often compared to Japan’s stagnation since 1990 and the Great Depression of the 1930s. This chapter argues that the key feature of these episodes is the bursting of a debt-financed asset bubble, and that such ‘balance sheet recessions’ take a long time to recover from. There is no need to suffer secular stagnation if the government offsets private sector deleveraging with fiscal stimulus. However, until the general public understands the fallacy of composition, democracies will struggle to implement such policies during balance sheet recessions.”

Volumes like this feel a bit like the parable of the blind men and the elephant, in which each man grabs one part of the elephant and then declares what an elephant feels like, depending on whether he has hold of a leg, tail, trunk, ear, tusk, side, or belly. It’s easy to grab hold of one part of the economy, but it can be difficult to see the interactions across the parts, or to see the economy as a whole.

Outsource Corporate Boards?

Many economists have been distinctly uncomfortable with the notion of a company owned by shareholders but run by corporate executives hired by a board of directors since at least 1932, when Adolf A. Berle, Jr., and Gardiner C. Means wrote a book called “The Modern Corporation and Private Property.” The early decades of the 20th century saw a huge transformation of the ownership of large U.S. companies, away from being owned (or effectively controlled) by a family or an individual and toward being owned by shareholders. As Berle and Means wrote:

“In 1928, when the project was launched, the financial machinery was developing so rapidly as to indicate that we were in the throes of a revolution in our institution of private property, at least as applied to industrial economic uses. … The translation of perhaps two-thirds of the industrial wealth of the country from individual ownership to ownership by the large, publicly financed corporations vitally changes the lives of property owners, the lives of workers, and the methods of property tenure. The divorce of ownership from control consequent on that process almost necessarily involves a new form of economic organization of society.”

The “separation of ownership and control,” as it is often called, has been an ongoing problem ever since. The well-founded concern is that the board of directors, which is supposed to function on behalf of the shareholders who technically own the company, is instead effectively chosen by corporate management. There have been periodic pushes for corporate boards to have broader representation, or members from outside the circles of that industry, or greater independence from management. But ultimately, most board members are part-timers who parachute in a few times a year for board meetings. They often lack the information and incentives to oversee or to challenge corporate management effectively.

Stephen M. Bainbridge and M. Todd Henderson offer an alternative vision of how corporate boards might work in “Boards-R-Us: Reconceptualizing Corporate Boards,” which appears in the May 2014 issue of the Stanford Law Review. They write (footnotes omitted):

Almost every corporate governance reform proposed over the past several decades has focused on the board of directors. . . . This battle is fought on the grounds of who board members are, whether they are independent, who appoints them, how they are elected, how they are compensated, what the standards for their conduct and liability are, whether there should be more independent directors, what the optimal board size is, and so forth. All of these reforms are an attempt to optimize the monitoring and governance role played by the board. Despite the long and zealous efforts of corporate law reformers to understand and improve the board of directors, there is a gaping hole in the corporate governance literature. No one has yet questioned a fundamental assumption of the current corporate governance model—that is, only individuals, acting as sole proprietors, should provide professional board services.

Bainbridge and Henderson propose that when a firm is choosing a board of directors, instead of hiring a group of individuals to serve on the board, the firm should be allowed to hire a “board service provider,” an outside company that would provide board of director services to the firm. They write:

In other words, just as companies outsource their external audit function to an accounting firm rather than multiple individuals, the board of directors function would be outsourced to a professional services company. To see our idea, imagine a firm, Boards-R-Us, Inc., serving as the board of Acme Co. Instead of Acme shareholders hiring a dozen or so individual sole proprietors to provide board functions, they instead hire one firm—a BSP—to provide those functions, whatever they may be. Boards-R-Us would still act through individual agents, but the responsibility for managing a particular firm, within the meaning of state corporate law, would be that of Boards-R-Us the entity. This means, for instance, that a suit by shareholders for breach of the board’s fiduciary duties would be against Boards-R-Us, and not against individuals or groups of individuals.

A company acting as board service provider would continue to make all the same decisions as a current board of directors: that is, hiring and firing top management, setting compensation, having final approval over major decisions like takeovers and mergers, and so on. As the authors write: “the basic version of our proposal is substantially similar to the current board model, with the one key difference that the board consists of an ‘it’ instead of a collection of individuals.” Indeed, in choosing a board of directors, it would be possible to have a slate of individuals run against a board service provider–or against several different board service providers. It would even be possible to have a board of directors that was, say, half made up of a board service provider, while the other half consisted of individual board members chosen separately by shareholders.

What’s the case for believing that, at least for some companies, a board service provider might be an improvement? One set of arguments is that current boards of directors often face problems of limited time, limited information, and a lack of specialist expertise. A board service provider might be well-positioned to employ full-time providers of board services, with access to both internal and external sources of information, and the ability to draw on specialist expertise.

And what about the risk that, if we are already worried about mutual backrubs between boards of directors and top management, the problem might get even worse if there was only a single board service provider? This concern seems legitimate, but it’s worth remembering just how incestuous some of the current board situations are. Bainbridge and Henderson remind us that when the board of directors at Disney decided that Michael Eisner deserved $140 million for one year of work, the board included a number of Eisner’s friends, “including actor Sidney Poitier, the principal of the elementary school Eisner’s children attended, and the architect who designed one of Eisner’s homes.” More recently, the media conglomerate IAC, chaired by Barry Diller, “appointed thirty-one-year-old graduate student Chelsea Clinton to the board. … [F]ormer board members of IAC include Diller’s wife, the fashion designer Diane von Furstenberg, and General Norman Schwarzkopf, and … the current board also includes von Furstenberg’s son, Alex.”

Given that the oversight of current boards of directors is often pretty low, Bainbridge and Henderson argue that board service providers “would be more accountable than the group of individuals currently providing board services; indeed, we believe that the accountability of the whole would be greater than the sum of the liabilities of the parts.” They argue that a board service provider might worry more about reputation than a random individual board member, and also that a company providing board services might be more susceptible to legal oversight and liability.

Allowing companies to become board service providers is no magic potion to solve all the problems of corporate governance. But more than 80 years after Berle and Means described the problems that arise from a separation of corporate ownership and control, any new proposals for addressing it are welcome.

(Figure from The Economist, August 16.)

Does Economics Education Teach Students to Trust?

Last March, I discussed some of the studies on the question “Does Economics Make You a Bad Person?” (March 31, 2014). In the Spring 2014 issue of the American Economist, Bryan C. McCannon offers some additional evidence on the question in “Do Economists Play Well With Others? Experimental Evidence on the Relationship between Economics Education and Pro Social Behavior” (59:1, pp. 27-33). The journal is not freely available online, although many readers will have access through a library subscription.

The core of the paper is an experiment with 147 students, “conducted with undergraduate students at a small, private university in upstate New York.” McCannon teaches at St. Bonaventure University, so you can draw your own conclusions about the identity of the school. Some of the students had already taken “a significant amount of coursework in economics,” some were planning to study economics but hadn’t yet taken economics courses, and some had neither taken economics classes nor were planning to take them.

The students participated in a “trust game,” which has two players. The first player is given a certain amount of money–in this study, $5. The first player decides how much to give to the second player. But here’s a twist: the amount given to the second player is tripled. Then, the second player decides whether to give some money back to the first player. The game ends there. The students played the game five times, but with a random and changing selection of opponents each time.

Clearly, if the first player fully trusts the second one, the first player will give the full $5 to the second player. The amount will be tripled in transit, and the second player will be able to return the full $5, plus more, to the first player. However, a first player who is less trusting may give less than the full $5, or nothing at all, to the second player, because after all, the second player may just hold on to all the money and not return any of it. Thus, the question is whether students who have taken a lot of economics classes tend to be more or less trusting than other groups.

A typical finding in a trust game is that the first player gives half the money to the second player. The second player then returns about 80% of the money invested, and keeps the rest. Thus, trust often does not pay off for the first player–which helps to explain why they venture to pass along only half of the original sum.
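To make the payoffs concrete, here is a minimal sketch of one round under the typical pattern just described. It reads "80% of the money invested" as 80% of the amount the first player sent, which is the reading consistent with trust not paying off:

```python
ENDOWMENT = 5.00   # first player's initial stake
MULTIPLIER = 3     # the amount sent is tripled in transit

def trust_game(sent, return_share):
    """One round: player 1 sends `sent`; player 2 receives it tripled
    and returns `return_share` of the amount originally invested
    (an interpretive assumption; the study's exact rule may differ)."""
    received = sent * MULTIPLIER
    returned = return_share * sent
    p1_payoff = ENDOWMENT - sent + returned
    p2_payoff = received - returned
    return p1_payoff, p2_payoff

# Typical finding: half the stake is sent, and 80% of it comes back.
p1, p2 = trust_game(sent=2.50, return_share=0.80)
# p1 = 5.00 - 2.50 + 2.00 = 4.50, so the first player ends below $5:
# trust did not pay off.
```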

In this study, it turns out that when taking the role of the first player, “[e]ach class a student takes contributes approximately ten cents more.” When taking the role of the second player, “[t]aking more economics courses is associated with escalated rates of reciprocation. Approximately fifteen more cents is given back if given all five dollars, which represents a 3.5% increase.” McCannon also gave the participants an attitudinal survey before playing the game, and when analyzing the survey results together with the game results, he argues that those who are selecting themselves into economics classes are more likely to practice trust and reciprocity.

This study follows several common patterns in this literature. The group being studied is a relatively small group of students at one institution, so there is a reasonable question about whether the results would generalize to a broader population. The engine of inquiry is a structured “laboratory experiment,” in this case the trust game, and so there is a reasonable question about whether the motivations revealed in such studies would show up in other behaviors and contexts.

But although the results of these kinds of studies shouldn’t be oversold, it’s not shocking to me to find that those who study economics may be more likely to look at a trust game and see it as an opportunity for a cooperative exchange that can benefit both parties. Indeed, economists may well be more prone than non-economists to seeing the world as a place full of voluntarily agreed transactions that can represent a win for both parties.

New Business Establishments: The Shift to Existing Firms

A new business “establishment” occurs when a firm opens up at a new geographical location. A new “establishment” can thus occur for one of two reasons: either it’s a brand-new firm started by an entrepreneur (say, if I start my own fast-food restaurant), or it’s a new or additional location for an existing firm (say, when Subway or McDonald’s opens a new store). In “The Shifting Source of New Business Establishments and New Jobs,” an “Economic Commentary” written for the Federal Reserve Bank of Cleveland (August 21, 2014), Ian Hathaway, Mark E. Schweitzer, and Scott Shane make the point that when it comes to new establishments in the U.S. economy, existing firms are playing a larger role.

As a starting point, consider how the rate at which new establishments of both kinds are being born has changed over time. Hathaway, Schweitzer, and Shane note: “As the figure shows, in 1978, Americans created 12.0 new firms per business establishment. By 2011, the latest year data are available, they generated new firms at roughly half that rate—6.2 new firms per existing business establishment. By contrast, in 1978, Americans created 1.7 new outlets per existing establishment, while in 2011 they created 2.6—an increase of more than half.”

I wrote about the decline of the top line–the line showing the entrepreneurs starting new firms–in “The Decline of U.S. Entrepreneurship” earlier this month.

Is this shift toward establishments started by existing firms just what one might call a “Walmart effect”–that is, big-box stores in the retail industry driving out smaller Mom-and-Pop operations? Apparently not. The share of new establishments that are new outlets of existing firms is rising across all industries, not just retail.

This change has implications for the sources of job creation in the U.S. economy. Hathaway, Schweitzer, and Shane write: “To give a sense of the magnitude of the changing sources of job creation, we can estimate the number of new jobs that new firms would have created had they continued to generate jobs at the rate they did back in 1978 and the number of new jobs that new outlets would have created had they continued to generate jobs at the rate they did back in 1978. At the 1978 rate of new firm job creation, new firms would have produced an additional 2.4 million jobs in 2011, or 90 percent more. At the 1978 rate of new outlet job creation, new outlets would have produced 828,000 fewer jobs in 2011, or 34 percent less.”
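The quoted counterfactuals also let us back out the approximate actual 2011 job-creation levels. The figures below are derived from the percentages in the quote, not taken directly from the report, so treat them as rough:

```python
# "an additional 2.4 million jobs in 2011, or 90 percent more"
extra_firm_jobs = 2_400_000
firm_pct_more = 0.90
# "828,000 fewer jobs in 2011, or 34 percent less"
fewer_outlet_jobs = 828_000
outlet_pct_less = 0.34

# If 2.4 million extra is 90% more than actual, actual new-firm jobs were:
actual_firm_jobs = extra_firm_jobs / firm_pct_more        # about 2.67 million
# If 828,000 fewer is 34% less than actual, actual new-outlet jobs were:
actual_outlet_jobs = fewer_outlet_jobs / outlet_pct_less  # about 2.44 million
```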

As to underlying reasons for the change, these authors note that their underlying data doesn’t allow them to pinpoint any particular cause. However, “[W]e can offer one hypothesis: Growth in information and communication technologies since the late 1970s have facilitated the coordination of multiple establishments, offering existing businesses an advantage over new firms when setting up new establishments to meet the need for new business locations.”

This explanation seems plausible to me, but some other possibilities seem plausible, too. The new information and communications technology may also make it easier to establish brand names, so that firms in sectors like Finance, Insurance, and Real Estate or in Services are more likely to be part of a national firm, rather than opening up a stand-alone personal shop. I also suspect that the regulatory burden of starting a business has grown heavier over time, in terms of the rules, regulations, and permits that must be followed for the physical property of the business, for dealing with employees, for meeting requirements about the quality of the products provided, and for taxes and accounting. There’s a case for each individual rule and regulation. But when you pile them all up, the burden can become discouragingly high for a potential entrepreneur.

Property Rights and Saving the Rhino

South Africa is home to 75% of the world’s population of black rhinos and 96% of the world’s population of white rhinos. There must be some lessons for conservationists behind those statistics. Michael ’t Sas-Rolfes tells the story in “Saving African Rhinos: A Market Success Story,” written as a case study for the Property and Environment Research Center (PERC).

The story isn’t just about markets. In 1900, the white rhinoceros had been hunted almost to extinction, with about 20 remaining in a single game preserve in South Africa. The population slowly recovered a bit, and by the middle of the 20th century, there were enough to start relocating breeding groups of white rhinos to other national parks in South Africa, as well as private game ranches. In 1968, the first legal hunt of a white rhino was authorized.

But by the 1980s, Sas-Rolfes reports, a strange disjunction had emerged. In 1982, the Natal Parks Board had a list price for a white rhino of about 1,000 South African rands, but the average price paid by a hunter for a rhino trophy that year was 6,000 rands. Private game preserves were quick to take advantage of the arbitrage opportunity. The Natal Parks Board soon began auctioning its rhinos. In 1989, it was selling rhinos for 49,000 rand, but the average price to a hunter for a rhino trophy had risen to 92,000 rand. There were obvious questions about whether this system of raising and hunting rhinos was a useful tool from a broader environmental perspective.
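The size of that arbitrage opportunity is easy to see: a private preserve could buy a rhino at the Parks Board price and sell a trophy hunt at the market price. A quick sketch of the implied markups, using only the figures quoted above:

```python
# Prices in South African rands, as reported by Sas-Rolfes.
prices = {
    1982: {"board_price": 1_000, "trophy_price": 6_000},
    1989: {"board_price": 49_000, "trophy_price": 92_000},
}

def markup(year):
    """Ratio of the trophy-hunt price to the Parks Board price."""
    p = prices[year]
    return p["trophy_price"] / p["board_price"]

# In 1982 a trophy fetched six times the board's list price; by 1989,
# after the board moved to auctions, the gap had narrowed to under 2x.
gap_1982 = markup(1982)  # 6.0
gap_1989 = markup(1989)  # about 1.88
```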

But property rights and markets enter the story in a different way in 1991.

Before 1991, all wildlife in South Africa was treated by law as res nullius or un-owned property. To reap the benefits of ownership from a wild animal, it had to be killed, captured, or domesticated. This created an incentive to harvest, not protect, valuable wild species—meaning that even if a game rancher paid for a rhino, the rancher could not claim compensation if the rhino left his property or was killed by a poacher. . . . Recognizing the problems associated with the res nullius maxim, the commission drafted a new piece of legislation: the Theft of Game Act of 1991. This policy allowed for private ownership of any wild animal that could be identified according to certain criteria such as a brand or ear tag. The combined effect of market pricing through auctions and the creation of stronger property rights over rhinos changed the incentives of private ranchers. It now made sense to breed rhinos rather than shoot them as soon as they were received.

For a sense of how much difference these issues of property rights and incentives can make to conservation, consider the difference in populations between black and white rhinos. Sas-Rolfes explains: “Figure 2 shows trends in white rhino numbers from 1960 until 2007. Contrast those numbers with the black rhino, which mostly lived in African countries with weak or absent wildlife market institutions such as Kenya, Tanzania, and Zambia. In 1960, about 100,000 black rhinos roamed across Africa, but by the early 1990s poachers had reduced their numbers to less than 2,500. . . . Unprotected wild rhino populations are rare to non-existent in modern Africa. The only surviving African rhinos remain either in countries with strong wildlife market institutions (such as South Africa and Namibia) or in intensively protected zones.”

A strong demand for rhino horn remains, and especially since about 2008, rhinos across Africa face a rising threat from poachers. Here’s a figure from the conservation group Save the Rhino showing the level of rhino poaching in South Africa:

Along with the existing choices of “intensively protected zones”–which imply costly and not-very-corruptible protectors–and allowing for private game preserves, the other option is to seek to undercut the black market for rhino horn with a legal one. Other more controversial options discussed at the Save the Rhino website include de-horning rhinos, to make them less attractive to poachers, and perhaps even allowing legal sale of those rhino horns. Rhino horns are made of keratin, similar to the substance in fingernails and hair, and the horn could be removed every year or two. There are strong arguments on both sides of allowing legal sale of rhino horn: rather than undercutting the illegal market, it might instead make it easier for poachers to sell their illegally obtained horn. In the end, given that South Africa is now home to most of the world’s rhinos, I suspect that South Africa will end up making the decision about whether to proceed with these options.

Those interested in how property rights might be one of the tools for helping to protect endangered species might also want to check this post on "Saving Jaguars and Elephants with Property Rights and Incentives" (December 19, 2011).

Analyzing Fair Trade

\”Fair Trade\” is often little more than a slogan. If you\’d like a look at the analysis and reality behind that slogan, as it\’s working out in the real world, a good starting point is \”The Economics of Fair Trade,\”  by Raluca Dragusanu, Daniele Giovannucci, and Nathan Nunn, in the Summer 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I\’ve worked as Managing Editor of JEP since the first issue in 1987.)

Fair Trade is the practice whereby a nonprofit organization puts a label on certain products, certifying that certain practices were followed in their production. Common required practices include standards for worker pay, worker voice, and environmental protection. The biggest of these certifying organizations is Fairtrade International; a parallel label for the U.S. is run by Fair Trade USA. Other labelling standards, each with its own priorities, include Organic and Rainforest Alliance. A producer who joins Fair Trade receives several benefits. When a Fair Trade producer sells to a Fair Trade buyer, the producer can receive a minimum price, which includes a premium now set at 20 cents per pound for coffee. Fair Trade buyers are also supposed to be more willing to agree to long-term purchasing contracts and to provide credit to producers.

At some level, Fair Trade and other certification programs are just a case of the free market at work. With the certification, consumers who are willing to pay something extra for products produced in a certain way become able to find those products. A variety of evidence suggests that at least some consumers value this option. For example, in one study the researchers were able to add Fair Trade labels, or not, and alter prices, or not, for bulk coffee sold in 26 U.S. groceries. They found that at a given price, sales were 10% greater when the coffee was labeled as Fair Trade, and that demand for Fair Trade coffee was less sensitive to increases in price.

So what concerns or issues might be raised about Fair Trade? I'll list some of the issues here as I see them, based on evidence from the Dragusanu, Giovannucci, and Nunn paper. As they note, "the evidence is admittedly both mixed and incomplete"–so some of the concerns are tentative.

Fair Trade and other certification programs affect a relatively small number of workers.

The most important Fairtrade products, measured by the number of producers and workers involved in growing them, are coffee (580,000 producers and workers covered), tea (258,000), and cocoa (141,000). Fair Trade standards also cover smaller numbers of producers of seed cotton, flowers and plants, cane sugar, bananas, fresh fruit, and nuts. Obviously, compared to the total number of low-income agricultural producers in developing and emerging economies–measured in the billions of workers–the share of production covered by Fair Trade certification is quite small.

Fair Trade does seem to provide higher prices and greater financial stability, at least when farmers can sell at the minimum price. 

A variety of small-scale studies in many countries suggest that Fair Trade farmers do earn more. However, there is a difficult problem of cause-and-effect here. If the more sophisticated and motivated farmers who are well-positioned by their crops and land to carry out Fair Trade practices are the ones who sign up, perhaps they would have been able to receive higher prices even without the certification. There are a variety of methods to adjust for differences across farmers: age of farmer, education of farmer, size of crop, before-and-after comparisons of entering a certification program, and the like. After such adjustments, a few studies no longer find that Fair Trade farmers earn more, but the most common finding remains that a price premium continues to exist.

The research in this area also points out that just because a producer is Fair Trade-certified does not mean that the producer can necessarily sell all of their crop as Fair Trade. The buyer determines what quantity of certified product to purchase at the Fair Trade price. In addition, while some buyers provide credit, there is some evidence that buyers who then sell to big firms like Starbucks and Costco are less likely to offer credit or long-term purchasing contracts. Again, farmers overall do seem to gain financial stability from Fair Trade certification, but what they gain in reality is often less than a simple recitation of the guidelines might suggest.

Fair Trade does seem to promote improved environmental practices. 

Again, the evidence is from small-scale studies in various countries, but Fair Trade certified producers do seem more likely to use composting, to use contouring and terraces to reduce erosion, to have systems for purifying runoff from fields, to make use of windbreaks and shade trees, and so on.

While Fair Trade helps producers, the effects on workers and worker organizations are more mixed. 

Fair Trade organizations sometimes operate through cooperatives, in which farmers pass their output to the cooperative, which then negotiates the sales. A variety of studies find higher levels of tension between farmers and Fair Trade cooperatives, with farmers complaining about lack of communication and poor decision-making.

In addition, many producers of Fair Trade products hire outside workers, at least seasonally. As Dragusanu, Giovannucci, and Nunn write: "The evidence on the distribution of the benefits of Fair Trade remains limited, but the available studies suggest that, within the coffee industry, Fair Trade certification benefits workers little or not at all." A couple of months ago, I blogged on a recent study making this point in "Does Fair Trade Reduce Wages?" However, there is also some evidence that in non-coffee crops, often grown in plantation agriculture, the certification standards can improve working conditions and reduce the use of child labor.

How might entry by producers affect Fair Trade and other certification programs in the long run? 

If producers who operate in a certain way can earn higher profits, then any economist will predict that more producers will choose to operate in that way. But as more producers enter and the supply of the product produced in that way rises, it will tend to drive down the market price, until the opportunities for higher profits are competed away. At least so far, this doesn't seem to have happened for Fair Trade. But as Dragusanu, Giovannucci, and Nunn write: "This link between free entry and rents provides an interesting dilemma for certification agencies. On the one hand, they wish to induce the spread of socially and environmentally responsible production as much as possible. On the other hand, they may also wish to structure certain limits to entry so that they can continue to maintain higher-than-average rents for certified producers."

How might entry by additional certification organizations affect Fair Trade in the long run?

There is considerable overlap between the various certification organizations: for example, 80% of the Fair Trade-certified producers are also certified as Organic producers. But multiple certifications mean multiple reports and audits, which can be a real burden for farmers in low-income countries. Some for-profit companies are starting their own certification programs, rather than deal with an outside certification organization. At some point, there is a risk that farmers become unwilling to deal with a plethora of organizations, and that consumers become cynical about whether many of these organizations represent something meaningful.

Using Twitter for Perceiving Unemployment in Real Time

The official unemployment rate, released early each month, is based on a monthly survey. It's a good survey, even an excellent survey, but the data is inevitably a month old. In addition, any survey is somewhat constrained by the specific wording of its questions and definitions. Would it be possible to get a faster and reasonably accurate view of labor market conditions by looking at mentions of certain key terms on Twitter and other social media? The University of Michigan Economic Indicators from Social Media project has started a research program on this topic. The first research paper up at the site is "Using Social Media to Measure Labor Market Flows," by Dolan Antenucci, Michael Cafarella, Margaret C. Levenstein, Christopher Ré, and Matthew D. Shapiro, which is based on 19.3 billion Twitter messages sent between July 2011 and November 2013–about 10% of all the tweets sent in that time.

For those who want detail on how the official unemployment rate is calculated, the Bureau of Labor Statistics published a short memo on "How the Government Measures Unemployment" in June 2014. Basically, the government has been conducting the Current Population Survey (CPS) every month since 1940. As the BLS describes it:

There are about 60,000 eligible households in the sample for this survey. This translates into approximately 110,000 individuals each month, a large sample compared to public opinion surveys, which usually cover fewer than 2,000 people. The CPS sample is selected so as to be representative of the entire population of the United States … Every month, one-fourth of the households in the sample are changed, so that no household is interviewed for more than 4 consecutive months. After a household is interviewed for 4 consecutive months, it leaves the sample for 8 months, and then is again interviewed for the same 4 calendar months a year later, before leaving the sample for good. As a result, approximately 75 percent of the sample remains the same from month to month and 50 percent remains the same from year to year. This procedure strengthens the reliability of estimates of month-to-month and year-to-year change in the data. Each month, highly trained and experienced Census Bureau employees contact the 60,000 eligible sample households and ask about the labor force activities (jobholding and job seeking) or non-labor force status of the members of these households during the survey reference week (usually the week that includes the 12th of the month).
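The 75 percent and 50 percent overlap figures follow directly from that "4-8-4" rotation pattern. A minimal sketch, assuming an idealized design in which exactly one rotation group enters the sample each month, verifies the arithmetic:

```python
# Sketch of the CPS 4-8-4 rotation design (idealized: one rotation group
# enters per month). A group entering in month s is interviewed in months
# s..s+3 and s+12..s+15, i.e., when the elapsed months fall in IN_SAMPLE.
IN_SAMPLE = {0, 1, 2, 3, 12, 13, 14, 15}

def groups_in_sample(month):
    """Entry months of the 8 rotation groups interviewed in a given month."""
    return {month - k for k in IN_SAMPLE}

def overlap(months_apart, month=100):
    """Fraction of rotation groups shared between two survey months."""
    a = groups_in_sample(month)
    b = groups_in_sample(month + months_apart)
    return len(a & b) / len(a)

print(overlap(1))   # month-to-month overlap: 0.75
print(overlap(12))  # year-to-year overlap:   0.5
```

Six of the eight rotation groups present in one month are interviewed again the next month, and four of the eight are interviewed again a year later, matching the BLS figures.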

Although the headline unemployment rate and total jobs number get most of the attention, the survey also tries to identify "discouraged" workers, who would actually like a job but have given up looking, as well as part-time workers who would prefer a full-time job.

At present, perhaps the main source of data on labor markets that comes out more frequently than the unemployment rate itself is the data on initial claims for unemployment insurance, which comes out weekly (for example, here). However, this data can be an imperfect indicator–or as economists would say, a "noisy" indicator–of the actual state of the labor market. Not everyone who becomes unemployed applies for unemployment insurance or is eligible for it, and many of the long-term unemployed are no longer eligible for unemployment insurance. So the practical question about using Twitter or other social media to look at labor markets is not whether they offer a perfect picture, but whether the information from such estimates is less "noisy" and more useful than the data from initial claims for unemployment insurance.

The University of Michigan researchers searched the 19.3 billion tweets for terms of four words or less related to job loss. Examples include four-word blocks of text containing the words axed, canned, downsized, outsourced, pink slip, lost job, fired job, been fired, laid off, and unemployment. Some experimentation and analysis is involved in choosing terms. For example, it turned out that "let go" was used much more frequently than any other term on this list, presumably because there were many four-word blocks of text that used "let" and "go" but weren't related to labor market issues.
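The kind of matching described above–scanning each tweet for a short window of words containing a job-loss term–can be sketched in a few lines. The term list and example tweets here are illustrative placeholders, not the study's actual lexicon:

```python
# Sketch: flag tweets containing a job-loss term within any 4-word window.
# TERMS below is an illustrative subset, not the researchers' actual list.
TERMS = [("laid", "off"), ("lost", "job"), ("fired",), ("pink", "slip")]

def has_job_loss_signal(tweet, window=4):
    """Return True if any term's words all appear in one 4-word block."""
    words = tweet.lower().split()
    for i in range(len(words)):
        block = words[i:i + window]
        if any(all(w in block for w in term) for term in TERMS):
            return True
    return False

print(has_job_loss_signal("just got laid off from my job today"))   # True
print(has_job_loss_signal("let go of your worries and go outside"))  # False
```

The second example hints at the "let go" problem the authors describe: a naive window match on "let" and "go" would fire on phrases that have nothing to do with losing a job, which is why the term list requires experimentation.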

Each week, the Michigan group plans to publish a comparison between the official unemployment insurance claims data and a prediction based purely on its Twitter-based methodology. Here's the current figure:

As you can see, the patterns are similar, which is somewhat remarkable: the social media content tracks the official statistics fairly closely. The patterns are not identical, which is unsurprising, because they are after all measuring different things. The interesting question then becomes: Is there some additional information or value to be gained about the state of the labor market from looking at the social-media-based index?

In certain specific cases, the answer seems clearly to be "yes." For example, the authors explain that the official data on unemployment insurance claims showed a big drop in September 2013 that occurred because of a data processing issue in California–that is, it wasn't a real effect. The social media prediction shows no such decline. More broadly, the authors look at the predictions from market experts a few days before the data on unemployment insurance claims comes out, and they find that the social media measure would improve these predictions.

The researchers are looking at how social media might reflect various other measures of labor markets, including job search, job postings, and how labor markets react to short-term events like Hurricane Sandy. Of course, the goal is to develop methods that give a reasonably reliable real-time sense of how the economy is evolving based on immediately available data.

For those interested in doing their own research project based on collecting publicly available data from the web, a useful overall starting point is the article by Benjamin Edelman, "Using Internet Data for Economic Research," in the Spring 2012 issue of the Journal of Economic Perspectives, where I have worked as Managing Editor since the first issue back in 1987. As with all JEP articles, it is freely available online compliments of the American Economic Association. Social science researchers are busily writing programs that collect data on search queries, on how prices change in a wide variety of databases, and much more.

Homeownership Rates Come Back Down the Mountain

Back in the mid-1990s, I thought of the U.S. homeownership rate as fairly constant, holding at about 64-65% most of the time. In the fourth quarter of 1995, for example, the homeownership rate was a bit above this range at 65.1%. But looking back at Census Bureau data for the fourth quarter of various years (see Table 14 here), the homeownership rate had been 64.1% in 1990, 63.5% in 1985, 65.5% in 1980, 64.5% in 1975, 64.0% in 1970, and 63.4% in 1965.

Since 1995, U.S. homeownership rates have climbed a mountain–speaking graphically–and have now come back down. Here's a figure from the Census Bureau's July 29 report on "Residential Vacancies and Homeownership in the Second Quarter 2014." The homeownership rate checked in at 64.7% in the second quarter of 2014.

Here\’s a slightly different perspective from the same report, looking at the vacancy rate–that is what share of rental housing and of homes are vacant.

At about the same time that the homeownership rate was rising, from the mid-1990s to the mid-2000s, the vacancy rate for homes was also rising–which suggests that an enormous boom in residential construction was occurring at the time.

It\’s worth remembering that as homeownership rates climbed up one side of the mountain from about 1995 to 2004, the change was viewed as a success by  both parties. Bill Clinton had a National Homeownership Strategy  which pushed to make it easier for people with lower incomes to own a home. As Clinton said in announcing the initiative:

You want to reinforce family values in America, encourage two-parent households, get people to stay home? Make it easy for people to own their own homes and enjoy the rewards of family life and see their work rewarded. This is a big deal. This is about more than money and sticks and boards and windows. This is about the way we live as a people and what kind of society we're going to have. … The goal of this strategy, to boost home ownership to 67.5 percent by the year 2000, would take us to an all-time high, helping as many as 8 million American families across that threshold. … Our home ownership strategy will not cost the taxpayers one extra cent. It will not require legislation. It will not add more Federal programs or grow Federal bureaucracy. It's 100 specific actions that address the practical needs of people who are trying to build their own personal version of the American dream, to help moderate income families who pay high rents but haven't been able to save enough for a downpayment, to help lower income working families who are ready to assume the responsibilities of home ownership but held back by mortgage costs that are just out of reach, to help families who have historically been excluded from home ownership.

The Clinton initiative, together with the booming U.S. economy in the second half of the 1990s, reached that goal of a 67.5 percent homeownership rate by the year 2000. When George W. Bush became president, he pushed for an "ownership society," with policies to help people with down payments on a home and to increase the number of minority homeowners. As Bush said in a 2003 speech:

"This Administration will constantly strive to promote an ownership society in America. We want more people owning their own home. It is in our national interest that more people own their own home. After all, if you own your own home, you have a vital stake in the future of our country."

When the homeownership rate peaked at 69.4% in the second quarter of 2004, and for some months afterward, there was strong bipartisan support for the policies that had raised homeownership rates. At the time, existing homeowners were also largely delighted with the swelling prices of their homes.

Of course, the underlying problems have now become obvious. It's hard to oppose policies that give low-income people a better chance to own a home. But if those policies involve encouraging those with lower incomes to take out subprime mortgages, so that the people you claim to be helping will actually be carrying overly large debt burdens and become highly vulnerable to a downturn in housing prices, then this way of pushing for higher rates of homeownership is a poisoned chalice. I'm very supportive of building institutions and laws that will make it easier for those with low and medium incomes to accumulate financial and nonfinancial assets, including a home. But let's focus on ways of encouraging actual saving, not ways of encouraging excessive borrowing.

US Becomes Oil and Gas Production Leader

OK, I admit that it\’s arbitrary to compare countries according to their oil and gas production, setting aside coal, hydroelectric, nuclear, and renewables like solar and wind. Still, as someone who started paying attention to economic issues during the OPEC-related oil price shocks of the 1970s, this figure shows an outcome that I never expected to see. Taking oil and gas together, the U.S. has now surpassed Russia and Saudi Arabia as the world\’s leading producer.

This figure was produced by the Stanford Institute for Economic Policy Research (SIEPR) as part of its annual "Facts at a Glance" chartbook. For purposes of this comparison, natural gas has been converted into an energy-equivalent amount of oil: specifically, 5,800 cubic feet of gas is equal to about 1 barrel of oil.
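That conversion puts gas output on a common footing with oil in "barrels of oil equivalent" (BOE). A quick sketch of the arithmetic, using made-up placeholder production figures rather than SIEPR's actual numbers:

```python
# Convert natural gas volumes into barrels of oil equivalent (BOE),
# using the conversion factor cited above: 5,800 cubic feet ~ 1 barrel.
CUBIC_FEET_PER_BOE = 5_800

def total_boe(oil_barrels, gas_cubic_feet):
    """Combined oil-and-gas output expressed in barrels of oil equivalent."""
    return oil_barrels + gas_cubic_feet / CUBIC_FEET_PER_BOE

# Illustrative (made-up) daily figures: 10 million barrels of oil
# plus 70 billion cubic feet of natural gas.
print(total_boe(10e6, 70e9))  # about 22.07 million BOE per day
```

The point of the exercise is just that a large gas producer can add the equivalent of many millions of barrels per day to its oil total, which is how the U.S. moves to the top of a combined oil-and-gas ranking.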

Of course, the economic consequences of being the largest energy producer will be different for the U.S. than for Russia or Saudi Arabia. For example, the enormous US economy uses more energy than it produces, and thus remains an energy importer, while the economies of Saudi Arabia and Russia depend on energy exports. But before I try to figure out what it all means, I need to spend some time just wrapping my head around the idea of the U.S. as the world's leading oil and gas producer.