The Federal Reserve: Was it a Mistake?

The Federal Reserve reached its 100th anniversary this year, an occasion both for thinking about what has happened and for thinking about what might have been.

If you would like an overview of the substantial shifts and changes in the Federal Reserve over time, a useful starting point is a symposium in the Fall 2013 issue of the Journal of Economic Perspectives. For example, Ben Bernanke writes about the history of the Fed with an emphasis on five "great" episodes: "the Great Experiment of the Federal Reserve's founding, the Great Depression, the Great Inflation and subsequent disinflation, the Great Moderation, and the recent Great Recession." Gary Gorton and Andrew Metrick look at "The Federal Reserve and Panic Prevention: The Roles of Financial Regulation and Lender of Last Resort." Julio Rotemberg considers the goals and tactics that the Federal Reserve has used over time in seeking to ameliorate swings of the business cycle. Barry Eichengreen looks at when the Fed has acted because of concerns over international consequences, and predicts that the international dimension of monetary policy will play a larger role for the Fed in the future. (Full disclosure: As usual, all JEP papers are freely available on-line, courtesy of the American Economic Association. My job as Managing Editor of the JEP has been paying the household bills since 1986.)

But if you are looking for what is in some ways an even bigger-picture discussion of the Fed, the November 2013 issue of Cato Unbound tackles the question: Was the Fed a mistake? The main essay is written by Gerald P. O'Driscoll, and comments follow from Lawrence White, Scott Sumner, and Jerry Jordan. O'Driscoll summarizes a theme that runs through the essays this way: "The 19th century economic journalist and Economist editor Walter Bagehot thought it would have been better if the Bank of England had never been created. In Lombard Street, Bagehot argued that a decentralized system of many banks of approximately equal size would have been preferable. … Bagehot's famous dictum that Bank of England must lend freely at penalty interest rates in times of panic was a second-best solution to a problem caused by centralizing reserves in that institution. Having said that, Bagehot argued that, once created, it was not possible to abolish central banking."
Similarly, O'Driscoll argues that while certain banking reforms were needed back in 1913, the creation of a central bank was a mistake–albeit a mistake that it may not now be possible to reverse.

What does the case against the Fed look like?

For substantial portions of the Fed's history, its main role has been to help the federal government borrow: obvious examples include during and after World War I, during and after World War II, and during and after the Great Recession. Indeed, O'Driscoll argues that one main reason for the creation of central banks is that they enable high levels of government borrowing, which then impose later economic costs.

At other times in Fed history, its actions have contributed to sometimes quite severe swings in business cycles. For example, misguided Fed policies probably contributed to the large business cycle swings of the 1920s; to the Great Depression of the early 1930s and the steep recession of 1937-38; to the "stagflation" combination of inflation and recession in the 1970s; and to the climate of overborrowing that created fertile soil for the Great Recession. By O'Driscoll's reckoning, the Fed has had about 30 years out of its first century when it wasn't either just a mechanism for high government borrowing or enacting deeply misguided policies: the 1950s, and the period from the mid-1980s to the mid-2000s.

In short, this indictment runs, it is at a minimum not obvious from the historical record that the Fed has offered more benefits than costs. Of course, a natural response to this charge is to say something like: "Well, I just can't imagine not having a central bank. Everyone who's anyone has a central bank." But a lack of imagination in thinking about alternative monetary institutions is not much of a defense of the current central banking institutions.

O'Driscoll argues that a modern banking system does need constraints, so that the banks don't run amok in overborrowing and create financial instability. He suggests a commodity standard, like a gold standard, as such a constraint. My own sense is that a modern central bank–that is, not the Fed as it existed in the 1920s and 1930s–is clearly preferable to a commodity standard. Moreover, in the extraordinarily complex world of modern finance, I don't think a commodity standard would be anywhere close to a sufficient level of financial regulation.

But there is also no reason to expect that the Fed will stand still in its goals and tactics. The history of the Fed over the last 100 years shows lots of change and evolution over time. As recently as 1994, for example, the Federal Open Market Committee just took actions to adjust the federal funds interest rate, but did not release any statement explaining why. Of course, the Fed has also shown a high capacity for innovation (for better or worse!) during the Great Recession and since then, by announcing that a policy of near-zero interest rates would be sustained into the future and through its quantitative easing policies of buying Treasury debt and mortgage-backed securities directly. I don't expect the Fed to be abolished. But I wouldn't be surprised if the Fed in a decade or two is operating in substantially new and different ways.

A Good Comfortable Road Until Compelled to Believe Otherwise

Last summer my wife and I, together with our children and my parents and two guides, did a week-long canoe trip down a wild and scenic stretch of the Missouri River in Montana. One day, we saw a couple of day-trippers on the river. The rest of the week there were no people, no cars, no electronic devices, and thanks to the guides, no meal planning, prep, or clean-up. We were following the route taken by William Clark and Meriwether Lewis in their exploration of 1804-1806, which gave us an excuse to sample some of their journals. At one campsite, we were near the place where Clark wrote in his journal a bit of philosophy that I've been mulling over as I seek to appreciate my blessings fully during this holiday season. Clark wrote:

"[A]s I have always held it little Short of criminality to anticipate evils I will allow it to be a good comfortable road untill I am compelled to beleive otherwise."


The full story is that the expedition had stopped at what they called Bullwhacker Creek, and Clark climbed up the nearest hill and saw snow-capped mountains in the distance. He (mistakenly) thought they were the Rocky Mountains, and ruminated in his diary entry of May 26, 1805, about the joy of being so close to reaching the mountains and the knowledge of hardships yet to come. Here's a fuller quotation:

"I crossed a Deep holler and assended a part of the plain elivated much higher than where I first viewed the above Mountains; from this point I beheld the Rocky Mountains for the first time with certainty, I could only discover a fiew of the most elivated points above the horizon, the most remarkable of which by my pocket compas I found bore S. 60 W. those points of the rocky Mountain were covered with Snow and the Sun Shown on it in such a manner as to give me a most plain and satisfactory view. whilst I viewed these mountains I felt a secret pleasure in finding myself so near the head of the heretofore conceived boundless Missouri; but when I reflected on the difficulties which this snowey barrier would most probably throw in my way to the Pacific Ocean, and the sufferings and hardships of my self and part in them, it in some measure counter ballanced the joy I had felt in the first moments in which I gazed on them; but as I have always held it little Short of criminality to anticipate evils I will allow it to be a good comfortable road untill I am compelled to beleive otherwise."


I'm not much of a spiritual person, perhaps even less so than your average economist. But I do think that it is easy to spend an inordinate amount of one's time counting up problems and slights and injustices, past, present and future. In comparison, it requires (at least for me) some discipline and effort to count one's blessings. But the blessings matter more. I hope this holiday season and the year to come can feel like a good comfortable road for you.


Carbon Capture and Storage: An Update

If carbon capture and storage could be accomplished at a reasonable cost–and that "if" is absolutely enormous–the implications for the question of climate change are extraordinary. At least some of any needed reduction in emissions could happen while still burning fossil fuels. The Australian-based Global CCS Institute describes the potential and the issues facing carbon capture and storage in its annual report, The Global Status of CCS–2013. For the record, this Institute was established in 2009 with initial funding from the government of Australia, and its members "include national governments, global corporations, small companies, environmental non-government organisations, research bodies and universities." It seems to me a bit of a cheerleader for the potential of carbon capture and storage approaches, but it still offers a useful overview of where the technology stands at present.

The report envisions a future in which CCS technology would account for perhaps 15% of reduced carbon emissions by 2050, compared with the baseline that would otherwise exist. The argument is that if carbon emissions are going to be cut, then CCS technology will be cost-effective compared with some of the other options. The report argues (citations omitted): 

"CCS has strong potential to be cost competitive in a low-carbon future. The International Energy Agency (IEA) has estimated that the exclusion of CCS as a technology option in the electricity sector alone would increase mitigation costs by around US$2 trillion by 2050. This is because many alternatives to CCS as a low-emissions technology in the electricity sector are more expensive. …
Beyond the electricity sector, it is unlikely that energy-related and process CO2 emissions can be eliminated without CCS. This is because CCS is the only large-scale technology available to make deep emissions cuts in several industrial sectors (such as iron and steel and cement). Industrial sector emissions account for more than 20 per cent of current global CO2 emissions. It follows that the widespread deployment of CCS in the power and industrial sectors in the coming decades is imperative to achieving a low-carbon energy future at least cost."

I tend to think of carbon capture and storage as a largely unproven technology. Here, the report dances a delicate line. It argues on one hand that the basics of CCS are well-understood. But it also argues that very large changes are needed if CCS is to be a major player in future carbon reductions. Here's a sample of the encouraging discussion:

"CCS is often mistakenly perceived as an unproven or experimental technology. In reality, the technology is generally well understood and has been used for decades at a large scale in certain applications. For example:

  • large–scale CO2 separation is undertaken as a matter of routine in gas processing and many industrial processes
  • CO2 pipelines are an established technology, on land and under the sea
  • large–scale injection and geological storage of CO2 has been safely performed in saline reservoirs for more than 15 years, and in oil and gas reservoirs for decades.

There are currently 12 operational large-scale CCS projects around the world, which have the capacity to prevent 25 million tonnes a year (Mtpa) of CO2 from reaching the atmosphere. The key technical challenge for widespread CCS deployment is the integration of component technologies into successful large-scale demonstration projects in new applications such as power generation and additional industrial processes."

There's a lengthy discussion of the large-scale projects that exist, and others that might be considered. There is also a lengthy discussion of various hurdles that need to be overcome. Here are some examples:

"More than 90 per cent of the overall cost of CCS can be driven by expenses related to the capture process. … There is a variety of R&D programs focused on developing new and more cost-effective capture technologies. For example, the US DOE's National Energy Technology Laboratory (NETL) has R&D programs designed to explore new solvents, membranes, and sorbents that could be used for CO2 capture. These programs focus strongly on developing technologies at the bench scale, and then funding their transition through to pilot scale."

"For CCS to meet the longer term climate challenge of restricting global warming to less than 2°C, the estimated magnitude of the CO2 transportation infrastructure that will need to be built in the coming 30-40 years is 100 times larger than currently operating CO2 pipeline networks."

"[T]here is an urgent need for policies and funded programs that encourage the exploration and appraisal of significant CO2 storage capacity."

"Between 2008 and 2012, 'policy leader' governments committed more than US$22 billion in direct funding to large-scale CCS demonstration projects. … In late 2009, the scale of funding support being considered globally exceeded US$30 billion. However, as the global financial crisis deepened, some funding mechanisms were cancelled before they could be legislated … To date, not all of the available funding has been taken up. In several jurisdictions, some of the funding is no longer available due to legislative limits or changing government priorities. In some situations, the value of available government commitments has decreased due to the structure of funding mechanism or the funding is difficult to access due to the design of programs. In total, funding commitments have been reduced by more than US$7 billion …"

In short, carbon capture and storage technology needs lots of basic research on the technologies for capturing carbon, on the infrastructure for transporting it, and on where it would be stored. Meanwhile, funding commitments are dropping. The report sums up the situation this way:

"CCS is at something of a crossroads. For those immersed in a highly challenging environment with often slow-moving funding and policy commitments, it would be very easy to put the commercial deployment of CCS in the 'too difficult' basket. However, for those with an eye to the very real challenges of creating a sustainable low-carbon energy future, the commercial deployment of CCS is non-negotiable. The value proposition for CCS does exist, but it is complex and challenging to communicate …"

I finished the report feeling that the hurdles in front of carbon capture and storage becoming commercially viable were every bit as large as I had previously thought, if not larger. But there is some interesting work happening: a report in the December 2013 issue of Scientific American described some research in which carbon dioxide is injected into underground basalt formations, where minerals would interact with the carbon dioxide to turn the carbon into a solid–thus eliminating any risk it might leak out in the future. The risk of adverse climate change scenarios from higher carbon emissions is substantial enough that no possible solutions, even partial solutions, should be neglected.

The TSA: Mistakes Were Made

As so many of us pass through airports at this time of year, contemplate the Transportation Security Administration. The politics of creating the TSA were straightforward. The terrorist attacks of September 11, 2001, occurred. By November 2001, the Aviation and Transportation Security Act was passed into law, creating the TSA. This looks like a classic example of that old benighted syllogism of public policy: "Something must be done. Here is something. Therefore, it must be done." Chris Edwards reviews the history and makes the case for "Privatizing the Transportation Security Administration" (Cato Institute, Policy Analysis No. 742, November 19, 2013).

"TSA's main activity is operating security screening at more than 450 commercial airports across the nation. The agency also runs the Federal Air Marshal Service (FAMS), analyzes intelligence data, and oversees the security of rail, transit, highways, and pipelines. TSA has 62,000 employees and an annual budget in 2013 of $7.9 billion."

No other high-income country has put airport screening under central government control. Edwards writes: "More than 80 percent of Europe's commercial airports use private screening companies, including those in Britain, France, Germany, and Spain. The other airports in Europe use their own in-house security, but no major country in Europe uses the national government's aviation bureaucracy for screening. Europe's airports moved to private contracting during the 1980s and 1990s after numerous hijackings and terrorist threats, and it has worked very well. Canada also uses private screening companies at its commercial airports, and some airports also use private firms for general airport security. After 9/11, the government created the Canadian Air Transport Security Authority, which oversees screening at the country's 89 commercial airports. But the screening itself is carried out by three expert private firms—G4S, Garda, and Securitas—which are each responsible for a group of particular Canadian airports."

There's an old but useful rule for deciding whether a desired activity should be carried out by government itself or whether the government should instead pay others to do it: "Government should steer, not row." Or to put it another way: do I feel more secure in a situation where undercover government inspectors are continually testing private airport security companies, who know they are likely to lose their contract if they fail to perform? Or in a situation where I'm depending on one branch of government to inspect the security efforts of another branch of government, with all the inter-bureaucratic incentives not to embarrass each other, and where no one in a position of authority is going to lose their job or their annual bonus if security fails to perform?

In addition to pure security issues, a contracting approach to airport security has other possible benefits. One is that a contracting firm has better incentives to provide friendly and rapid customer service than does a government agency that can't be fired. Another is that local control allows more rapid adjustment of schedules across the 450 airports for days and times when demand is high or low, compared with a rule in which all such adjustments need to be cleared with a faraway bureaucracy. Improved methods of providing security–either less time or different equipment–are more likely to emerge from private firms competing for work at airports across the country than from a federal bureaucracy.

Not unexpectedly, Edwards cites a litany of reports that suggest poor management at the TSA. Out of 240 federal agencies, the TSA ranked 232nd in employee satisfaction. A GAO report found large increases in employee misconduct at TSA. A former TSA chief has reported that the agency is "hopelessly bureaucratic." The TSA bought 207 "puffer" machines to detect explosives, but then decided they didn't work well and mothballed them. Before TSA bought the full-body scanners, GAO requested that it do a cost-benefit study, but none was done. In June 2011, a federal judge required that such a study be done, but it still hasn't been done. At least one outside study suggests that the machines would fail a cost-benefit analysis quite dramatically.

Maybe we needed more airport screeners in the aftermath of 9/11, although I'm not familiar with any evidence on the point. But did we need the number of airport security screeners to more than double from 16,000 before the TSA to more than 40,000 immediately after, and 53,000 today?

The 2001 legislation allowed airports to opt out, and five immediately did so–with San Francisco being the largest. Now there are 16 airports that have opted out, and others are applying to do so. There's no evidence that the airports which have opted out of the TSA are any less secure. And there aren't any airports moving in the other direction, from privatized airport security to join up with the TSA. If the TSA can't be abolished outright, at least such opt-outs can be allowed and encouraged.

Real Tree or Artificial Tree?

My family always had real Christmas trees when I was growing up. I've always had real trees as an adult. Living in my own little bubble, it thus came as a shock to me to learn that, of the households that have Christmas trees, over 80% use an artificial tree, according to Nielsen survey results commissioned by the American Christmas Tree Association (which largely represents sellers of artificial trees). But in a holiday season where the focus is often on whether we are naughty or nice, what choice of tree has greater environmental impact?

There seem to be two main studies often quoted on this subject: "Comparative Life Cycle Assessment (LCA) of Artificial vs. Natural Christmas Tree," published by a Montreal-based consulting firm called ellipsos in February 2009, and "Comparative Life Cycle Assessment of an Artificial Christmas Tree and a Natural Christmas Tree," published in November 2010 by a Boston consulting firm called PE Americas on behalf of the aforementioned American Christmas Tree Association. Both studies assume the artificial tree is manufactured in China and transported to North America. (If readers know of other recent published studies, please send me a link!)

(Note: This post first appeared on December 24, 2012. It has been slightly edited.)

Here are some of the main messages I take away from these studies:

1) One artificial tree has greater environmental impact than one natural tree. However, an artificial tree can also be re-used over a number of years. Thus, there is some crossover point: if the artificial tree is used for long enough, its environmental effect is less than that of an annual series of natural trees. For example, the ellipsos study finds that an artificial tree would need to be used for 20 years before its greenhouse gas effects would be less than those of an annual series of natural trees. The PE Americas study offers a wide range of scenarios, and summarizes the situation "for the base case when individual car transport distance for tree purchase is 2.5 miles each way. Because the natural tree provides an environmental benefit in terms of Global Warming Potential when landfilled, and Eutrophication Potential when composted or incinerated, there is no number of years one can keep an artificial tree in order to match the natural tree impacts in these cases. … For all other scenarios, the artificial tree has less impact provided it is kept and reused for a minimum between 2 and 9 years, depending upon the environmental indicator chosen."
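The crossover logic here is just a break-even calculation. A minimal sketch, using placeholder numbers that are purely illustrative (they are not figures from either study):

```python
import math

def break_even_years(artificial_one_time, natural_per_year):
    """Smallest whole number of years of reuse after which an artificial
    tree's one-time impact is no larger than the cumulative impact of
    buying a natural tree every year, on a single indicator (e.g. kg CO2e).
    Both arguments are hypothetical inputs, not values from the studies."""
    if natural_per_year <= 0:
        # Mirrors the landfill case described above: if the natural tree is
        # a net sink, no amount of reuse lets the artificial tree match it.
        return None
    return math.ceil(artificial_one_time / natural_per_year)

# Illustrative placeholders: 48 kg CO2e up front vs. 3 kg CO2e per year.
print(break_even_years(48.0, 3.0))   # 16 years of reuse to break even
print(break_even_years(48.0, -1.0))  # None: no crossover exists
```

The "no crossover" branch is why the studies report a range of answers rather than a single number: the break-even year depends entirely on which environmental indicator and end-of-life scenario you feed in.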

2) The full analysis needs to look at effects across the full life cycle of the tree, whether natural or artificial. This seems to involve the following steps:

  • Under what conditions is the tree manufactured or cultivated, with what use of energy, fertilizer, and logging methods? 
  • By what combination of transportation mechanisms is the finished tree moved to the home? A substantial share of artificial trees are manufactured in China and then shipped to North America.
  • What are the different issues in use of the tree, including use of water and emissions of fumes?
  • What is the end-of-life for the tree? For example, the carbon in a natural tree will be stored for some decades if the tree goes into a landfill, but not if it is composted or incinerated.

3) The full analysis also needs to look at a range of possible effects. For example, the PE Americas study looked at "global warming potential (carbon footprint), primary energy demand, acidification potential, eutrophication potential, and smog potential." Here's a figure showing 14 categories of analysis from the ellipsos study, with a comparison between natural and artificial trees on a number of dimensions.

The ellipsos study sums up this way: "When aggregating the data in damage categories, the results show that the impacts for human health are approximately equivalent for both trees, that the impact for ecosystem quality are much better for the artificial tree, that the impacts for climate change are much better for the natural tree, and that the impacts for resources are better for the natural tree …"

4) In the context of many other holiday and everyday activities, the environmental effects of the tree are small. The studies offer some comparisons of the environmental effects of the tree compared with the electricity used to light the tree, the driving by a household to pick up the tree, and even the environmental effect of the tree stand.
  
For example, consider a comparison of the Primary Energy Demand of the tree with the energy demand for lighting the tree. For an artificial tree, the PE Americas study reports: "The electricity consumption during use of 400 incandescent Christmas tree lights during one Christmas season is 55% of the overall Primary Energy Demand impact of the unlit artificial tree studied, assuming the worst‐case scenario that the artificial tree is used only one year. For artificial trees kept 5 and 10 years respectively, the PED for using incandescent lights is 2.8 times and 5.5 times that of the artificial tree life cycle." For a natural tree: "The life cycle Primary Energy Demand impact of the natural tree is 1.5 ‐ 3.5 times less (based on the End‐of‐Life scenario) than the use of 400 incandescent Christmas tree lights during one Christmas season."

In comparing the environmental effects of driving with those of the tree, ellipsos writes: "Due to the uncertainties of CO2 sequestration and distance between the point of purchase of the trees and the customer's house, the environmental impacts of the natural tree can become worse. For instance, customers who travel over 16 km from their house to the store (instead of 5 km) to buy a natural tree would be better off with an artificial tree. … [C]arpooling or biking to work only one to three weeks per year would offset the carbon emissions from both types of Christmas trees."

The PE Americas report strikes a similar theme: "Initially, global warming potential (GWP) for the landfilled natural tree is negative, in other words the life cycle of a landfilled natural tree that is a GWP sink. Therefore, the more natural trees purchased, the greater the environmental global warming benefit (the more negative GWP becomes). However, with increased transport to pick up the natural tree, the overall landfilled natural tree life cycled becomes less negative. When car transport becomes greater than 5 miles (one‐way), the overall life cycle of the natural tree is no longer negative, and there is a positive GWP contribution."
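The distance threshold in that passage is a simple linear relationship: a fixed negative base (the landfilled tree as a carbon sink) plus per-mile driving emissions. A sketch with assumed coefficients, chosen only so the sign flips at the 5-mile one-way mark the report describes:

```python
# Net Global Warming Potential of a landfilled natural tree as a function
# of one-way car distance to pick it up. Both constants are illustrative
# assumptions, not values from the PE Americas report.
EMISSIONS_PER_MILE = 0.4   # kg CO2e per mile driven (assumed)
TREE_BASE_GWP = -4.0       # kg CO2e: the landfilled tree alone is a sink (assumed)

def net_gwp(one_way_miles):
    """Base sink effect plus round-trip driving emissions."""
    round_trip_miles = 2 * one_way_miles
    return TREE_BASE_GWP + EMISSIONS_PER_MILE * round_trip_miles

for miles in (2.5, 5.0, 10.0):
    print(miles, net_gwp(miles))  # negative below 5 miles, positive above
```

With these placeholder numbers the net effect is negative at 2.5 miles, exactly zero at 5 miles, and positive at 10 miles, which reproduces the report's qualitative finding that the tree's sink benefit is swamped by a long enough drive.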

Even the tree stand for a natural tree has an environmental cost that can be considered in the same breath with the costs of a natural tree. PE Americas: "The tree stand is a significant contributor to the overall impact of the natural tree life cycle with impacts ranging from 3% to 41% depending on the impact category and End‐of‐Life disposal option."

I would add that the environmental effect of the ornaments on the trees may be as large as or greater than the effect of the tree itself. Data from the U.S. Census Bureau shows that America imported $1 billion in Christmas tree ornaments from China (the leading supplier) from January to September 2012, but only $140 million worth of artificial Christmas trees. Thus, spending on ornaments is roughly seven times as high as spending on trees. The choice of what kind of lights go on the tree, or whether to drape the house and front yard with lights, is a more momentous environmental decision than the tree itself.

Of course, these kinds of comparisons don't even try to compare the environmental cost of the tree with the cost of the presents under the tree, or the long-distance travel to attend a family gathering. Thus, the PE Americas study concludes: "Consumers who wish to celebrate the holidays with a Christmas tree should do so knowing that the overall environmental impacts of both natural and artificial trees are extremely small when compared to other daily activities such as driving a car. Neither natural nor artificial Christmas tree purchases constitute a significant environmental impact within most American lifestyles." Similarly, ellipsos writes: "Although the dilemma between the natural and artificial Christmas trees will continue to surface every year before Christmas, it is now clear from this LCA study that, regardless of the chosen type of tree, the impacts on the environment are negligible compared to other activities, such as car use."

Certainly, celebrations at holidays and big events can sometimes be exorbitant and over the top. But the use of a Christmas tree, and the choice between a natural tree or an artificial tree, is a small-scale luxury. If the environmental issue is bothering you, even knowing these facts, make a resolution to use your artificial tree for a few more years, rather than replacing it, or to save some energy in January by driving less or being more vigilant about turning off unneeded lights. Gathering around the tree should be one less reason for moralizing around the holidays, not one more. So celebrate with good cheer and generous moderation.

Global Supply Chains and Rethinking International Trade

My mental model of what is exchanged in international trade is getting an attitude adjustment. Traditional discussions of international trade are often based on examples of countries exporting products which are then consumed in other countries: cars, computers, wine, clothing, and so on. But in the modern economy, what is often exported across national borders is an intermediate good, which is then used in production of other intermediate goods and exported again, so that the ultimate product was produced as part of a global supply chain reaching across countries. The collection of essays in Global value chains in a changing world, edited by Deborah K. Elms and Patrick Low, offers a nice overview. The book was published by the World Trade Organization, together with the Fung Global Institute and Nanyang Technological University.

The book has 16 chapters, covering various aspects of global supply chains like how to measure the value-added within each country, how to manage these production processes, and how low- and medium-income countries can find a niche for themselves in these production chains. Here, I'll focus on the nice overview essay by Richard Baldwin. He begins (references and footnotes omitted for readability, as usual):

"Global supply chains have transformed the world. They revolutionized development options facing poor nations; now they can join supply chains rather than having to invest decades in building their own. The offshoring of labour-intensive manufacturing stages and the attendant international mobility of technology launched era-defining growth in emerging markets, a change that fosters and is fostered by domestic policy reform. This reversal of fortunes constitutes perhaps the most momentous global economic change in the last 100 years. Global supply chains, however, are themselves rapidly evolving. The change is in part due to their own impact (income and wage convergence) and in part due to rapid technological innovations in communication technology, computer integrated manufacturing and 3D printing."

Baldwin argues that these global supply chains represent a profoundly different form of international trade. In what he calls "the first great unbundling"–that is, the growth of world trade from the early 19th century through much of the 20th century–the gains from trade resulted from cheaper transportation costs combined with innovation, specialization, and economies of scale. The advantages of complexity and scale seemed to be coordinated best when production was clustered in relatively few places. The result was that economic activity became clustered: for example, in the global North rather than the global South, and in certain regions and metropolitan areas rather than others.

The "second great unbundling" of global supply chains is driven by different factors: declining costs of communication and information technology made it possible to coordinate economic activities happening in many different locations, and the large differences in wages that had built up over the decades across the countries of the world meant that splitting up work could reduce costs. "Some of the coordination costs are related to communication, so the 'coordination glue' began to melt from the mid-1980s with ICT's melding of telecommunications, computers and organizational software. … While technology transfer is an ancient story (gunpowder), ICT facilitated control that reduced the costs and risks of combining developed-economy technology with developing-nation labour." In this form of international trade, economic activity becomes less clustered, and expertise spreads out.

Baldwin describes the result this way: "There are “headquarter” economies (whose exports contain relatively little imported intermediates) and “factory” economies (whose exports contain a large share of imported intermediates). … The global supply chain is really not very global – it's regional … [in] what I call Factory Asia, Factory North America, and Factory Europe."

As one measure of these shifting patterns, Baldwin points out that the G-7 economies–the United States, Canada, France, Germany, Italy, Japan, the United Kingdom–represented 20% of global output in 1820, 40% of global output in 1870, and peaked at two-thirds of global output in 1988, but have now fallen back to 50% of global output.

Baldwin also argues: "Internationalizing supply chains also internationalized the complex two-way flows that used to take place only within factories." Thus, decisions about how to invest that used to happen inside a plant, or perhaps within a certain local region, are now decisions about foreign direct investment. Transportation of parts and supplies used to happen inside a company: now much of that transportation is outsourced to logistics companies that provide shipping services. Services like legal and finance that used to happen within a company, and often even within the same building, now often become international transactions. Decisions about how to move intellectual property into the production line used to involve people from neighboring buildings, but now they involve decisions about sharing intellectual property and appropriate training across international borders.

And the political economy of trade changed, too. In the older forms of international trade, there was always a temptation to protect domestic industries by shutting out imports. But in global supply chain trade, there is an incentive to make it easy for imports to arrive as part of the global value chain, and an expectation that other countries will behave in the same way.

This process of global value chains is really just getting underway, and it may turn out to be more of a kaleidoscope of shifting patterns than a unidirectional movement. There will continue to be advantages of proximity to suppliers of inputs and to sources of ultimate demand. On the other side, continuing improvements in information and communications technology will tend to encourage further separation of economic activity, along with the growth of products that are valuable and technology-intensive but are small and have low cost to ship–or in the case of  products like software or certain services, the \”products\” can be shipped electronically at almost zero cost. These forces will vary for different products. They will change with new developments in technology, like computer-guided manufacturing or equipment that can be operated by specialists who are geographically far away (remote surgery, anyone?). The forces will also change according to the ways that areas specialize in different kinds of production.

My sense is that when economic historians 50 or 100 years from now look back at this time period,  our daily policy concerns like the slow recovery from the Great Recession, health care finance, the euro, and others will have faded with time. Instead, the creation of global supply chains and the transformation of global economic patterns, along with what it means for countries and workers, will seem like the defining economic event of our time.

The CEO Investment Cycle

In the world of economic theory, and sometimes in the minds of employees, the chief executive officer of a company is an all-seeing, all-knowing planner with a ruthless eye on the bottom line. However, it appears that the average CEO has a typical career trajectory: spend a few years cleaning up the past investment mistakes of the firm, then start expanding the firm's investments again, and finish off by over-expanding–so that the next incoming CEO has a bunch of investment mistakes to clean up, and the cycle can start anew. Yihui Pan, Tracy Yue Wang, and Michael S. Weisbach provide the evidence in "CEO Investment Cycles," which was published as National Bureau of Economic Research Working Paper #19330. NBER working papers are not freely available online, although some readers will have access through library subscriptions, but a readable summary of the paper is available here.

Pan, Wang, and Weisbach collect data on 5,420 CEOs who took office in 2,991 publicly traded US firms between 1980 and 2009. They gather data on whether a departing CEO does so for reasons of age or health, or under some particular pressure. They also have data on whether the CEO is hired from inside or outside the firm. They don't look at the profits reported by firms, in part because profit data is at least as much an outcome of past decisions as of current ones, and in part because profit data is often so massaged by corporate tax attorneys that its actual economic meaning can be unclear. Instead, they look at the annual patterns of investment by firms.

"We estimate the magnitude of the CEO cycle in terms of the differences in disinvestment, investment, and firm growth, between the first three years of a CEO’s tenure and the later years, holding other factors constant. The magnitude of the changes in firm investment and growth over the CEO cycle is substantial. For example, the annual investment rate (investment-to-capital-stock ratio) tends to be 6 to 8 percentage points lower and the asset growth rate tends to be 3.2 percentage points lower in the first three years of a CEO’s tenure than in his later years in office. Given that the median investment rate in our sample is 24% and the median asset growth rate is 7.6%, the differences in investment and growth between the earlier and the later parts of the CEO cycle are clearly non-trivial. The effect of CEO cycle on investment is also of the same order of magnitude as the effects of other factors known to influence investment such as the business cycle, political uncertainty, and financial constraints."

Their preferred explanation is that when CEOs are first appointed, they are under pressure from the board of directors to be the ruthless profit-seekers of legend. But over time, as the CEO appoints more members of the board of directors, the CEO feels less of this pressure and has a tendency to over-invest.

"First, when a CEO takes office, he will have incentives to divest poorly performing assets that the previous CEO established and was unwilling to abandon. Second, for many reasons, CEOs usually prefer their firms to grow, potentially at the expense of shareholder value maximization. The board of directors is an important constraint on CEOs’ ability to deviate from the shareholders’ interest. However, as the CEO becomes more powerful in the firm over time, he will have more sway over his board and will be able to undertake investments that maximize his utility, potentially at the expense of value. Eventually, when the CEO steps down, the process is repeated by the next CEO. … We measure the CEO’s capture of the board by the fraction of the board that is appointed during his tenure, and find that the increasing CEO influence on the board over his tenure explains the positive relation between CEO tenure and investment. … In addition, we find that the quality of a firm’s investments, measured by the market reaction to acquisition announcements, decreases with CEO tenure and becomes negative during the later portion of his time in office. The deteriorating investment quality is also related to the CEO’s control of the board."

They also seek to evaluate some of the possible alternative explanations. They find, for example, that this pattern holds even when the turnover of the CEO is due to death, illness, or retirement–and thus it is not just that boards fire CEOs who aren't performing well. They also find this effect both for CEOs hired from inside and from outside the firm, and for firms whose industry is being hit by big positive or negative surprises.

To me, the analysis makes CEOs sound a bit like coaches of sports teams: they arrive to clean up the mistakes of the past regime, but over time many of them gradually drift into their own set of mistakes. It also suggests that firms should think seriously about the independence of their boards of directors and about rotating the CEO on a semi-regular basis. One suspects that the cozy relationships between CEOs and the directors they have appointed are not just manifested in the firm's investment choices, but may well show up in executive compensation and other firm decisions, too.

Worldwide Income Inequality: From Two Humps to One

To calculate a worldwide measure of income inequality, you need to work with data on the distribution of income for the population in every country–and for many countries, this data is mismatched and helter-skelter. You need to convert the income data for all countries into a common currency, like U.S. dollars. You then add up all the people in the world who fall into each income category. To do comparisons over time, you need to find data for different countries over time, and then also adjust for inflation. Christoph Lakner and Branko Milanovic of the World Bank take on this task in "Global Income Distribution From the Fall of the Berlin Wall to the Great Recession," published this month as Policy Research Working Paper 6719.
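The pooling step described above can be sketched in a few lines of code. To be clear, this is a minimal illustration of the idea, not the Lakner-Milanovic method: the two countries, their income groups, and the exchange rates below are invented placeholders.

```python
def to_us_dollars(incomes, rate):
    """Convert local-currency incomes to US dollars, at `rate` local units per dollar."""
    return [x / rate for x in incomes]

def global_income_counts(countries, bin_edges):
    """Pool national distributions: count how many people fall into each global income bin."""
    counts = [0] * (len(bin_edges) - 1)
    for c in countries:
        usd = to_us_dollars(c["group_income"], c["rate"])
        for people, income in zip(c["population"], usd):
            for i in range(len(bin_edges) - 1):
                if bin_edges[i] <= income < bin_edges[i + 1]:
                    counts[i] += people
                    break
    return counts

# Two invented countries, each with two income groups: population in millions,
# group mean income in local currency, exchange rate in local units per dollar.
countries = [
    {"population": [10, 10], "group_income": [100, 400], "rate": 1.0},
    {"population": [20, 5], "group_income": [50, 300], "rate": 0.5},
]
bin_edges = [0, 200, 500, 1000]  # annual income brackets in US dollars
```

With these toy numbers, `global_income_counts(countries, bin_edges)` returns `[30, 10, 5]`: thirty million people end up below $200, ten million between $200 and $500, and five million between $500 and $1,000. The real exercise adds the inflation adjustment and purchasing-power-parity conversion, but the bookkeeping is the same.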

Here's how the global distribution of income has shifted over time. It used to be said back in the 1960s that the global distribution of income was bi-modal–that is, it had one hump representing the large number of people who were very low-income and then a smaller hump representing those in the high-income countries. In the blue line for the global distribution of income in 1988, the remnants of that bimodal distribution are still visible. But over time, the highest point in the income distribution has shifted to the right, and by 2008, the world had moved fairly close to having a unimodal or one-hump distribution of income.

An obvious question here is to what extent these changes are about what has happened in China and in India, which after all combine to include about one-third of the world's population. Here's a figure showing the shifts over time, with the populations of India and China shaded separately. You can see that the shift in the shape of the global income distribution can largely be traced to India and China. In particular, you can see that the hump for China is pretty much centered over the hump for India in 1988 and 1993, but by 2008, the hump for China has shifted to the right more than the hump for India, reflecting China's faster rate of economic growth.

One final thought: In interpreting these charts, it's important to remember that the horizontal axis measures income on a logarithmic scale. That is, instead of each horizontal distance representing the same absolute gain in income, it represents the same proportional gain in income. Starting from the left, the horizontal distance from $50 to $100 is the same as the distance from $100 to $200, which is the same as the distance from $200 to $400, or if you look off to the right, the same as the distance from $10,000 to $20,000. In other words, relatively small movements to the right on this graph represent large changes in the absolute value of incomes, especially as you get to the center and far right of the graph.
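A quick computational check of that point: on a log axis, the plotted position of an income is its logarithm, so every doubling covers the same horizontal distance even though the absolute dollar gains differ enormously. (A small self-contained sketch, not tied to the paper's data.)

```python
import math

# Each pair is a doubling of income: (low, high) in dollars.
pairs = [(50, 100), (100, 200), (200, 400), (10_000, 20_000)]

# Horizontal distance on a log axis is the difference of logarithms.
distances = [math.log10(high) - math.log10(low) for low, high in pairs]

# Absolute dollar gains for the same pairs.
gains = [high - low for low, high in pairs]

# All four distances equal log10(2), about 0.301 axis units,
# while the dollar gains range from $50 up to $10,000.
```

That is why the rightward drift of China's hump, which looks modest on the chart, represents a very large gain in absolute incomes.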

Note: Hat tip to Howard Schneider at the Washington Post Wonkblog, where I saw the Lakner-Milanovic working paper mentioned.

Freshwater and Saltwater Economists: A Creation Story

Back in the late 1970s, when I was first shaking hands with economics, the standard dividing line in macroeconomics was phrased as "monetarists" vs. "Keynesians." But that distinction was already becoming obsolete. Robert Hall staked a claim back in 1976 to rephrasing the main dividing line in macroeconomics in a way that has largely stood the test of time: between "freshwater" and "saltwater" macroeconomists. As Hall wrote at the time:

"As a gross oversimplification, current thought can be divided into two schools. The fresh water view holds that fluctuations are largely attributable to supply shifts and that the government is essentially incapable of affecting the level of economic activity. The salt water view holds shifts in demand responsible for fluctuations and thinks government policies (at least monetary policy) is capable of affecting demand. Needless to say, individual contributors vary across a spectrum of salinity."

In a footnote, Hall offers a few examples which will give a smile to academic economists, if no one else:

"To take a few examples, [Thomas] Sargent corresponds to distilled water, [Robert] Lucas to Lake Michigan, [Martin] Feldstein to the Charles River above the dam, [Franco] Modigliani to the Charles below the dam, and [Arthur] Okun to the Salton Sea."

For those not up on their southern California geography, the Salton Sea is the largest lake in California. It was formed by a long-ago overflow of the Colorado River, but it has no natural outlet–except for evaporation. Thus, as salts washed through the soil and into the Salton Sea, its salinity kept rising, making it saltier than the ocean.

As Hall points out, the old-style differentiation between monetarists and Keynesians was based on views about the effects of monetary and fiscal policy. Keynesians back in the 1950s typically believed that the supply of money and credit was not an important factor in determining the business cycle. Monetarists like Milton Friedman argued that it was. By the mid-1970s, the monetarists had won that argument, and Keynesian thinking of that time often discussed both fiscal and monetary policies. As Hall wrote in 1976: "The old division between monetarists and Keynesians is no longer relevant, as an important element of fresh-water doctrine is the proposition that monetary policy has no real effect. What used to be the standard monetarist view is now middle-of-the-road, and is widely represented, for example, in Cambridge, Massachusetts."

At a more detailed level, Hall attributed much of the difference between the freshwater and saltwater macroeconomists to their views on expectations. In the freshwater view of that time, it was typically argued that economic actors had excellent foresight about the future effects of various policies–what is often called "rational expectations." In certain economic models with rational expectations, adjusting the money supply has no effect, because all economic actors can see what is happening and adjust all prices and wages accordingly. As Hall wrote in 1976: "By now, everyone more than a few yards from the ocean's edge bows in the direction of rational expectations."

But how much rationality was really likely? As Hall drily noted, some of the models seemed to presume that all economic actors had rationality "[e]qual to that of an MIT Ph.D. in economics with 9 years of professional experience." But even at that time, economists were experimenting with alternative perspectives on macroeconomic behavior. Some economists used information lags, in which people might take time to develop their rational expectations. Others thought about "adaptive expectations," in which people looked backward at what had happened, but didn't make forward-looking predictions in the way that true rational expectations would require. Still others looked at reasons why prices or wages might not adjust, with a particular focus on contracts or other kinds of "sticky prices," which would later grow into a "New Keynesian" saltwater view of the economy. As Hall wrote: "Macroeconomists of more brackish persuasions are skeptical of the explanatory value of information lags, and have developed a major alternative within the framework of rational economic behavior. The basic idea is that buyers and sellers of labor services rationally enter into contracts that fix the wage in money terms for some time into the future."

There was a time, not all that long ago back in the mid-2000s, when a number of economists thought that they had successfully put together a consensus macroeconomic model. It built on the freshwater ideas that macroeconomic models should be built on microeconomic behavior and the importance of the expectations of individual agents, while still allowing for the possibility of saltwater ideas like the stickiness of prices, the economic losses from recessions, and a role for government policy in ameliorating recessions. For one explanation of these efforts at building a consensus model, see the article by Jordi Galí and Mark Gertler in the Fall 2007 issue of the Journal of Economic Perspectives, "Macroeconomic Modeling for Monetary Policy Evaluation." Just to be clear, a shared model doesn't mean that macroeconomists would all agree. It means that economists with differing perspectives can use the same overall structure for analysis and argue about whether certain parameters have high or low values. This focuses the intellectual disputes in a useful way. But this consensus model, like pretty much all existing macroeconomic models, failed the test of providing a useful framework for understanding the Great Recession. The freshwater and saltwater camps separated again.

Here is one quibble with what Hall wrote back in 1976. He argued: "As I see it, the major distinguishing feature of macroeconomics is its concern with fluctuations in real output and unemployment. The two burning questions of macroeconomics are: Why does the economy undergo recessions and booms? What effect does conscious government policy have in offsetting these fluctuations?" At the time when Hall was writing, this statement seemed accurate to me. The very idea of macroeconomics as a distinctive field grew out of that extraordinary recession called the Great Depression, and in the following decades up to the mid-1970s, the concern over economic fluctuations largely defined the field of macroeconomics.

But although this change wasn't yet visible in 1976 when Hall was writing, the U.S. economy had just entered a lengthy period of productivity slowdown. We were in the process of witnessing a period of enormous economic catch-up from Japan, soon to be followed by Korea and other nations of East Asia, and now in turn being echoed by rapid growth in China, India, and other emerging economies. Looking ahead, the U.S. economy faces challenges about whether it can return to and sustain a strong rate of growth into the future. In the aftermath of the Great Recession, many of the old arguments about causes of business cycles and policies for ameliorating them have obvious relevance. But to me, macroeconomics should also be about the long-term patterns of economic growth.

Compression of Morbidity

Life expectancies are rising, but how healthy will people be in those additional years of life? The debate over "compression of morbidity" asks whether people who live longer will experience more years of illness, or the same number of years of illness, or even fewer years of illness. Of course, if people experience fewer years of illness before death, it would have enormous social effects, including lower spending on health care and long-term care services for the elderly, as well as a greater ability of the elderly to participate actively in their families, their communities, and the workforce. There's some evidence that compression of morbidity is in fact occurring.

David Cutler, Kaushik Ghosh, and Mary Beth Landrum report "Evidence for Significant Compression of Morbidity in the Elderly U.S. Population" in National Bureau of Economic Research Working Paper #19268. (NBER working papers aren't freely available, although many readers will be able to access them through library subscriptions. However, a short readable summary of the paper is available here.) They use data from the Medicare Current Beneficiary Survey going back to 1991 and look at various measures of morbidity: certain diseases, whether the person reports limits on activities of daily living, and 19 measures of functioning that can be affected by health. They summarize their results like this:

"Health status in the year or two just prior to death has been relatively constant over time; in contrast, health measured three or more years before death has improved measurably. … We show that disability-free life expectancy is increasing over time, while disabled life expectancy is falling. For a typical person aged 65, life expectancy increased by 0.7 years between 1992 and 2005. Disability-free life expectancy increased by 1.6 years; disabled life expectancy fell by 0.9 years. The reduction in disabled life expectancy and increase in disability-free life expectancy is true for both genders and for non-whites as well as whites. Hence, morbidity is being compressed into the period just before death."
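The three numbers in that quotation fit together as a simple accounting identity: total life expectancy is disability-free years plus disabled years, so the changes in the pieces must sum to the change in the total. A quick check:

```python
# Changes for a typical 65-year-old between 1992 and 2005 (in years),
# as reported by Cutler, Ghosh, and Landrum.
delta_disability_free = 1.6   # healthy years gained
delta_disabled = -0.9         # disabled years lost

# Total life expectancy is the sum of the two pieces, so its change
# is the sum of the changes: 1.6 + (-0.9) = 0.7 years, matching the paper.
delta_total = delta_disability_free + delta_disabled
```

Notice that disability-free life expectancy rose by more than total life expectancy: the extra years of life were healthy ones, and some formerly disabled years became healthy as well.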

Here's an illustration of one of their findings. The solid blue line shows the rate of Activities of Daily Living (ADLs) and Instrumental Activities of Daily Living (IADLs) for the entire group. ADLs include basic activities like feeding and dressing yourself, or going to the bathroom. IADLs include broader functioning activities like grocery shopping, housework, and phone calls. The top two lines show that Medicare beneficiaries who turn out to be within a year or two of death have reported these disabilities at pretty much the same rate over time. But Medicare beneficiaries who are further from death are reporting these disabilities at a lower rate. The same morbidity in the couple of years before death, but lower morbidity for those further from death, means that compression of morbidity is occurring.

There's also some evidence that the rate of dementia or Alzheimer's disease in older age groups is declining in a number of countries. Eric B. Larson, Kristine Yaffe, and Kenneth M. Langa summarize this evidence in "New Insights into the Dementia Epidemic," appearing in the December 12 issue of the New England Journal of Medicine. A falling rate of dementia suggests that the chance of getting Alzheimer's can be reduced by various factors. They write: "But for now, the evidence supports the theory that better education and greater economic well-being enhance life expectancy and reduce the risk of late-life dementias in people who survive to old age. The results also suggest that controlling vascular and other risk factors during midlife and early old age has unexpected benefits. That is, individual risk-factor control may provide substantial public health benefits if it leads to lower rates of late-life dementias."
Here's a table summarizing some of the studies mentioned by Larson, Yaffe, and Langa:
Of course, the ultimate ideal for compression of morbidity is to live in full and perfect health right up until the day of death. My mental metaphor for such a happy event is the old poem by Oliver Wendell Holmes, "The Wonderful One-Hoss Shay," about a carriage that was built so well that it worked perfectly and marvelously for 100 years. The carriage had been built with no weak parts, so that it didn't break down one piece at a time; instead, after 100 years it suddenly disintegrated into dust. Here are the opening and closing stanzas:

Have you heard of the wonderful one-hoss-shay,
That was built in such a logical way
It ran a hundred years to a day,
And then, of a sudden, it–ah, but stay
I'll tell you what happened without delay …

What do you think the parson found,
When he got up and stared around?
The poor old chaise in a heap or mound,
As if it had been to the mill and ground!
You see, of course, if you're not a dunce,
How it went to pieces all at once,–
All at once, and nothing first,–
Just as bubbles do when they burst.
End of the wonderful one-hoss-shay.
Logic is logic. That\’s all I say.

Last summer I heard a talk on compression of morbidity (not by any of the authors above) and the speaker memorably said: "We already have the magic pill that produces compression of morbidity. It's called exercise."