How Globalization Nearly Exterminated the Buffalo

M. Scott Taylor (no relation!) offers a new and persuasive explanation of why the American buffalo population declined from 10-15 million in the early 1870s to about 100 by the late 1880s in “Buffalo Hunt: International Trade and the Virtual Extinction of the North American Bison.” The article is in the December 2011 issue of the American Economic Review (which isn’t freely available on-line, but many in academia will have on-line access to it through their library or a membership in the American Economic Association).

Taylor summarizes his argument: “This paper examines the slaughter using theory, empirics, and first-person accounts from diaries and other historical documents. It argues that the story of the buffalo slaughter is surprisingly not, solely, an American one. Instead, I argue that the slaughter was initiated by a tanning innovation created in Europe and maintained by a robust European demand for buffalo hides.”

Of course, it’s not a shock that the buffalo population declined as settlers spread across the western states in the mid-19th century. But the decline seemed to be happening gradually.

“By 1830, buffalo were largely gone east of the Mississippi. During much of this early period natives hunted the buffalo not only for their own subsistence needs but also to trade buffalo robes at forts and towns. A buffalo robe is the thick and dark coat of a buffalo that is killed mid-winter. Robes could be used as throws for carriages, or cut to make buffalo coats and other fur items. They were a common item in the 19th century, and they made their way to eastern markets via transport along the Missouri river to St. Louis or overland via the Santa Fe trail. In the 1840s settlers pushed through the Great Plains into Oregon and California. The movement of the 49ers to California and the Nevada gold rush years brought a steady stream of traffic through the Platte River valley. Subsistence hunting along the trail plus the movement of cattle and supplies divided the existing buffalo herd into what became known as the Northern and Southern herds.

“The division of herds became permanent with the building of the Union Pacific Railroad through the Platte River valley in the 1860s. While subsistence hunting for the railroad crews surely had some effect on buffalo numbers, as did the railroad’s popular day trips to kill buffalo, the harried buffalo herds withdrew from the tracks, creating a corridor centered on the Union Pacific line. The railroads also provided transportation for buffalo products to eastern and foreign markets, but in the 1860s railway cars were not refrigerated, and, hence, buffalo meat was marketed only as salted, cured, or smoked.”

“Despite the railroads, the market for buffalo robes, the increase in subsistence hunting, and the conversion of the high prairie to agriculture, most observers expected the population to decline gradually as it had east of the Mississippi. The force of habitat destruction was minimal on the Great Plains. In 1860, they held only 164,000 people. Farms occupied less than 1 percent of the land area.”

This pattern of gradual decline for the buffalo changed abruptly not long after 1870, when tanners in England figured out a way to tan buffalo hides for leather:

“The hardest evidence comes from a London Times article reporting from New York City in August of 1872. It reports that a few enterprising New Yorkers thought that buffalo hides might be tanned for leather, and when the hides arrived they were “sent to several of the more prominent tanners who experimented upon them in various ways, but they met with no success. Either from want of knowledge or a lack of proper materials, they were unable to render the hides soft or pliable, and therefore they were of no use to them.”

“The report continues to note “several bales of these hides were sent to England, where they were readily taken up and orders were immediately sent to this country for 10,000 additional hides. These orders were fulfilled, and since then the trade has continued.” Further still, the methods are spelled out: “The hides are collected in the West by the agents of Eastern houses; they are simply dried, and then forwarded to either New York or Baltimore for export… The low price that these goods have reached on the English market, and the prospect of a still further decline, may in time put an end to this trade, but at present the hides are hunted for vigorously, and, if it continues, it will take but a few years to wipe the herds out of existence” (my emphasis). …

“The market for buffalo hides boomed; buffalo hunters already in the field—like George “Hodoo” Brown—started to skin buffalo for their flint (hairless) hides, and hundreds if not thousands of others soon joined in the hunt. Previous to the innovation, hides taken from the Southern herd or hides taken in all but three winter months were virtually worthless as fur items. The only saleable commodity from a buffalo killed in these regions or times was its meat, but this market was severely limited by transportation costs. With the advent of a flint-hide market, killing a buffalo anywhere and anytime became a profitable venture. By 1872, a full-scale hide boom was in progress.”

Taylor acknowledges and discusses the standard explanations for the decline of the buffalo: hunting by the United States Army, the presence of the railroads, and changes in native American hunting practices. While each of these may have contributed a bit to the decline, his estimates suggest that demand from the global market played a central role: “[T]he newly constructed export data support the export-driven slaughter hypothesis, while the evidence for the alternative hypotheses that hold the railroads, the Army, or native Americans responsible is far weaker. The magnitudes of the implied export flows are considerable. My findings suggest approximately six million buffalo hides are exported over the 1871–1883 period, and this represents a buffalo kill of almost nine million.”

Taylor boils down three crucial economic factors behind the buffalo slaughter: “[A] combination of a tanning innovation, open access to buffalo herds, and fixed world prices delivers a punctuated slaughter matching that witnessed on the Great Plains…. The slaughter is not a unique example of resource overuse created by burgeoning demand and poor regulation. It may, however, be unique in its scale, its speed, and the critical role played by international markets. … Although the bison slaughter was a major event in US history, it was a minor event on the world stage. And being small on world markets meant that some of the typical insulating and signaling properties provided by a market price system were missing.”

In short, when smaller countries and economies around the world express concern that combinations of new technology and global demand might devastate their natural habitat or resources, Americans should be willing to listen. In our own history, it’s what nearly exterminated the buffalo.

As a coda, the slaughter of the buffalo was one of the events leading to the creation of an American environmentalist movement: “The slaughter of the North American buffalo surely represents one of the saddest chapters in American environmental history. To many Americans at the time, the slaughter seemed wasteful and wrong, as many newspaper editorials and letters to congressmen attest, but still, little was done to stop it. The destruction of the buffalo and the wanton slaughter of other big game across the West did, however, pay some dividend. The slaughter of the buffalo in particular was pivotal in the rise of the conservation movement in the late nineteenth and early twentieth century. Almost all of the important players in the conservation movement experienced the slaughter firsthand—Teddy Roosevelt, John Muir, and William Hornaday. The creation of the national park system in general, and the Yellowstone herd in particular, reflect the revulsion many felt to the Slaughter on the Plains.”

Economic Underpinnings of the U.S. Revolutionary War

The U.S. war for independence from Britain is often described in terms of a desire for democratic self-government and protection of the rights of citizens. But among historians, there has been a long tradition arguing that economic factors were more important. Perhaps most famously, Charles Beard argued back in 1913 in his book, An Economic Interpretation of the Constitution of the United States, that the Founding Fathers were just out to protect their own personal property. But as the decades passed, a consensus developed that Beard’s argument was so simplistic, contentious, and stark that it was just wrong.

Staughton Lynd and David Waldstreicher offer a refreshing take on the role of economic forces in the U.S. Revolution in “Free Trade, Sovereignty, and Slavery: Toward an Economic Interpretation of American Independence.” It appears in the October 2011 issue of the William and Mary Quarterly, which is not freely available on-line, but will be available to many with academic ties if their institution has a certain kind of JSTOR subscription. Here’s their opening (footnotes omitted):

“What kind of revolution was the American Revolution? Four basic answers, all first suggested before 1800, continue to shape the scholarship. They can be denoted Answers A, B, C, and D.

“Answer A was advanced by the Revolution’s leaders and echoed by their friends in Great Britain, such as Edmund Burke: The American Revolution was a struggle for constitutional rights.

“Answer B was that of the Revolution’s opponents, again both in the American colonies and in Great Britain: The American Revolution was a struggle for economic independence from the British Navigation Acts and other economic restrictions.

“Answers C and D were put forward in a second round of controversy during the 1790s, as Americans tried to determine their proper relationship to the French Revolution. Answer C was that of the Jeffersonians: The American Revolution was a democratic movement essentially similar to the French Revolution. Their Federalist opponents responded with Answer D: The American Revolution was a colonialist independence movement essentially different from the French Revolution.

“We offer for further exploration what might be described as a B-D interpretation. That is, the American Revolution was basically a colonial independence movement and the reasons for it were fundamentally economic.”

I can’t hope to summarize their argument point-by-point, but in large part, it comes down to pointing out how economic conflicts often came prior in time to other events of the Revolution, and loomed large in importance. Britain often sought to tax or even to embargo various kinds of trade from the American colonies to the West Indies: for example, the import tax with the Molasses Act of 1733, or the way in which the British Navy tried to cut off trade between the colonies and the French West Indies during the Seven Years’ War. The “single most contentious issue” in the First Continental Congress in 1774 was the extent to which the British Parliament could regulate the U.S. economy, including these and other limits on navigation as well as acts that sought to prohibit manufacturing in the colonies (so that the colonies would need to import from Britain instead). After reviewing the history in some detail, they write:

“The commercial dispute preceded the constitutional, not just once but again and again in these years. It is important that colonists melded economic and constitutional arguments under the category of sovereignty–but not so important that we should ignore the originating nature of economic forces.”

To anyone who has passed through the U.S. public school system, or who has listened to the rhetoric of politicians, thinking of U.S. independence as driven by economic motives has nearly sacrilegious overtones. Lynd and Waldstreicher respond like this:

“If the American Revolution had fundamentally economic causes, it is not thereby demeaned. Post-World War II colonial independence movements should have taught us something about the many-sided meanings of economic sovereignty for developing nations. If not only merchants but also artisans, tenant farmers, cash-strapped yeoman, fishermen, and debt-ridden slave-owning planters can be shown to have had compelling economic reasons to favor independence, it should not seem too narrow or conspiratorial to suggest that they acted on these reasons and sought to combine them with a language that spoke to principles as well as to the bottom line.”

To me, it seems quite plausible and believable that the urge of the colonists to move beyond protest and into actual war and rebellion needed the push of economic factors. In addition, if we have learned nothing else from the last two centuries, it should be that when anyone starts talking about how a revolution is needed for freedom and justice and for people to get their “rights,” warning sirens should go off in your brain. Such promises from revolutionaries have certainly been betrayed far more often than they have been honored.

But it also seems to me that the cause of an event is often quite different from the lasting legacy of that same event. The causes of the U.S. Revolution probably were largely economic, albeit expressed in a language of constitutionalism. But the lasting legacy of American Independence was the creation of a constitutional structure that is flexible enough to adapt and strong enough to endure.

Lessons for Europe’s Debt Crisis from Early U.S. History

For much of the last decade, all European governments that borrowed using the euro were viewed as equal credit risks: that is, they paid essentially the same interest rate when borrowing. For an American, the obvious parallel involves borrowing by state and local governments, who all borrow in the same currency of U.S. dollars but have different credit ratings and borrow at different interest rates. Not coincidentally, the U.S. federal government has a long tradition of not bailing out state or local governments in financial trouble, while there is clearly a widespread expectation that the European Union will somehow act to bail out Greece and others.

At first glance, pointing out that the U.S. federal government doesn’t bail out state or local governments might seem to make the case that Europe should also avoid such bailouts. But C. Randall Henning and Martin Kessler point out that the historical patterns and potential lessons are more nuanced in “Fiscal Federalism: US History for Architects of Europe’s Fiscal Union.” It’s available here as Working Paper 12-1 from the Peterson Institute for International Economics and also here as part of the Bruegel Essay and Lecture Series. They point out that in some ways, the centrality of the federal level of the U.S. system was created by assuming the debts of the states after the Revolutionary War. But around 1840, the federal government ended this practice. Here is Henning and Kessler (footnotes and citations omitted):

“The first secretary of the Treasury, Alexander Hamilton, is by all accounts credited with creating a “modern” financial system for the new United States. The magnitude of his achievements emerges from considering the prior condition of the US economy. Before 1790, the United States was effectively bankrupt, in default on most of its debt incurred during the Revolutionary War, and had no banking system, regularly functioning securities markets, or national currency. Reliant on the 13 states to collect and share tax revenue, the federal government was unable to pay war veterans or service, let alone redeem, debts. Under the Articles of Confederation, the federal government had no executive branch, judicial branch, or tax authority….”

“The debt assumption plan involved the transfer of state debt to the federal government in the amount of $25 million. Added to existing federal debt incurred to foreign governments (France) and domestic investors in the amount of $11.7 million and $42.1 million, respectively, federal debt would then amount to $79.1 million —a very large sum compared with nominal GDP in 1790 estimated at $187 million.”
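As a quick sanity check on the quoted figures (my arithmetic, not the paper's): the three components sum to about $78.8 million, so the stated $79.1 million presumably reflects rounding in the underlying series, and either total comes to roughly 42% of the estimated 1790 GDP.

```python
# Check the debt figures quoted above, in millions of 1790 dollars.
assumed_state_debt = 25.0
foreign_debt = 11.7    # owed largely to France
domestic_debt = 42.1   # owed to domestic investors

# Components sum to ~78.8; the paper states 79.1 (rounding in the series).
total = assumed_state_debt + foreign_debt + domestic_debt

gdp_1790 = 187.0       # estimated nominal GDP in 1790
print(round(79.1 / gdp_1790, 2))  # debt-to-GDP ratio, roughly 0.42
```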

Hamilton’s plan was controversial at the time–so controversial that around 1790 there was a real chance that the new country might break up. Was the plan constitutional? How to deal with the fact that some states had borrowed far more than others, but after the federal government assumed the debt, all states would now need to repay it? Hamilton was also restructuring the debt at about the same time. However, as Hamilton and others perceived, making the federal government central in this way could help bind the states together into a union. In the end, the federal government did assume the debts of the states, did restructure them, and did pay them off. But would this pattern continue? Henning and Kessler:

“[T]he debt assumption of 1790 set a precedent that endured for several decades. The federal government assumed the debt of states again after the War of 1812 and then for the District of Columbia in 1836. During this period, the possibility of a federal bailout of states was a reasonable expectation; moral hazard was substantially present. This pattern was broken in the 1840s, when eight states plus Florida, then a territory, defaulted. … The indebted states petitioned Congress to assume their debts, citing the multiple precedents. British and Dutch creditors, who held 70 percent of the debt on which states later defaulted, pressed the federal government to cover the obligations of the states. They argued that the federal government’s guarantee, while not explicit, had been implied. Prices of the bonds of even financially sound states fell and the federal government was cut off from European financiers in 1842. … John Quincy Adams evidently believed that another war with Britain was likely if state debts were not assumed by the federal government.”

What were the underlying reasons that caused the U.S. Congress to break the assumption that it would take over the debts of the states as needed?

“However, on this occasion Congress rejected the assumption petition and was able to do so for several reasons. First, debt had been issued primarily to finance locally beneficial projects, rather than national public goods. Second, domestically held bonds were not a large part of the US banking portfolio, and default had limited contagion effects at least through this particular channel. Third, the financially sound states were more numerous than the deeply indebted ones. And, finally, the US economy had matured to the point where it was less dependent on foreign capital. Foreign loans were critical to Hamilton’s plan in 1790, but they were a minority contribution when investments eventually resumed in the 1850s.”

“Eventually, most states repaid all or most of their debt as a condition for returning to the markets. … The rejection of debt assumption established a “no bailout” norm on the part of the federal government. The norm is neither a “clause” in the US Constitution nor a provision of federal law. Nevertheless, whereas no bailout request had been denied by the federal government prior to 1840, no such request has been granted since, with one special exception discussed below [the District of Columbia in the 1970s].

“The fiscal sovereignty of states, the other side of the no-bailout coin, was thereby established. During the 1840s and 1850s, states adopted balanced budget amendments to their constitutions or other provisions in state law requiring balanced budgets. This was true even of financially sound states that had not defaulted and their adoption continued over the course of subsequent decades, so that eventually three-fourths of the states had adopted such restrictions.”

Henning and Kessler suggest three lessons from U.S. history that Europeans should consider as they look at whether or how to assume some of the debts of countries like Greece.

“First, debt brakes are likely to be more durable and effective when “owned” locally rather than mandated centrally.”

The U.S. states didn’t have a no-deficits rule imposed on them. They volunteered for such rules as part of wanting to borrow for infrastructure projects. U.S. states could drop their no-deficits rules at any time if they wanted. This is a fundamentally different situation from having the European Union or the European Central Bank try to impose debt limits on recalcitrant countries.

“Second, maintaining a capacity for countercyclical macroeconomic stabilization is essential. Balanced budget rules have been viable in the US states because the federal government has a broad set of fiscal powers, including countercyclical fiscal action.”

When a recession hits, U.S. states and their citizens often get some help from the federal government. With a common central bank and a common currency, many countries in the EU have already given up the power to react to a recession within their borders by cutting interest rates or by depreciating their currency. If they also have debt limits imposed on them, they may be unable to react to a recession with fiscal policy, either. In the modern economy, arrangements that have the effect of preventing governments from reacting at all when their countries are in a recession are not likely to work well.

“Finally, because debt brakes threaten to collide with bank rescues, the euro area should unify bank regulation and create a common fiscal pool for restructuring the banking system.”

The interaction between bank failures and government debt needs to be addressed. In some cases, like Ireland, bank failures were the main cause of government debt–when the government offered guarantees that the banks would not go under. In other cases, like Greece, excessive government debt risks bringing a wave of bank failures, because Greek debt is so widely held by many large European banks. A unified and funded system of bank regulation across Europe would reduce both of these risks.

I don’t have a trail map for how Europe should tiptoe through its current debt and financial crises. The middle of an economic crisis can be a poor time to try to implement the long-term arrangements that, had they been in place, would have reduced the risk of the crisis in the first place. But the U.S. model of not bailing out states does depend, in part, on the fact that states adopted their no-borrowing rules themselves, on a powerful federal fiscal authority, and on a unified and funded system of banking regulation. Without these conditions in place, Europe may have set itself up for a situation where intermittent bank bailouts and government debt bailouts are better than the even less-palatable alternatives.

Independence and Depression: Economics of the American Revolution

For at least half a century, economic historians looking at colonial America have started with 1840–when the U.S. census collected useful data about economic issues like occupations and industry–and then worked backward. A common approach was to divide the 1840 economy into sectors, and then work backwards trying to make reasonable estimates about the number of workers in each sector and their productivity.

Peter Lindert and Jeffrey Williamson have been taking an alternative approach. They have been collecting available archival data, like local censuses, tax lists, and occupational directories. They look for data on occupation or in some cases on social class, and then combine it with data on wages. They then extrapolate from documented localities within a region to similar undocumented localities within a region, and so on up to the national level. More broadly, instead of trying to estimate GDP from the production side of the economy, they try to estimate it from the income-earning side of the economy.
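The logic of this income-side approach can be sketched in a few lines. The sketch below is purely illustrative (every occupation, count, and wage figure is hypothetical, not from their data): labor income in a documented locality is the sum over occupations of workers times annual earnings, and undocumented localities in the region are assumed to share the documented locality's income per worker.

```python
# Illustrative sketch of an income-side estimate: occupational counts
# combined with earnings, then extrapolated to undocumented localities.
# All numbers are hypothetical, for exposition only.

documented = {
    "farmers":  {"workers": 1200, "annual_earnings": 45.0},
    "artisans": {"workers": 300,  "annual_earnings": 60.0},
    "laborers": {"workers": 500,  "annual_earnings": 30.0},
}

def locality_income(occupations):
    """Total labor income = sum over occupations of workers x earnings."""
    return sum(o["workers"] * o["annual_earnings"] for o in occupations.values())

doc_income = locality_income(documented)
doc_workers = sum(o["workers"] for o in documented.values())
income_per_worker = doc_income / doc_workers

# Extrapolate: assume similar undocumented localities in the region share
# the documented locality's income per worker, scaled by workforce size.
undocumented_workforces = [1800, 950, 2400]   # hypothetical worker counts
regional_income = doc_income + sum(w * income_per_worker
                                   for w in undocumented_workforces)
print(round(regional_income))
```

The same aggregation step then repeats from regions up to the national level; the hard historical work, of course, is in assembling the occupational counts and wage series in the first place.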

A nice readable overview of their work is available in an essay published in July on VOX called “America’s Revolution: Economic disaster, development, and equality.” Those who want to know more about how the sausage was made can look at their NBER working paper (#17211) from last July: “American Incomes Before and After the Revolution.” And those who want to see the actual uncooked meat inside the sausage can look at their open-source data website here. The effort is clearly a work in progress: at one point they refer to it as “controlled conjectures” and at another point as “provocative initial results.” Here are three of their findings:

During the Revolutionary War and in its aftermath, the U.S. economy contracted by Depression-level amounts. From 1774 up to about 1790, on their analysis, the U.S. economy may have declined by “28% or even higher in per capita terms.” They offer several plausible reasons for this decline: the destruction caused by the War itself; the sharp decline in exports caused by the Revolutionary War, including the loss of more than half of all pre-war trade with England by 1791; and the departure of skilled and well-connected loyalists. Urbanization is typically a sign of economic development, but during this time period, the U.S. economy was de-urbanizing. They write: “To identify the extent of the urban damage, one could start by noting that the combined share of Boston, New York City, Philadelphia, and Charleston in a growing national population shrank from 5.1% in 1774 to 2.7% in 1790, recovering only partially to 3.4% in 1800. There is even stronger evidence confirming an urban crisis. The share of white-collar employment was 12.7% in 1774, but it fell to 8% in 1800; the ratio of earnings per free worker in urban jobs relative to that of total free workers dropped from 3.4 to 1.5 …”

These economic losses seem to me an often-neglected part of the usual historical narrative of America’s War for Independence. Those fighting for independence were sticking to their cause, even as the typical standard of living plummeted.

The American South was the region that suffered by far the most from the Revolutionary War. 
On their estimates, the New England region suffered only a modest loss in per capita GDP of 0.08% per year from 1774 to 1800, and then grew at a robust annual rate of 2.1% from 1800 to 1840. The Middle Atlantic region suffered a larger annual decline in per capita GDP of 0.45% from 1774 to 1800, but bounced back with an annual growth rate in per capita GDP of 1.45% from 1800 to 1840. However, the Southern region experienced a near-catastrophic drop of 1.57% per year in per capita GDP over the quarter-century from 1774-1800, and rebounded to a growth rate of just 0.43% from 1800 to 1840. On their numbers, the South had by far the highest per capita GDP of the three regions in 1774, and by far the lowest of the three by 1840. Indeed, on their estimates, real per capita GDP in the South in 1840 was about 20% below its level in 1774!
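The "about 20% below" figure for the South follows directly from compounding the annual rates just quoted, as this quick check (my arithmetic, using their reported rates) shows:

```python
# Compound the South's reported annual per capita GDP changes:
# -1.57% per year over 1774-1800, then +0.43% per year over 1800-1840.
decline = (1 - 0.0157) ** 26     # 26 years, 1774-1800
rebound = (1 + 0.0043) ** 40     # 40 years, 1800-1840
level_1840 = decline * rebound   # 1840 level relative to 1774 = 1.0
print(round(1 - level_1840, 2))  # fraction below the 1774 level, ~0.21
```

Compounding the rates gives a 1840 level roughly a fifth below 1774, consistent with the authors' statement.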

This absolute and relative decline of the South has been used as an example of how institutions can shape long-run economic development. The basic argument is that when the New World was settled, certain areas seemed well suited to mining and plantation agriculture. Those areas ended up with what Daron Acemoglu, Simon Johnson, and James A. Robinson in a 2002 article referred to as “extractive institutions, which concentrate power in the hands of a small elite and create a high risk of expropriation for the majority of the population, are likely to discourage investment and economic development. Extractive institutions, despite their adverse effects on aggregate performance, may emerge as equilibrium institutions because they increase the rents captured by the groups that hold political power.” The alternative is areas where extractive economics won’t work, and these areas instead receive a “cluster of institutions ensuring secure property rights for a broad cross section of society, which we refer to as institutions of private property, are essential for investment incentives and successful economic performance.” In their 2002 article in the Quarterly Journal of Economics, the authors apply this dynamic broadly across the settlement of the New World, and they title the article: “Reversal of Fortune: Geography and Institutions in the Making of the Modern World Income Distribution.” For a nice readable article laying out a similar theory, see “History Lessons: Institutions, Factor Endowments, and Paths of Development in the New World,” by Kenneth L. Sokoloff and Stanley L. Engerman, in the Summer 2000 issue of my own Journal of Economic Perspectives. (The JEP is publicly available, including the most recent issue and archives going back more than a decade, courtesy of the American Economic Association.)

I can’t claim any expertise on the interaction of economic conditions and public mood in the years leading up to the U.S. Civil War. But it does seem to me that seeing the U.S. South as a region where per capita GDP had for decades been struggling to recover from an enormous decline, while in relative terms falling ever farther behind other regions of the country, helps to deepen my understanding of the South’s sense of separateness, which fed into a willingness to secede.

Without the economic damage from the Revolutionary War, the U.S. economy might have started its period of more rapid economic growth several decades sooner–and perhaps been the first nation in the world to do so. Economic historians do love considering counterfactual possibilities, and this one strikes me as a good provocative one. Lindert and Williamson write: “It seems clear that America joined Kuznets’s modern economic growth club sometime after 1790, with the North leading the way, while the South underwent a stunning reversal of fortune. And without the 1774-1790 economic disaster, it appears that America might well have recorded a modern economic growth performance even earlier, perhaps the first on the planet to do so.”

Is the Great Depression the Right Analogy for the Great Recession?

Barry Eichengreen takes up this question in “Economic History and Economic Policy,” which is available at his website. He begins (footnotes omitted):

“This has been a good crisis for economic history. It will not surprise most members of this audience to learn that there was a sharp spike in references in the press to the term “Great Depression” following the failure of Lehman Bros. in September of 2008. More interesting is that there was also a surge in references to “economic history,” first in February of 2008, with growing awareness that this could be the worst recession since you know when, and again in October, coincident with fears that the financial system was on the verge of collapse. Journalists, market participants, and policy makers all turned to history for guidance on how to react to this skein of otherwise unfathomable events.”

Eichengreen discusses with care and detail whether analogies are chosen because they are the best example, or because they are a salient example almost within living memory, or because they deliver an already-selected policy conclusion. Drawing on a wide variety of political and economic examples, Eichengreen points out that since historical episodes never precisely match present events, often the most productive way to use history in making economic policy is not to use a single analogy, but instead to consider a number of somewhat relevant episodes, and to compare and contrast the events, policies, and outcomes. He makes the provocative point that the choice of analogy has a tendency to guide policy responses. In the case of the analogy from the Great Depression to the Great Recession:

“The analogy legitimated certain responses to the collapse of economic and financial activity while delegitimating others. It legitimized the notion that the Fed should respond aggressively to prevent the collapse of a few investment funds from precipitating a cascade of financial failures. This reflected the widespread currency of Friedman and Schwartz’s interpretation of the Great Depression – that what had made the Depression great was the inadequate response of the Federal Reserve. … The analogy with the Great Depression informed the policy response to the crisis more generally. The Federal Deposit Insurance Corporation increased deposit insurance coverage to $250,000 per depositor exactly one day after press references to the Great Depression peaked. The action was presumably informed by the view of the banking panics of the Great Depression as runs by uninsured depositors, and the historical interpretation, widely shared, that those panics had played a key role in the contraction of the money supply and the impairment of the payments system. The analogy with the Great Depression similarly lent legitimacy to the argument that the Congress and Administration should respond with fiscal stimulus. This reflected the “lesson” of history that the depth and duration of the Depression were attributable in no small part to the fact that fiscal stimulus was not used to counter the collapse of private demand. …

The analogy with the Great Depression also delegitimized the temptation to respond with protectionist measures designed to bottle up the remaining demand. This reflected the lesson, widely taught to undergraduates and invoked by policy makers, that the Smoot-Hawley Tariff aggravated the crisis of the 1930s. In fact, this "lesson" of history is not supported by modern research, which concludes that Smoot-Hawley played at most a minor role in the propagation of the Depression."

Eichengreen points out that these policy lessons are not the only possible lessons from the Great Depression, and that choosing other historical episodes might have emphasized other lessons. 

"Did we need a new New Deal? Well, that depended on whether you sided with historians who argue that the New Deal helped to end the Depression or only prolonged it. Did we need a jolt to the exchange rate to vanquish deflationary expectations? The answer depended on whether your view was that Roosevelt's decision to take the U.S. off the gold standard in 1933 was the critical decision that transformed expectations and ended deflation or whether you thought it was a sideshow. For those attempting to move from metaphor to analogy, this was a reminder that the distilled, authoritative encapsulation of the period remains a work in progress.
Although the Great Depression was clearly the dominant base case in discussions of the 2008-9 crisis, there were other possible analogies. There was the 1873 crisis, driven by an investment boom and bust like that of the period leading up to 2007, which led to the failure of brokerage houses, in parallel with the problems in 2008 of the investment banks. There was the 1907 crisis, in response to which J.P. Morgan organized a lifeboat operation that resembled in important respects the 2008 rescue of Bear Stearns by none other than JP Morgan & Co."

Eichengreen also makes the point that the connection from past to present also works in reverse: for example, current economic events will alter our historical understanding of the policy reactions to the Great Depression.

"The mainstream narrative is that the experience of the Depression led to a series of institutional and policy innovations making it less likely that something similar would happen again. American economic historians refer in this connection to federal deposit insurance, unemployment insurance, Social Security, the Securities and Exchange Commission, the concentration of monetary-policy-making authority at the Federal Reserve Board, and automatic fiscal stabilizers. Historians of other countries have similar lists. Although the stabilizing impact of particular entries on these lists has been disputed, the thrust of the dominant narrative is clear.

We now have had a graphic reminder that we have less than fully succeeded in corralling threats to economic and financial stability. While policy responses may avoid the repetition of past threats, they are no guarantee against future threats. Markets tend to adapt to stabilizing policy innovations in ways that render those innovations less stabilizing. As memories of the earlier crisis fade, policy makers themselves become more likely to consort with market participants in this effort. I suspect that we will now see more attention to these longer-term adaptations to the legacy of the Great Depression and less to the short-term policy response."

2010 Years of Economic Output and Population in One Chart

The Economist has an elegant picture, describing "Two thousand years in one chart."

Over the last 2010 years, 55% of total economic output was produced in the 20th century, and an additional 23% of the total in just the first 10 years of the 21st century.

About 28% of the total years of human life lived over the last 2010 years were lived during the 20th century, and about 6% were lived in the first 10 years of the 21st century.