Natural Disasters: Insurance Costs vs. Deaths

The natural disasters that cause the highest insurance losses are only rarely the same as the natural disasters that cause the greatest loss of life. Why should that be? Shouldn’t a bigger disaster affect both property and lives? The economics of natural disasters (and yes, there is such a subject) offers an answer. But first, here are two lists from the Sigma report recently published by Swiss Re (No. 2, 2015).

The first list shows the 40 disasters that caused the highest insurance losses from 1970 to 2014 (where the size of losses has been adjusted for inflation and converted into 2014 US dollars). The top four items on the list are: Hurricane Katrina, which hit the New Orleans area in 2005 (by far the largest in terms of insurance losses); the 2011 Japanese earthquake and tsunami; Hurricane Sandy, which hit the New York City area in 2012; and Hurricane Andrew, which blasted Florida in 1992. The fifth item is the only disaster on the list that wasn’t natural: the terrorist attacks of September 11, 2001.

Now consider a list of the top 40 disasters over the same period from 1970 to 2014, but this time ranked by the number of dead and missing victims. The top five on this list are the Bangladesh storm and flood of 1970 (300,000 dead and missing); China’s 1976 earthquake (255,000 dead and missing); Haiti’s 2010 earthquake (222,570 dead and missing); the 2004 earthquake and tsunami that hit Indonesia and Thailand (220,000 dead and missing); and tropical cyclone Nargis, which struck Myanmar in 2008 (138,300 dead and missing). Only two disasters make the top 40 on both lists: the 2011 Japanese earthquake and tsunami, and Japan’s Great Hanshin earthquake of 1995.

The reason why there is so little overlap between the two lists is of course clear enough: the effects of a given natural disaster on people and property depend to a substantial extent on what happens before and after the event. Are most of the people living in structures that comply with an appropriate building code? Have civil engineers thought about issues like flood protection? Is there an early warning system so that people have as much advance warning of the disaster as possible? How resilient is the infrastructure for electricity, communications, and transportation in the face of the disaster? Was there a plan, made before the disaster, for how support services would be mobilized?

In countries with high levels of per capita income, many of these investments are already in place, and so natural disasters have the highest costs in terms of property, but relatively lower costs in terms of life. In countries with low levels of per capita income, these investments in health and safety are often not in place, and much of the property that is in place is uninsured. Thus, a 7.0 earthquake hits Haiti in 2010, and more than 222,000 die. A 9.0 earthquake/tsunami combination hits Japan in 2011–and remember, earthquake magnitudes are measured on a base-10 exponential scale, so a 9.0 earthquake has 100 times the shaking power of a 7.0 quake–and less than one-tenth as many people die as in Haiti.
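To make the magnitude arithmetic explicit (standard seismology, not a calculation from the Sigma report): magnitude is a base-10 logarithm of ground-motion amplitude, so the ratio of shaking between the two quakes is

\[ \frac{A_{9.0}}{A_{7.0}} = 10^{9.0-7.0} = 10^{2} = 100, \]

and because the energy released scales roughly as 10^{1.5M}, the gap in energy is closer to a factor of 1,000.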

Natural disasters will never go away, but with well-chosen advance planning, their costs to life and property can be dramatically reduced, even (or perhaps especially) in low-income countries. For an overview of some economic thinking in this area, a starting point is my post on “Economics and Natural Disasters,” published November 2, 2012, in the aftermath of Hurricane Sandy.

How Milton Friedman Helped Invent Income Tax Withholding

[In commemoration of US federal income taxes being due today, April 15, here’s a repeat of a post originally published April 12, 2014, about the beginnings of the practice of having income taxes withheld in advance from your paycheck.]

In one of the great ironies of economic history, Milton Friedman–known for his pro-market, limited-government views–helped to invent government withholding of income tax. It happened early in his career, when he was working for the U.S. government during World War II. Of course, the IRS opposed the idea at the time as impractical. Friedman summarized the story in a 1995 interview with Brian Doherty published in Reason magazine. Here it is:

“I was an employee at the Treasury Department. We were in a wartime situation. How do you raise the enormous amount of taxes you need for wartime? We were all in favor of cutting inflation. I wasn’t as sophisticated about how to do it then as I would be now, but there’s no doubt that one of the ways to avoid inflation was to finance as large a fraction of current spending with tax money as possible.

In World War I, a very small fraction of the total war expenditure was financed by taxes, so we had a doubling of prices during the war and after the war. At the outbreak of World War II, the Treasury was determined not to make the same mistake again.

You could not do that during wartime or peacetime without withholding. And so people at the Treasury tax research department, where I was working, investigated various methods of withholding. I was one of the small technical group that worked on developing it.

One of the major opponents of the idea was the IRS. Because every organization knows that the only way you can do anything is the way they’ve always been doing it. This was something new, and they kept telling us how impossible it was. It was a very interesting and very challenging intellectual task. I played a significant role, no question about it, in introducing withholding. I think it’s a great mistake for peacetime, but in 1941-43, all of us were concentrating on the war.

I have no apologies for it, but I really wish we hadn’t found it necessary and I wish there were some way of abolishing withholding now.”

Tax Expenditures

[In commemoration of US federal income taxes being due today, April 15, here is an updated version of a post that appeared on April 15, 2013, with the table and numbers updated to reflect the information in the proposed federal budget released by the White House Office of Management and Budget in February of this year.] 

In the lingo of government budgets, a “tax expenditure” is a provision of the tax code that looks like government spending: that is, it takes tax money that the government would otherwise have collected and directs it toward some social priority. Each year, the Analytical Perspectives volume that is published with the president’s proposed budget has a chapter on tax expenditures.

Here’s a list of the most expensive tax expenditures, although you probably need to expand the picture to read it. The provisions are ranked by the amount that they will reduce government revenues over the next five years. The list includes all provisions that are projected to reduce tax revenue by at least $10 billion in 2015.

Here are some reactions:

1) The monetary amounts here are large. Any analysis of tax expenditures is always sprinkled with warnings that you can’t just add up the revenue costs, because a number of these provisions interact with each other in different ways. With that warning duly noted, I’ll just point out that the list of items here would add up to about $1 trillion in 2014.

2) It is not a coincidence that certain areas of the economy that get enormous tax expenditures have also been trouble spots. For example, surely one reason that health costs have risen so far, so fast, relates to the top item on this list: the fact that employer contributions to health insurance and medical care costs are not taxed as income. If they were taxed as income, and the government collected an additional $206 billion in revenue, my guess is that such plans would be less lucrative. Similarly, one of the reasons that Americans spend so much on housing is the second item on the list: that mortgage interest on owner-occupied housing is deductible from taxes. Without this deductibility, the housing price bubble of the mid-2000s would have been less likely to inflate. Just for the record, I have nothing personal against either health care or homeownership! Indeed, it’s easy to come up with plausible justifications for many of the items on this list. But when activities get special tax treatment, there are consequences.

3) Most of these tax expenditure provisions have their greatest effect for those with higher levels of income. For example, those with lower income levels who don’t itemize deductions on their taxes get no benefit from the deductibility of mortgage interest or charitable contributions or state and local taxes. Those who live in more expensive houses, and occupy higher income tax brackets, get more benefit from the deductibility of mortgage interest. Those in higher tax brackets also get more benefit when employer-paid health and pension benefits are not counted as income.

4) These tax expenditures offer one possible mechanism to ease America’s budget and economic woes, as I have argued on this blog before (for example, here and here). Cut a deal to scale back on tax expenditures. Use the funds raised for some combination of lower marginal tax rates and deficit reduction. Such a deal could be beneficial for addressing the budget deficit, encouraging economic growth, raising tax revenues collected from those with high income levels, and reducing tax-induced distortions in the economy. You may say I’m a dreamer, but I’m not the only one. After all, a bipartisan deal to broaden the tax base and cut marginal rates was passed in 1986, when the president and the Senate were led by one party while the House of Representatives was led by the other.

When Government Pre-Fills Income Tax Returns

[In commemoration of US federal income taxes being due today, April 15, here’s a repeat of a post from April 15, 2014, on the subject of government filling out your taxes for you.]

As Americans hit that annual April 15 deadline for filing income tax returns, they may wish to contemplate how it’s done in Denmark. Since 2008, the Danish government has sent taxpayers a tax assessment notice: that is, either the refund you can receive or the amount you owe. It includes an online link to a website where you can see how the government calculated your taxes. If the underlying information about your financial situation is incorrect, you remain responsible for correcting it. But if you are OK with the calculation, as about 80% of Danish taxpayers are, you send a confirmation note, and either send off a check or wait to receive one.

This is called a “pre-filled” tax return. As discussed in the OECD report Tax Administration 2013: Comparative Information on OECD and Other Advanced and Emerging Economies: “One of the more significant developments in tax return process design and the use of technology by revenue bodies over the last decade or so concerns the emergence of systems of pre-filled tax returns for the PIT [personal income tax].” After all, most high-income governments already have data from employers on wages paid and taxes withheld, as well as data from financial institutions on interest paid. For a considerable number of taxpayers, that’s pretty much all the third-party information that’s needed to calculate their taxes. The OECD reports:

“Seven revenue bodies (i.e. Chile, Denmark, Finland, Malta, New Zealand, Norway, and Sweden) provide a capability that is able to generate at year-end a fully completed tax return (or its equivalent) in electronic and/or paper form for the majority of taxpayers required to file tax returns while three bodies (i.e. Singapore, South Africa, Spain, and Turkey) achieved this outcome in 2011 for between 30-50% of their personal taxpayers. [And yes, I count four countries in this category, not three, but so it goes.] In addition to the countries mentioned, substantial use of pre-filling to partially complete tax returns was reported by seven other revenue bodies — Australia, Estonia, France, Hong Kong, Iceland, Italy, Lithuania, and Portugal. [And yes, I count eight countries in this category, not seven, but so it goes.] Overall, almost half of surveyed revenue bodies reported some use of prefilling …”

For the United States, the OECD report notes that in 2011, zero percent of returns were pre-filled. Could pre-filling work in the U.S.? Austan Goolsbee provided a detailed proposal for how pre-filling might work for the United States in a July 2006 paper, “The Simple Return: Reducing America’s Tax Burden Through Return-Free Filing.” He wrote:

“Around two-thirds of taxpayers take only the standard deduction and do not itemize. Frequently, all of their income is solely from wages from one employer and interest income from one bank. For almost all of these people, the IRS already receives information about each of their sources of income directly from their employers and banks. The IRS then asks these same people to spend time gathering documents and filling out tax forms, or to spend money paying tax preparers to do it. In essence, these taxpayers are just copying into a tax return information that the IRS already receives independently. The Simple Return would have the IRS take the information about income directly from the employers and banks and, if the person’s tax status were simple enough, send that taxpayer a return prefilled with the information. The program would be voluntary. Anyone who preferred to fill out his own tax form, or to pay a tax preparer to do it, would just throw the Simple Return away and file his taxes the way he does now. For the millions of taxpayers who could use the Simple Return, however, filing a tax return would entail nothing more than checking the numbers, signing the return, and then either sending a check or getting a refund. … The Simple Return might apply to as many as 40 percent of Americans, for whom it could save up to 225 million hours of time and more than $2 billion a year in tax preparation fees. Converting the time savings into a monetary value by multiplying the hours saved by the wage rates of typical taxpayers, the Simple Return system would be the equivalent of reducing the tax burden for this group by about $44 billion over ten years.”
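Goolsbee’s $44 billion figure is straightforward back-of-the-envelope arithmetic. Here is a minimal sketch; the wage rate is a hypothetical assumption I chose so the pieces line up, not a parameter taken from his paper:

```python
# Back-of-the-envelope version of the Simple Return benefit calculation.
hours_saved_per_year = 225e6  # "up to 225 million hours of time"
prep_fees_per_year = 2e9      # "more than $2 billion a year" in preparation fees
assumed_wage = 10.70          # dollars/hour -- hypothetical typical-taxpayer wage

annual_benefit = hours_saved_per_year * assumed_wage + prep_fees_per_year
print(f"annual benefit: ${annual_benefit / 1e9:.1f} billion")        # ~$4.4 billion
print(f"ten-year benefit: ${10 * annual_benefit / 1e9:.0f} billion") # ~$44 billion
```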

Most of this benefit would flow to those with lower income levels. The IRS would save money, too, from not having to deal with as many incomplete, erroneous, or nonexistent forms.

For the U.S., the main practical difficulty that prevents a move to pre-filling is that under present arrangements, the IRS doesn’t get the information about wages and interest payments from the previous year quickly enough to pre-fill income tax forms, send them out, and get answers back from people by the traditional April 15 deadline. The 2013 report of the National Taxpayer Advocate has some discussion related to these issues in Section 5 of Volume 2. The report does not recommend that the IRS develop pre-filled returns. But it does advocate the expansion of “upfront matching,” which means that the IRS should develop a capability to tell taxpayers in advance, before they file their return, what third parties are reporting to the IRS about wages, interest, and even matters like mortgage interest or state and local taxes paid. If taxpayers could use this information when filling out their taxes in the first place, then at a minimum, the number of errors in tax returns could be substantially reduced. And for those with the simplest kinds of tax returns, the cost and paperwork burden of doing their taxes could be substantially reduced.

Moore\’s Law at 50

So many important aspects of the US and world economy turn on developments in information and communications technology and their effects. These technologies have been driving productivity growth, but will they keep doing so? These technologies have been one factor behind the rising inequality of incomes, as many middle managers and clerical workers found themselves displaced by information technology, while a number of high-end workers found that these technologies magnified their output. Many other technological changes–like the smartphone, medical imaging technologies, decoding the human genome, or various developments in nanotechnology–are only possible because of a high volume of cheap computing power. Information technology is part of what has made the financial sector larger, as the technologies have been used for managing (and mismanaging) risks and returns in ways barely dreamed of before. The trends toward globalization and outsourcing have gotten a large boost because information technology made them easier.

In turn, the driving force behind information and communications technology has been Moore’s law, which can be understood as the proposition that the number of components packed onto a computer chip will double every two years, implying a sharp fall in the costs and a rise in the capabilities of information technology. But the capability of making transistors ever smaller, at least with current technology, is beginning to run into physical limits. IEEE Spectrum has published a “Special Report: 50 Years of Moore’s Law,” with a selection of a dozen short articles looking back at Moore’s original formulation of the law, how it has developed over time, and prospects for the law continuing.
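Just to put that compounding in perspective, here is a stylized calculation of my own (using the two-year doubling stated above, not data from the IEEE report):

```python
# Stylized Moore's-law compounding: components per chip doubling every two years.
years = 50
doubling_period = 2  # years per doubling
doublings = years / doubling_period
growth_factor = 2 ** doublings
print(f"{doublings:.0f} doublings over {years} years -> {growth_factor:,.0f}x")
# Output: 25 doublings over 50 years -> 33,554,432x
```

Here are some highlights.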

It’s very hard to get an intuitive sense of the exponential power of Moore’s law, but Dan Hutcheson takes a shot at it with a few well-chosen sentences and a figure. He writes:

In 2014, semiconductor production facilities made some 250 billion billion (250 × 10^18) transistors. This was, literally, production on an astronomical scale. Every second of that year, on average, 8 trillion transistors were produced. That figure is about 25 times the number of stars in the Milky Way and some 75 times the number of galaxies in the known universe. The rate of growth has also been extraordinary. More transistors were made in 2014 than in all the years prior to 2011.
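The per-second figure checks out with simple division (a sanity check of my own, not part of the quoted passage):

```python
# 250 billion billion transistors per year, expressed per second.
transistors_per_year = 250e18
seconds_per_year = 365.25 * 24 * 3600
print(f"{transistors_per_year / seconds_per_year:.2e} transistors per second")
# Output: 7.92e+12 -- roughly the 8 trillion per second Hutcheson cites
```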

Here’s a figure from Hutcheson showing the trends of semiconductor output and price over time. Notice that both axes are measured on logarithmic scales: that is, they rise by powers of 10. The price of a transistor was more than a dollar back in the 1950s, and now it’s a billionth of a penny.

[Figure: transistors by the numbers]

The engineering project of making the components on a computer chip smaller and smaller is beginning to get near some physical limits. What might happen next?

Chris Mack makes the case that Moore’s law is not a fact of nature; instead, it’s the result of competition among chip-makers, who viewed it as the baseline for their technological progress and thus set their budgets for R&D and investment to keep up this pace. He argues that as technological constraints begin to bind, the next step will be combining capabilities on a chip.

I would argue that nothing about Moore’s Law was inevitable. Instead, it’s a testament to hard work, human ingenuity, and the incentives of a free market. Moore’s prediction may have started out as a fairly simple observation of a young industry. But over time it became an expectation and self-fulfilling prophecy—an ongoing act of creation by engineers and companies that saw the benefits of Moore’s Law and did their best to keep it going, or else risk falling behind the competition. … 

Going forward, innovations in semiconductors will continue, but they won’t systematically lower transistor costs. Instead, progress will be defined by new forms of integration: gathering together disparate capabilities on a single chip to lower the system cost. This might sound a lot like the Moore’s Law 1.0 era, but in this case, we’re not looking at combining different pieces of logic into one, bigger chip. Rather, we’re talking about uniting the non-logic functions that have historically stayed separate from our silicon chips.

An early example of this is the modern cellphone camera, which incorporates an image sensor directly onto a digital signal processor using large vertical lines of copper wiring called through-silicon vias. But other examples will follow. Chip designers have just begun exploring how to integrate microelectromechanical systems, which can be used to make tiny accelerometers, gyroscopes, and even relay logic. The same goes for microfluidic sensors, which can be used to perform biological assays and environmental tests.

Andrew Huang makes the intriguing claim that a slowdown in Moore’s law might be useful for other sources of productivity growth. He argues that when the power of information technology is increasing so quickly, there is an understandably heavy focus on adapting to these rapid gains. But if gains in raw information processing slow down, there would be room for more focus on making the devices that use information technology cheaper to produce, easier to use, and cost-effective in many ways.

Jonathan Koomey and Samuel Naffziger point out that computing power has become so cheap that we often aren’t using what we’ve got–which suggests the possibility of efficiency gains in energy use and computer utilization:

Today, most computers run at peak output only a small fraction of the time (a couple of exceptions being high-performance supercomputers and Bitcoin miners). Mobile devices such as smartphones and notebook computers generally operate at their computational peak less than 1 percent of the time based on common industry measurements. Enterprise data servers spend less than 10 percent of the year operating at their peak. Even computers used to provide cloud-based Internet services operate at full blast less than half the time.

Final note: I’ve written about Moore’s law a couple of times previously on this blog, including “Checkerboard Puzzle, Moore’s Law, and Growth Prospects” (February 4, 2013) and “Moore’s Law: At Least a Little While Longer” (February 18, 2014). These posts tend to emphasize that Moore’s law may still be good for a few more doublings. But at that point, the course of technological progress in information technology, for better or worse, will take some new turns.

Documenting the Investment Slowdown

There’s a bubbling controversy over the “secular stagnation” hypothesis that investment levels are not only low, but likely to remain low. I’ve posted some thoughts on the controversy here and there.

I’m sure I’ll return to the disputes over secular stagnation before too long, but for now, I just want to lay out some of the evidence documenting the investment slowdown. Such a slowdown is troublesome both for short-term reasons, because demand for investment spending is part of what should be driving a growing economy forward in the short run, and for long-term reasons, because investment helps to build productivity growth for increasing the standard of living in the future. The IMF asks “Private Investment: What’s the Hold-Up?” in Chapter 4 of the World Economic Outlook report published in April 2015. Here’s a summary of some of the IMF conclusions:

  • The sharp contraction in private investment during the crisis, and the subsequent weak recovery, have primarily been a phenomenon of the advanced economies. For these economies, private investment has declined by an average of 25 percent since the crisis compared with precrisis forecasts, and there has been little recovery. In contrast, private investment in emerging market and developing economies has gradually slowed in recent years, following a boom in the early to mid-2000s.
  • The investment slump in the advanced economies has been broad based. Though the contraction has been sharpest in the private residential (housing) sector, nonresidential (business) investment—which is a much larger share of total investment—accounts for the bulk (more than two-thirds) of the slump. …
  •  The overall weakness in economic activity since the crisis appears to be the primary restraint on business investment in the advanced economies. In surveys, businesses often cite low demand as the dominant factor. Historical precedent indicates that business investment has deviated little, if at all, from what could be expected given the weakness in economic activity in recent years. … Although the proximate cause of lower firm investment appears to be weak economic activity, this itself is due to many factors. …
  • Beyond weak economic activity, there is some evidence that financial constraints and policy uncertainty play an independent role in retarding investment in some economies, including euro area economies with high borrowing spreads during the 2010–11 sovereign debt crisis. Additional evidence comes from the chapter’s firm-level analysis. In particular, firms in sectors that rely more on external funds, such as pharmaceuticals, have seen a larger fall in investment than other firms since the crisis. This finding is consistent with the view that a weak financial system and weak firm balance sheets have constrained investment. 
I’ll just add a couple of figures that caught my eye. Here’s the breakdown of how much investment levels have fallen below previous trend in the last six years across advanced economies. The blue bars show the decline relative to forecasts made in spring 2004; the red dot shows the decline relative to forecasts made in spring 2007. These don’t differ by much, which tells you that the spring 2004 forecasts of investment were still looking fairly good up through 2007. The dropoff in investment for the US economy is in the middle of the pack.
It’s also interesting to note that the cost of equipment has been falling over time, driven in substantial part by the fact that a lot of business equipment involves a large dose of computing power, and the costs of computing power have been falling over time. Indeed, one of the arguments related to secular stagnation is that the decline in investment spending might in part be driven by the fact that investment equipment is getting cheaper over time, so firms don’t need to buy as much of it. An alternative view might hold that as the price of business equipment falls, firms should be eager to purchase more of it. Again, I won’t dig into those arguments here, but the pattern itself is food for thought.

Camera Sales — When Smartphones are Included

Here’s a figure showing annual sales of cameras over time, with smartphones included. The gray bars are analog cameras (CIPA stands for the Camera & Imaging Products Association, which collects this data). Compact digital cameras are the blue bars. Smaller categories of digital cameras include D-SLR, which stands for “digital single-lens reflex” camera, and mirrorless, which are cameras with interchangeable lenses.

Specialists still need specialized cameras, but for basic personal and business use, the camera as a separate tool is dying. I suspect that extremely cheap and easy imaging, along with technology that can recognize and “read” those images, will change the way we manage our personal memories, our sharing of experiences with others, our record-keeping, and all the paperwork of society in dramatic and often unexpected fashions.

Homage: This figure appears in a blog post by Michael Zhang at the PetaPixel website. He credits a photographer named Sven Skafisk for collecting the data on smartphone sales and adding it to a previously existing figure. I learned about the post from a link at the Instapundit website.

Snapshot of Trade Imbalances: The Rise of Germany

In 2013, China’s economy was much bigger than Germany’s, but Germany’s trade surplus–in absolute size–was larger than China’s. That year, China’s GDP was $9.2 trillion, about 2½ times as large as Germany’s GDP of $3.7 trillion. But Germany’s trade surplus was 7.8% of GDP, compared with a trade surplus of 1.9% of GDP for China. As a result, Germany’s trade surplus was $274 billion compared with China’s trade surplus of $183 billion.

As this example illustrates, some of what you may think you know about the patterns of trade surpluses and deficits has been changing during the last few years, so here are a few snapshots to bring yourself up to date. Here’s an edited version (I left out a couple of columns) of a table from Chapter 4 of the IMF’s World Economic Outlook report for October 2014.

Here are some of the changes that jump out:

1) The U.S. economy still has by far the world’s biggest trade deficits in absolute size, but by 2013 they had dropped dramatically compared to 2006.

2) Back in 2006, the huge trade deficits as a share of GDP in Spain, Greece, and Portugal were all signs of the economic troubles to come in the eurozone: that is, their imports were exceeding exports by such a huge margin–with the extra imports being financed by inflows of foreign capital–that a hard landing was very likely. By 2013, on the other hand, Spain, Greece, and Portugal no longer appear among the 10 largest trade deficits.

3) In 2006, the three biggest trade surplus economies by a considerable margin were China, Germany, and Japan. China’s trade surplus then fell sharply as a share of GDP, although because of China’s continued rapid economic growth, it stayed large in absolute size. Japan’s long-standing trade surplus actually disappeared back in 2011, and since then Japan’s economy has been running trade deficits. However, Germany’s trade surplus rose substantially from 2006 to 2013, both as a share of GDP and in absolute size.

4) Many of the big trade surplus countries in both years are substantial oil exporters: Saudi Arabia, Russia, Netherlands, Kuwait, Qatar, and UAE.  Back in 2013, the price of oil was still high.

Let’s take a look at four of the world’s major players in international trade: the US, China, Germany, and Japan. Here’s a figure showing their trade imbalances (as measured by the current account balance) since 1990.

Again, a few observations:

1) The persistent and large US trade deficits (blue line) during this time won’t surprise anyone who has been paying attention, but there is a recent drop in the size of these trade deficits.

2) China (purple line) actually had fairly modest trade surpluses through the 1990s. They took off in the mid-2000s, and now are back in the modest range. My belief, explained at greater length here, is that when China joined the World Trade Organization in 2001, the resulting surge of exports surprised everyone–even those in China. For several years now, China’s government has had a publicly announced goal of lower trade surpluses, which has in fact been happening.

3) Japan (green line) ran large trade surpluses during its years of rapid economic growth in the 1960s and 1970s, and it also ran sizeable trade surpluses during most of its period of economic stagnation in the last quarter-century. This fact helps to illustrate that trade surpluses are surely no guarantee of economic health, and can in fact be a sign of economic weakness.

4) Germany (red line) stands out as the country where trade surpluses have been on the rise since the euro came into widespread use in the early 2000s.

This post isn’t the place to rehearse the economics of trade imbalances (for starting points, see here and here). For now, suffice it to say that there is no reason why every country in the world should hope to have a trade balance of zero every year, but problems can arise when very large surpluses or deficits are a signal of economic weakness. For example, in the eurozone and in Germany (as in Japan for much of the period since the early 1990s), large trade surpluses are co-existing with slow or no increase in domestic demand. Here’s how the US Treasury phrased the situation in its Report to Congress on International Economic and Exchange Rate Policies on October 14, 2014.

The euro area’s overall current account, which was close to balance in 2009-2011, shifted to a surplus of 2.4 percent of GDP in 2013. The euro area’s surplus averaged about 2.3 percent of GDP in the first half of 2014. Germany’s current account surplus was 7.1 percent of GDP in the first half of 2014, up marginally from the second half 2013 surplus of 6.8 percent … High surpluses in the euro area have persisted amid exceptionally weak domestic demand growth. Real domestic demand growth was positive in only two quarters in the past three years in the euro area, rendering the region reliant on demand emanating from outside of Europe for economic growth. …

Although there has been a rebound in exports in some European peripheral countries, the adjustment process within the euro area would be facilitated if countries with large and persistent surpluses took stronger action to boost domestic demand growth. For example, in Germany, domestic demand has been persistently subdued. German domestic demand growth picked up in the first quarter of 2014, but it flat-lined in the second quarter, leaving domestic demand just 0.9 percent larger in the first half of 2014 than in the second half of 2013. Weakness in investment has been particularly notable, with gross investment contributing negatively to growth in 2012 and remaining effectively flat in 2013. … In 2013, the European Union’s (EU) annual Macroeconomic Imbalances Procedure, developed as part of the EU’s increased focus on surveillance, identified Germany’s current account surplus as an imbalance that requires monitoring and policy action. Notably, the EU stated that, given the size of the German economy, action was particularly important to reduce the risk of adverse effects on the functioning of the euro area.

A few years back, the major risks of global imbalances were the enormous US trade deficit and the enormous Chinese trade surplus. Both of those risks have ebbed. But the risks posed by Germany’s enormous and growing trade surplus remain.

Better Batteries?

Back in the 1980s, the New Republic magazine ran a “most boring headline” contest. The winner was “Worthwhile Canadian Initiative.” The title of this post, “Better Batteries,” might not have won that contest, but surely it could have placed in the top ten. However, improvements in battery technology matter immensely. Information technology runs on electricity, and the capabilities of that technology are in many circumstances determined by whether a device can be disconnected from an electrical cord–at least for a time. Advances in consumer electronics (smartphones, tablets, computers, games), industrial electronics (robots), electric cars, and even making more widespread use of intermittent power sources like wind and solar all depend in various ways on the cost and effectiveness of rechargeable batteries.

This concern comes up regularly in news stories. For example, here’s a story from CNET by Ian Sherr and Shara Tibken from December 2, 2014, called “It’s 2014. Why is my battery stuck in the ’90s?” The subhead reads “The devices we all rely on continue to evolve radically. So why has the battery industry failed? Here’s how you can take charge.” They write:

“A new smartwatch has more computing power than the Apollo moon landing spacecraft. Batteries are a different story. Even though consumer electronics makers, from Apple to Samsung, pour millions of research dollars into eking out more battery life for devices, the technology isn’t expected to advance much in the next few years. … Why battery tech has stagnated is a topic of debate among researchers, many of whom claim we’re reaching the limits of what science can muster.”

Similarly, an article by Christopher Mims in the Wall Street Journal on October 5, 2014, was titled “Tech World Vexed by Slow Progress on Batteries.” Mims starts this way:

“There is no Moore’s law for batteries. That is, while the computing power of microchips doubles every 18 months, the capacity of the batteries on which ever more of our gadgets depend exhibits no such exponential growth. In a good year, the capacity of the best batteries in our mobile phones, tablets and notebook computers—and increasingly, in our cars and household gadgets—increases just a few percent.”
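The gap Mims describes is easy to quantify (my arithmetic, not his): a doubling every 18 months corresponds to an annual growth factor of

\[ 2^{12/18} = 2^{2/3} \approx 1.59, \]

that is, roughly 59 percent per year for chip capability, against the few percent per year he cites for battery capacity.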

The reason that the battery in your smartphone or laptop lasts longer than it did a decade ago is not primarily because the battery is better, but because of innovations that allow the device to run while drawing less power. The most popular rechargeable battery technology of the last couple of decades has been lithium-ion. There is lots of research on battery technology, from improvements in lithium-ion to alternatives like supercapacitors. I’m just a casual onlooker to this research effort. But after a few decades of reading short articles and press releases about how this or that approach is sure to revolutionize batteries, I’ve grown cynical about the prospects for extraordinary order-of-magnitude progress.

Thus, I was startled and intrigued by a short article by Björn Nykvist and Måns Nilsson called “Rapidly falling costs of battery packs for electric vehicles,” which appears in Nature Climate Change, published online on March 23, 2015 (v. 5, pp. 329–332). Only the figures from the article are freely available online, although readers may be able to get access through a library subscription.

For electric cars to be truly cost-competitive with gas-fueled vehicles, battery costs need to drop dramatically. The rule of thumb has been that the cost of the battery pack in an electric car needs to drop to $150 per kilowatt-hour or less. A few years back, it was standard to read that battery packs in electric cars were costing $700 per kilowatt-hour or more. Given the historically slow pace of progress in battery technology, it looked as if achieving these cost savings might be three or four decades away.

However, Nykvist and Nilsson argue that in the last few years, progress toward better batteries has been much more rapid. They collect 80 different estimates of battery-pack costs for electric cars from 2007-2014, and the pattern seems to be that costs are falling much more quickly than expected. As they write: “Cost estimates (N=85) included are from peer reviewed papers in international scientific journals; the most cited grey literature, including estimates by agencies, consultancy and industry analysts; news items of individual accounts from industry representatives and experts; and, finally, some further novel estimates for leading BEV [battery electric vehicle] manufacturers.” Here’s the figure illustrating their estimates:

They argue that the market leaders for electric cars have already reached a cost of $300 per kilowatt-hour–that is, they aren’t just offering another set of predictions for how batteries will improve, but arguing that batteries have already improved. Further, they note that global sales of battery electric vehicles are doubling annually. This sharp rise in volume, it seems to me, makes it worthwhile to push harder on the R&D specifically related to these kinds of batteries, including “anode and cathode materials, separator stability and thickness, and electrolyte composition.” They argue that when these factors are combined with efficiencies from economies of scale, annual productivity gains of 12-14% are conceivable in the next few years, although they view 8% annual gains as more likely.
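A rough calculation shows what that pace implies (my own arithmetic, assuming a steady 8% annual cost decline starting from the $300 per kilowatt-hour level; the paper itself does not present this exact computation):

```python
import math

# Years for battery-pack costs to fall from $300/kWh to the $150/kWh
# rule-of-thumb threshold, assuming a steady 8% annual cost decline.
start_cost, target_cost = 300.0, 150.0  # dollars per kilowatt-hour
annual_decline = 0.08                   # assumed pace; 12-14% would be faster
years = math.log(target_cost / start_cost) / math.log(1 - annual_decline)
print(f"{years:.1f} years")  # ~8.3 years
```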

On this trajectory, nonsubsidized electric vehicles would be commercially viable in about a decade. If gasoline prices rise again in a sustained way, it could be sooner. Moreover, it seems very likely to me that improvements in batteries for electric cars will at a minimum spill over to robotics applications, and probably to other uses of batteries as well–like the ability to charge your car battery using solar panels or wind power. The usual caveats apply. Past performance is no guarantee of future results. If a battery is charged by electricity generated at a coal-burning or oil-burning generator, the environmental gains may not be large. The materials used to make batteries pose their own environmental risks. But still, it’s the best news about better batteries I’ve seen in awhile.

P.S. After this post was published, several regular readers sent along recent articles about new advances in battery technology. For example, Stanford researchers recently announced an aluminum-ion battery. Here’s a newspaper article mentioning the aluminum battery, along with research on nanotube-based batteries, sulfur-based batteries, metal-air batteries, and solid-state batteries. Maybe some of these will pan out. But as mentioned in the post, I’ve been following battery technology in a loose way since the 1980s, and there has been a steady stream of announcements like these that, for one reason or another, turned out not to be commercially viable in the short term or medium term. Maybe this time will be different?

Dani Rodrik on Growth, Development, and Modelling

Aaron Steelman’s interview with the preternaturally thoughtful Dani Rodrik appears in Econ Focus, published by the Federal Reserve Bank of Richmond (Third Quarter 2014). Here are a few excerpts:

On making policy in a second-best world:

[W]e’re always in a second-best world. We’re always forced to think about reform strategies that will work in the world as we find it, not in the world we would like to have. Suppose you’re in a setting where the rule of law and contract enforcement are really weak. And you realize that they don’t change overnight. Are you better off promoting the set of policies that presume that rule of law and contract enforcement will take care of themselves, or are you better off recommending a strategy that optimizes against the background of a weak rule of law? And I say that the evidence is that you do much better when you do the second.

The best example is China. Its growth experience is full of these second-best strategies, which take into account that they have, in many areas, weak institutions and a weak judicial system, and therefore they couldn’t move directly to the kinds of property rights we have in Europe and the United States. And yet they’ve managed to provide incentives and generate export-orientation in ways that are very different from how we would have said they ought to have done it, which would have been to simply open up their economy or privatize their enterprises. There, second-best strategies have been very effective. The same can be said of Vietnam, say, or farther afield, a country like Mauritius. …

The point about second-best outcomes is just a warning that you better do your homework and make sure that the second-best interactions are the wind behind you rather than the wind that’ll be slowing you down.

I can give you examples where I think the standard recipe worked very well. Poland in 1990 did the most amazing cold turkey reforms. It opened up its economy, removed its subsidies, and removed price controls, all virtually overnight. And they did rather well, but there were a number of things that were specific to the Polish context that supported that — it had membership in the European Union as a carrot, and it received a stabilization fund to underpin the zloty. When Russia tried to do the same, it didn’t work, because there were many things that were missing in that context compared to the Polish.

On nation-states in a global economy:

Now, one can envisage a world economy where those institutions are provided not by the nation-state but by some global institutions. Conceptually, there is no reason why we can’t have those, in which case the nation-state might become no more important than the state governments of Kansas or Nebraska are to the U.S. economy. But unless we have something like that, all we have is the nation-state. So it’s very important for the health of markets — national and global — that the nation-state be healthy, that it be able to provide those functions. That necessarily means that economic globalization is something we can push only so far, because if we push it so far that you weaken the nation-state, it cannot provide these functions anymore — in fact you are undermining the stability and function of markets as well.

I think the financial crisis has made us see this a little bit better at least in the context of financial regulation, which is moving in a much more robust way into the national domain. But the lesson extends beyond financial regulation to many of the market-supporting functions of the nation-state.

On how economic science works and the role of models:

The root of it is the problem that the profession has more or less the wrong idea about how economics as a science works. If you ask most economists, “What kind of a science is economics?,” they will give a response that approximates natural sciences like physics, which is that we develop hypotheses and then we test them, we throw away those that are rejected, we keep those that cannot be rejected, and then we refine our hypotheses and move in their direction.

This is not how economics works — with newer and better models succeeding models that are older and worse in the sense of being empirically less relevant. The way we actually increase our understanding of the world is by expanding our collection of models. We don’t throw out models, we add to them; the library of models expands. Social reality is very different from natural reality in that it is not fixed; it varies across time and place. The way that an economy works in the Congo is very different from the way that it works in the United States. So the best that we can do as economists is try to understand social reality one model at a time. Each model identifies one particular salient causal mechanism, and that salient effect might be very strong in the Congo but it may be very weak at any point in time in the United States, where we may need to apply a different model.

If you look at the progress of economics all the way from perfect competition to imperfect competition, from incomplete information to behavioral economics, at every step we have said, “Here are some additional realities for which we need newer models.” Behavioral economics doesn’t mean that we want to ignore models in which people are rational. There are plenty of settings where presuming people are behaving rationally is still the right way to go.

When you look at economics in that way, as a collection of models, then what does it mean to say that economics knows something about the world? Economists know how to think about various causal mechanisms that operate as part of social reality, but what they’re very bad at in practice is navigating among the models describing them. How exactly do I pick the right model for a given setting? This is a craft because the evidence never settles it in real time. We have these periods of fads where we say the New Keynesian or the Neoclassical model explains everything. We lose sight of the fact that models are highly context-specific and we need to be syncretic, simultaneously carrying many models in our mind.