How Has Structured Finance Evolved?

As Mahmoud Elamin and William Bednar of the Cleveland Fed point out: "Structured finance has been vilified as the culprit behind the worst recession since the Great Depression. Every aspect of its design has been disparaged: faulty underlying loans, bad incentives for originators, dubious AAA ratings and mispriced risks." In the March 2012 issue of the Cleveland Fed's Economic Trends, they update the story by asking: "How Is Structured Finance Doing?"

Start by defining terms: "Structured finance securities are debt instruments collateralized by a securitization pool of loans. The pool's cash inflow supports the cash outflow to pay the securities off. The securities are divided into multiple tranches characterized by their seniority. The most senior tranche is paid first; the second senior gets paid only after the first senior is paid, and so on. Investors buy the tranche that best fits their risk appetites. We look at three products that fall under the general heading of structured finance: mortgage-backed securities (MBS), asset-backed securities (ABS), and collateralized debt obligations (CDO). MBS are backed by mortgages, ABS are backed by assets such as credit card loans, auto loans, student loans, and the like, while CDO are backed by investment grade loans, high-yield loans, other structured finance products, and the like."
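The seniority structure in this definition can be illustrated with a simple payment waterfall. The sketch below is purely illustrative: the tranche names, sizes, and cash-flow figure are hypothetical, not drawn from any actual deal.

```python
def pay_waterfall(cash, tranches):
    """Distribute a pool's cash inflow to tranches in order of seniority.

    tranches: list of (name, amount_owed) pairs, most senior first.
    Each tranche is paid in full before the next one receives anything,
    so any shortfall falls on the most junior tranches.
    """
    payments = {}
    for name, owed in tranches:
        paid = min(cash, owed)
        payments[name] = paid
        cash -= paid
    return payments

# Hypothetical pool: $100 owed in total, but only $80 of cash comes in.
tranches = [("senior", 60), ("mezzanine", 25), ("junior", 15)]
print(pay_waterfall(80, tranches))
# {'senior': 60, 'mezzanine': 20, 'junior': 0}
```

The senior tranche is paid in full even though the pool falls $20 short; the junior tranche, which agreed to take the first losses, is wiped out. This ordering is what allowed senior tranches to carry high credit ratings.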

What happened in each of these three categories? In the first category, the mortgage market, the total value of mortgage originations dropped off after about 2003. However, the share of mortgage originations packaged as securities has continued to rise. Here are a couple of illustrative figures.


Why has the share of mortgages packaged as securities continued to rise? Elamin and Bednar name three possible reasons, but don't try to quantify them: a rise in private demand for such instruments, policies of government-sponsored enterprises like Fannie Mae and Freddie Mac, and the Federal Reserve's "quantitative easing" policies, which have involved direct purchases of about $1 trillion in mortgage-backed securities.

The second broad category of securitized finance is asset-backed securities. The biggest categories here are securities backed by auto loans and by credit card loans, with securities backed by student loans as another large category. Issuance of asset-backed securities dropped off by about half after 2006. In addition, the share of total auto-loan debt that is securitized fell from above 40% to 30%, while the share of credit card debt repackaged as asset-backed securities fell from more than 30% to around 15%.

The third category is collateralized debt obligations. This is the category of structured finance most thoroughly implicated in the housing price bubble. Issuance of these securities rose from less than $100 billion in 2003 to about $500 billion in both 2006 and 2007, at the peak of the housing bubble, and has since fallen to near-zero. In addition, these collateralized debt obligations at the peak were largely based on mortgages, especially subprime mortgages. These were the financial instruments that started off with subprime mortgages and then were divided into tranches. The junior tranches agreed to take the first of any losses that arose. Thus, the senior tranches (seemingly protected by the junior tranches) managed to get AAA credit ratings, and so regulators let banks hold these "safe assets." When the housing bubble burst and many of these subprime mortgages went sour, the losses leaked into the banking system. Today, CDOs aren't based on housing; what remains of the market mainly involves securitizing investment-grade bonds and high-yield loans.

Want to Watch Bernanke Lecture?

From a Federal Reserve press release:

"In March 2012, Chairman Ben S. Bernanke will deliver a four-part lecture series about the Federal Reserve and the financial crisis that emerged in 2007. The series begins with a lecture on the origins and missions of central banks, followed by a lecture that will discuss the role and actions of the Federal Reserve in the period after World War II. In the final two lectures, the Chairman will review some of the causes of, and policy responses to, the recent financial crisis, focusing specifically on the actions of the Federal Reserve."

The first lecture is today at 12:45 EST. Here's the schedule for all four lectures. Of course, you don't need to watch live. You can watch later, or wait until a transcript is available. For details, go to the link above. These lectures are being delivered to an undergraduate course at the George Washington University School of Business, so I expect that they will be pedagogical in tone and focus on giving a lot of background information, not on breaking news about imminent changes in monetary policy. But for teachers and students, inside academia and out, it's a chance to hear it all from the horse's mouth.

Lecture 1: Origins and Mission of the Federal Reserve
Watch live on March 20, 2012 12:45 p.m. ET

Lecture 2: The Federal Reserve after World War II
Watch live on March 22, 2012 12:45 p.m. ET

Lecture 3: The Financial Crisis and the Great Recession
Watch live on March 27, 2012 12:45 p.m. ET

Lecture 4: The Aftermath of the Crisis
Watch live on March 29, 2012 12:45 p.m. ET

Public Higher Education Gets Less State and Local Support

The State Higher Education Executive Officers association has published its report on "State Higher Education Finance FY 2011." The basic story is rising enrollments in public institutions of higher education, but falling per-student support.

The blue bars in the figure show educational appropriations for public higher education per full-time equivalent student, adjusted for inflation. The support starts relatively high at $8,156 per student in 1987, sags in the early 1990s to $7,054 in 1993, rises again in the late 1990s and early 2000s to as high as $8,316 in 2001, drops off to $6,875 in 2005, rises to $7,488 in 2008, and has now dropped off to $6,290 in 2011.

Meanwhile, tuition revenue per full-time student is gradually rising. Overall, it rises from $2,422 in 1986 to $4,774 in 2011.

And over these 25 years, the number of full-time equivalent students in public higher education has risen from about 7 million back in 1986 to almost 12 million in 2011.

Put these together, and here's tuition as a share of total public higher education revenue, rising from 23.2% back in 1986 to 43.3% at present.

This pattern may be here to stay. As the report states: "In the past decade these two recessions and the larger macro-economic challenges facing the United States have created what some are calling the “new normal” for state funding for public higher education and other public services. In the “new normal” retirement and health care costs simultaneously drive up the cost of higher education, and compete with education for limited public resources. The “new normal” no longer expects to see a recovery of state support for higher education such as occurred repeatedly in the last half of the 20th century. The “new normal” expects students and their families to continue to make increasingly greater financial sacrifices in order to complete a postsecondary education. The “new normal” expects schools and colleges to find ways of increasing productivity and absorb ever-larger budget cuts, while increasing degree production without, we hope, compromising quality."

I would add only a couple of thoughts:

1) Almost everyone believes, or claims to believe, that the economic future of the United States is intertwined with building greater human capital. But that isn't reflected in our spending choices. I'd be the first to say that spending isn't everything, but it's something! Here's a post from July 19, 2011, on "How the U.S. Has Come Back to the Pack in Higher Education." The U.S. used to be the world leader in share of population going to higher education, but no longer.

2) The main budgetary mechanism for encouraging additional higher education is student loans. This avoids adding to direct spending for higher education, but places a greater share of the risk of not completing a degree, or of the degree not leading to a well-paid job, on the student. Also, public higher education isn't expanding fast enough to absorb those who want to try college, so many of those who receive these loans are headed to the for-profit educational system. Here's a February 23, 2012, post on "For-Profit Higher Education."

Minnesota: Paying More Federal Taxes, Receiving Less Federal Spending

Here's an op-ed column of mine that appeared in the (Minneapolis) Star Tribune on Sunday, March 18:

"Minnesota taxes: More blessed to give?"
"Minnesota pays its fair share — and then some — in federal taxes, while federal spending here is decidedly below average. Why the imbalance?"
Article by: TIMOTHY TAYLOR

The great state of Minnesota rides in a lifeboat with 49 other states, tossed by the wind and waves of global politics and the global economy.
States vary in many ways — population, size of the state economy, age distribution, industry mix, geography. No one should expect that they will all make the same contribution to keeping the lifeboat afloat.
But still, it's eyebrow-raising to discover that Minnesota is one of the states consistently putting a lot more into the federal budget than it gets back. That's the message when you compare federal taxes paid by residents and businesses within each state with federal spending in each state.
The most recent data seems to be for 2009. The U.S. Census Bureau, in its Consolidated Federal Funds report, breaks down domestic federal spending to the state level.
It includes government payments to individuals, procurement, grants, and the salaries and wages of federal employees. Minnesota received $45.7 billion in federal spending in 2009.
On the tax side, the Internal Revenue Service Data Book for 2009 reports that gross federal taxes collected from Minnesota in 2009 were $67.6 billion.
This includes all federal taxes: individual and corporate income taxes; payroll taxes for Social Security and Medicare; and estate, gift and excise taxes. Minnesota has an above-average per capita income, and so it pays more than average in federal taxes.
Do the math: In 2009, Minnesota received about 68 cents in federal spending for every $1 paid in federal taxes. Putting the tax and spending numbers in per-capita terms is especially striking.
For the United States as a whole, federal spending was $10,395 per person in 2009.
For Minnesota, federal spending was $8,676 per person — about 16 percent below the average.
For the United States as a whole, federal taxes paid were $7,690 per person in 2009.
In Minnesota, federal taxes paid were $12,763 per person — about 66 percent above average. (Of course, the U.S. government had an enormous budget deficit in 2009, so the average spending per person far exceeded the average taxes per person.)
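The arithmetic behind these numbers is simple enough to check. Here's a quick sketch using only the totals quoted above; the implied state population is backed out from the per-capita figures rather than taken from Census data.

```python
# 2009 figures quoted above for Minnesota
spending = 45.7e9   # federal spending received in the state
taxes = 67.6e9      # gross federal taxes collected from the state

# Federal spending received per dollar of federal taxes paid
print(f"{spending / taxes:.2f}")        # 0.68 -- about 68 cents per $1

# Implied state population, from spending of $8,676 per person
print(f"{spending / 8676:,.0f}")        # about 5.3 million people
```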
This general pattern has held for a number of years. Analysts at the Tax Foundation calculated that if you rank states on the basis of federal taxes paid per capita, Minnesota ranked from 11th to 15th over the years from 2001 to 2005. But when states are ranked on the basis of federal spending per capita, Minnesota ranked from 45th to 48th.
In the Tax Foundation's combined rankings — that is, federal spending received per dollar of federal taxes paid — Minnesota ranked from 45th to 48th over the 2001-2005 time period.
Writers at the Economist magazine performed similar research last summer. By their calculations, over the 20-year period from 1990 to 2009, the gap between federal taxes paid by Minnesotans and federal spending received by Minnesotans added up to the equivalent of two years' worth of gross state product for Minnesota.
By this measure, Minnesota ranked 49th among the states over this time in federal spending received relative to federal taxes paid, ahead of only Delaware.
What leads to a situation where a state is consistently sending more to the federal government than it is receiving back? Or vice versa?
Along with Minnesota and Delaware, other states with a habit of paying more in federal taxes than they receive in federal spending include New Jersey, Illinois, Connecticut, New York and New Hampshire.
States typically at the other end of the range, with a pattern of more federal spending received than taxes paid, include New Mexico, Mississippi, West Virginia, Alabama, North Dakota and Alaska.
No one explanation applies equally to all of these states, but following are four main factors.
Income. States with a pattern of paying more to the federal government than they receive are all above-average in income. Also, Delaware, New York and Illinois are all places with large numbers of major corporate headquarters, thus boosting the corporate taxes from those states.
Poverty. States with fewer people below the poverty line have less need for federal support through antipoverty programs like food stamps or Medicaid. Minnesota ranks fourth among states for lowest poverty rate. New Hampshire, Delaware, New Jersey and Connecticut all rank in the top 10 for lowest poverty rate. On the other side, Mississippi, New Mexico, Alabama and West Virginia all rank among the top eight in states with the highest poverty levels.
Elderly. A large proportion of elderly means a group whose members typically have lower incomes and are receiving Social Security and Medicare benefits. Minnesota ranks 40th among states in share of population older than 65. West Virginia and North Dakota are in the top five.
Defense. Spending on defense is quite unevenly distributed across states. Some Minnesota companies have defense contracts, but the state does not have large military bases, national research laboratories or the truly enormous defense contractors. Defense spending in 2009 averaged $1,753 per person for the United States as a whole; in Minnesota, federal defense spending averaged $786 per person.
Of course, it would be silly to argue that every state should receive just as much in federal spending as it contributed in tax revenue. The United States is more than the sum of 50 states.
Even if Minnesota doesn't have large military bases and ports, people here benefit from national defense spending.
If someone spends their working life in Minnesota, then retires and draws Social Security in New Mexico, it doesn't seem unfair in the least. States with higher incomes should pay more in taxes, and states with higher poverty should receive more federal support.
Nonetheless, Minnesota's status as a state that consistently pays more to the federal government than it receives should make us all ponder.
Higher federal spending on national defense, antipoverty programs, transportation, health insurance and more will typically lead to a situation in which Minnesota taxpayers will be paying more to support those programs than Minnesotans are going to receive in direct spending.
Federal tax cuts, by the same logic, will tend to disproportionately benefit Minnesota; federal tax increases will disproportionately burden Minnesota.
In a number of cases, a narrowly Minnesota perspective suggests that it would be more in our state\’s collective interest to pursue state-level taxing and spending policies, rather than supporting national policies in which Minnesota\’s federal tax dollars will exceed federal spending within Minnesota.
As an American, I don't advocate a purely Minnesota-centric perspective on federal spending and taxes. But as a Minnesotan, I find myself paraphrasing the sentiments of Tevye from the old musical "Fiddler on the Roof":
"Of course, there's no shame in being consistently near the bottom of state rankings comparing federal spending received to federal taxes paid.
"But it's no great honor, either."
———-
Timothy Taylor is managing editor of the Journal of Economic Perspectives, based at Macalester College in St. Paul. He blogs at conversableeconomist.blogspot.com.

The Food Stamp Explosion

For starters, Food Stamps have a new name. The 2008 farm bill changed the name to the Supplemental Nutrition Assistance Program, or SNAP. But whatever the name, enrollment rose from 17.3 million in 2001 to 46.2 million in October 2011. In the March 2012 issue of Amber Waves, published by the U.S. Department of Agriculture, Margaret Andrews and David Smallwood ask: "What's Behind the Rise in SNAP Participation?"


What's perhaps less expected in the graph is that Food Stamp enrollment rose steadily from 2001 through 2006, even though unemployment rates were low and falling during much of that time. The authors trace much of this change to federal rule changes that made it easier for people to apply, and easier for states to certify to the federal government that the benefits are being targeted. In addition, SNAP benefit levels were increased both in the 2008 farm bill and in the 2009 "stimulus" legislation, making it more attractive to apply. Here's a graph showing the maximum SNAP benefit for a household of four and the average benefit.

When you look at the numbers, Food Stamps play what may be a surprisingly large role in America's social safety net for the poor. Total spending on Food Stamps in 2011 was about $78 billion. According to the Center on Budget and Policy Priorities, "Roughly 93 percent of SNAP benefits go to households with incomes below the poverty line, and 55 percent go to households with incomes below half of the poverty line …"

For comparison, federal expenditures through the Earned Income Tax Credit were about $56 billion in 2011. As another comparison, total spending on Temporary Assistance for Needy Families (TANF), which is what most people mean by "welfare," was about $33 billion in combined federal and state spending in 2010. In many states, SNAP far outstrips TANF in the level of support it provides for low-income families.

Top Marginal Tax Rates: 1958 vs. 2009

Top marginal income tax rates used to be much higher back in the 1950s and 1960s. How much revenue did those higher tax rates actually collect? Daniel Baneman and Jim Nunns address that question in a short report, "Income Tax Paid at Each Tax Rate, 1958-2009," published by the Tax Policy Center last October.

For starters, take a look at the statutory tax brackets for 1958 and 2009. The tax brackets are adjusted for inflation, so the horizontal axis is in constant 2009 dollars. The top statutory tax rate in 2009 was 35%; back in 1958, it was about 90%. Marginal income tax rates are lower across the income distribution in 2009. In addition, the top marginal tax rate kicks in much lower in the income distribution in 2009 than it did in 1958.

How many households actually paid these rates? Here's a figure showing the share of taxpayers facing different marginal tax rates. At the bottom, across this time period, roughly 20% of all tax returns owed no tax, and so faced a marginal tax rate of zero percent. Back in 1958, the most common marginal tax rates faced by taxpayers were in the 16-28% category; since the mid-1980s, the most common rates have been in the 1-16% category. Clearly, a very small proportion of taxpayers actually faced the very highest marginal tax rates back in 1958. It's interesting to note how the share of taxpayers facing higher marginal rates expanded substantially in the 1970s, probably due in large part to "bracket creep": tax brackets at that time didn't increase with the rate of inflation, so as wages were driven up by inflation, taxpayers were pushed into higher tax brackets even though their real income had not increased.
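Bracket creep is easy to see in a small calculation. The two-bracket schedule below is hypothetical, chosen only to illustrate the mechanism; it is not the actual 1970s rate schedule.

```python
def marginal_rate(income, brackets):
    """Return the marginal tax rate on the last dollar of income.

    brackets: list of (threshold, rate) pairs in ascending order;
    each rate applies to income above its threshold. Income at or
    below the first threshold faces a zero marginal rate here.
    """
    rate = 0.0
    for threshold, r in brackets:
        if income > threshold:
            rate = r
    return rate

# Hypothetical schedule: 16% above $10,000, 28% above $30,000
brackets = [(10_000, 0.16), (30_000, 0.28)]

income = 28_000
print(marginal_rate(income, brackets))           # 0.16

# A decade of 7% inflation; wages keep pace, so real income is unchanged
income_later = income * 1.07**10                 # about $55,000 nominal

# Unindexed brackets: the taxpayer creeps into the higher bracket
print(marginal_rate(income_later, brackets))     # 0.28

# Indexed brackets: the marginal rate stays where it started
indexed = [(t * 1.07**10, r) for t, r in brackets]
print(marginal_rate(income_later, indexed))      # 0.16
```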

How much revenue was raised by these high marginal tax rates? Although the highest marginal tax rates applied to a tiny share of taxpayers, marginal tax rates above 39.7% collected more than 10% of income tax revenue back in the late 1950s. It's interesting to note that the share of income tax revenue collected by the top brackets in 2009 (that is, the 29-35% category) is larger than the share collected by all marginal tax brackets above 29% back in the 1960s.

A few quick thoughts:

1) Perhaps it goes without saying, but there's no reason to think that 1958 was the high point of social wisdom when it comes to tax policy. In addition, the economy has evolved considerably since 1958: talent and tasks are probably more mobile, and methods of categorizing income in ways that affect tax burdens have become more sophisticated. Also, the distribution of income has become much more unequal in recent decades, and so arguments over the appropriate share of taxes to be paid by those in the top income groups have evolved as well.

2) Raising tax rates on those with the highest incomes would raise significant funds, but nowhere near enough to solve America's fiscal woes. Baneman and Nunns offer this rough illustrative estimate: "If taxable income in the top bracket in 2007 had been taxed at an average rate of 49 percent, income tax liabilities (before credits) would have been $78 billion (6.7 percent of total pre-credit liabilities) higher, taking into account likely taxpayer behavioral responses to the rate increase." The behavioral response they assume is that every 10% rise in tax rates causes taxable income to fall by 2.5%.
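That behavioral assumption can be folded into a back-of-the-envelope revenue estimate. The sketch below implements the adjustment as I read it; the $1 trillion base of top-bracket taxable income is a made-up round number, not the Baneman-Nunns figure, so the output will not match their $78 billion.

```python
def revenue_change(base_income, old_rate, new_rate, response=0.25):
    """Rough change in revenue from raising a top marginal rate.

    Applies a Baneman-Nunns-style behavioral response: each 10 percent
    rise in the tax rate shrinks taxable income by 2.5 percent, i.e.
    income falls by `response` times the percentage rise in the rate.
    """
    pct_rate_rise = (new_rate - old_rate) / old_rate
    adjusted_income = base_income * (1 - response * pct_rate_rise)
    return adjusted_income * new_rate - base_income * old_rate

# Hypothetical: $1 trillion of top-bracket income, rate raised 35% -> 49%
print(f"${revenue_change(1e12, 0.35, 0.49) / 1e9:.0f} billion")  # $91 billion
```

With no behavioral response (response=0), the same rate increase would yield $140 billion on these made-up numbers, so the assumed response erases roughly a third of the static gain.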
 

3) If one wants to use the 1958 example as a precedent, it would be fair to point out that the prevalence of the lowest marginal income tax rates is itself a fairly new development, dating only to the mid-1980s. One could also use the example of 1958 to argue that many more taxpayers across the broad range of lower and middle incomes should face marginal federal tax rates in the range of 16-28%.

4) If the goal is to raise more tax revenue from those with high incomes, higher tax rates are not the only method of doing so. For example, one could limit various tax deductions that apply with greatest force to those high up in the income brackets. One could also look at ways in which the tax code lets those with high incomes pay lower rates, like the preferential rates on capital gains and the tax exemption for interest on state and local bonds.

What Should Banks Be Allowed To Do?

Charles Morris offers a nice overview of the course of bank regulation over the last century or so in "What Should Banks Be Allowed To Do?" It appears in the Fourth Quarter 2011 issue of the Economic Review published by the Federal Reserve Bank of Kansas City.

For me, the article serves two useful purposes. First, it's a reminder that bank deregulation in the 1980s and 1990s wasn't some clever ploy by financial-sector lobbyists, but was absolutely necessary given the evolution of the industry at that time. To be sure, the deregulation could have been carried out in different ways, posing different risks, but some kind of deregulation was unavoidable. Second, it makes the case for limiting what banks are allowed to do. I'm completely persuaded that the proposed reform would make the banking sector safer, with less risk of needing a bailout, but I'm less sure that the reform would make the financial sector as a whole safer. Let me say a bit more about each of these, drawing heavily on Morris's exposition.

The banking sector as it emerged from the 1930s had five characteristics salient for the discussion here: 1) it was overseen by bank regulators for safety; 2) it had access to a public safety net of emergency loans from the Fed and deposit insurance; 3) it was forbidden to go into other financial areas like investment banking, securities dealing, or insurance; 4) it faced legal limits on the interest it could pay on deposits; and 5) it faced geographic restrictions on branching across state lines and within states. In short, it was an industry that was shielded from competition, limited in what it could do, and heavily regulated.

In the 1970s, the wheels began to come off this wagon. Those who wished to save money began to seek out investment options like mutual funds, including money market mutual funds, and insurance companies. Banks were limited in the interest rate they could pay, and inflation was high. Banks began to hemorrhage deposits. Those who wished to borrow money found other options, too. They borrowed through commercial paper, through high-yield bonds, and through securitized markets including mortgage-backed securities and asset-backed securities. Separate finance companies made car loans and loans for retail purchases. Other companies financed trade receivables.

In short, both savers and borrowers were migrating out of the banking industry, and the process of financial intermediation between them was increasingly happening in what came to be called the "shadow banking" sector. If banks had not been deregulated and allowed to compete in this new financial sector, at least in some ways, the banks themselves would have shrunk dramatically and a very large part of U.S. saving and borrowing would have passed completely outside the purview of bank regulators.

As banks were allowed to compete across the financial sector more broadly, starting in the 1980s, the industry began to consolidate. This made some sense: when banks were allowed to open branches within states and across state lines, for example, not as many small banks were needed. But the top banks not only became very large; an ever-growing share of their assets also lay outside the traditional business of banking. Here's how Morris summarizes how the industry evolved (footnote omitted):

"Technological improvements, interstate banking, and the GLB [Gramm-Leach-Bliley] Act resulted in fewer banks and a much more concentrated banking industry, with the largest BHCs [bank holding companies] ultimately engaging in more varied and nontraditional activities. For example, the number of banks fell from about 12,500 in 1990 to about 6,400 in 2011. The share of industry assets held by the 10 largest BHCs rose from about 25 percent in 1990 to about 45 percent in 1997 (just before the GLB Act) and to almost 70 percent in 2011. The share of loans and deposits of the top 10 BHCs also rose sharply (Table 1). In addition, only four of the 10 largest BHCs that existed before the passage of the GLB Act remain today (Citigroup, JPMorgan Chase, Bank of America, and Wells Fargo), with those four BHCs having acquired five of the other top 10 BHCs.

"Table 1 also shows how the activities of the 10 largest BHCs have changed in the past 14 years. In 1997, the share of banking assets relative to total assets at these companies was 87 percent, with only one company having a share less than 80 percent. Today, the share of banking assets is 58 percent, with only two BHCs having a share greater than 80 percent."

Morris's diagnosis and proposed solution are straightforward. Bank holding companies have gotten into too many risky financial activities, and so should be restrained. But Morris is also clearly and sensibly aware that just trying to turn back the clock to 1930s-style regulated banking isn't possible. That toothpaste is out of the tube. He suggests that banks be allowed to pursue three areas of business:

  • Commercial banking—deposit taking and lending to individuals and businesses.
  • Investment banking—underwriting securities (stocks and bonds) and providing advisory services.
  • Asset and wealth management—managing assets for individuals and institutions.

Conversely, Morris argues that banks should be barred from three other areas:

  • Dealing and market making—intermediating securities, money market instruments, and over-the-counter derivatives transactions for customers.
  • Brokerage services—brokering for retail and institutional investors, including hedge funds (prime brokerage).
  • Proprietary trading—trading for an organization’s own account and owning hedge and private equity funds.

The key distinction here for Morris is that underwriting securities and providing advice are largely fee-based services. They don't involve putting much of a bank's capital at risk. Dealing, market-making, hedge funds, and private equity all involve taking risks with capital that are harder both for the institution to understand and for regulators to monitor.

Morris's proposal is certainly sensible enough, but it does leave me with a couple of questions. First, if banks were holding lots of mortgage loans, as they clearly could be under Morris's proposal, then they would have been vulnerable to a meltdown of housing prices like the one that occurred. Thus, it's not clear to me that anything in this proposal would have limited the very aggressive home lending of those years or the price meltdown afterward. Indeed, the sort of limited banks Morris advocates might in some ways have been even more exposed to losses in the housing market.

Second, Morris's proposal, like all "narrow bank" proposals, would clearly make the banking sector safer. But one of the disturbing facts about the financial troubles of 2008 was that it wasn't just commercial banks that were deemed to be systemically important to the U.S. economy: it was also investment banks like Bear Stearns, money market funds, insurance companies like AIG, brokers that sell Treasury bonds, and others. Focusing on banking is all very well, but the shadow banking sector and the potential risks that it poses aren't going away.

The Mundane Cost Obstacle to Nuclear Power

I've long believed that the main problems with expanding nuclear power related to health and safety concerns: for example, the small chance of a plant malfunctioning, along with issues related to waste disposal and possible links between nuclear power technology and weapons technology. But Lucas Davis argues persuasively in "Prospects for Nuclear Power" in the Winter 2012 issue of my own Journal of Economic Perspectives, which is freely available on-line courtesy of the American Economic Association, that I've been assuming too much. Here's Davis (citations omitted):

"Nuclear power has long been controversial because of concerns about nuclear accidents, storage of spent fuel, and about how the spread of nuclear power might raise risks of the proliferation of nuclear weapons. These concerns are real and important. However, emphasizing these concerns implicitly suggests that unless these issues are taken into account, nuclear power would otherwise be cost effective compared to other forms of electricity generation. This implication is unwarranted. Throughout the history of nuclear power, a key challenge has been the high cost of construction for nuclear plants. Construction costs are high enough that it becomes difficult to make an economic argument for nuclear even before incorporating these external factors. This is particularly true in countries like the United States where recent technological advances have dramatically increased the availability of natural gas. The chairman of one of the largest U.S. nuclear companies recently said that his company would not break ground on a new nuclear plant until the price of natural gas was more than double today's level and carbon emissions cost $25 per ton. This comment summarizes the current economics of nuclear power pretty well. Yes, there is a certain confluence of factors that could make nuclear power a viable economic option. Otherwise, a nuclear power renaissance seems unlikely."

The argument from Davis is complemented by some other recent discussions of nuclear power. The Federation of American Scientists has a report out on The Future of Nuclear Power in the United States, edited by Charles D. Ferguson and Frank A. Settle. The most recent issue of the Economist magazine (March 10) has a 14-page cover story on “Nuclear Power: The Dream that Failed.” Finally, a Report to the Secretary of Energy by the Blue Ribbon Commission on America’s Nuclear Future was released in late January.

Here is some basic background from Davis in his JEP article. The first figure shows nuclear power plants under construction around the world. Notice that the plants under construction in the United States and western Europe dropped off to near-zero in the 1990s. The recent spike in plants under construction is driven by the “other” category, which is largely China, but it remains to be seen how many of these plants will end up being completed.

The next figure shows the rising costs of constructing nuclear power plants in the United States. The costs are per kilowatt of capacity, and so adjusted for size. The costs are also adjusted for inflation.

Finally, this figure shows the slowdown in construction times–for example, plants started in the 1960s were completed in 8.6 years on average, while those started in the 1970s took 14.1 years. Moreover, there was growing uncertainty as to whether a nuclear power plant would be completed at all: 89% of plants announced in the 1960s were completed, compared with only 25% of those announced in the 1970s.

Of course, it’s not possible to separate cleanly the safety concerns over nuclear power from these cost issues: additional safety precautions–and the accompanying paperwork–are part of what drives up costs. But perhaps the more fundamental story here is that technological progress in nuclear power hasn’t been fast enough to assuage concerns about safety and to drive down costs. Stephen Maloney digs into this in some detail in Chapter 2 of the FAS report, “A Critical Examination of Nuclear Power’s Costs.”

“Since the nuclear industry’s inception more than 50 years ago, its forecasts for costs have been consistently unreliable. The “first generation” plants, comprising both prototype reactors and the standard designs of the 1950s-1960s, failed to live up to promised economics. This trend continued with the construction of Generation II plants completed in the 1970s, which make up the present nuclear fleet.

“First, the total costs were far higher than for coal-generated electricity. In particular, the capital costs of nuclear plants built through 1980 were, on average, 50 percent higher than comparably-sized coal-fired plants, adjusting for inflation and including backfits to meet Clean Air Act standards. Second, there were extraordinary cost escalations over the original low cost promises. Nuclear plant construction costs escalated approximately 24 percent per calendar year compared to 6 percent annual escalation for coal plants. Third, the economies of scale expected were not achieved in the Generation II designs. The scale-up of nuclear plants brought less than half the economic efficiencies projected.

“In addition, over 120 nuclear units, approximately half the reactors ordered, were never started or cancelled. The total write-offs were more than $15 billion in nominal dollars. … In the late 1970s, the Atomic Industrial Forum (AIF), predecessor to the Nuclear Energy Institute, identified the main drivers of unmet expectations as growing understanding of nuclear accident hazards, failure of regulatory standardization policies, and increased documentation standards to ensure as-built plants actually met safety standards. The combined effects doubled the quantities of materials, equipment, and labor needed, and tripled the magnitude of the engineering effort for building a nuclear power plant.”
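To see how steep the quoted escalation rates are, a quick back-of-the-envelope calculation helps; the ten-year horizon below is my own illustrative assumption, not a figure from the FAS report:

```python
# Compound the cost-escalation rates quoted from the FAS report:
# roughly 24% per calendar year for nuclear plants versus 6% per year
# for coal plants. The ten-year horizon is an illustrative assumption.
years = 10
nuclear_multiple = (1 + 0.24) ** years
coal_multiple = (1 + 0.06) ** years
print(f"nuclear construction costs multiply by {nuclear_multiple:.1f}x over {years} years")
print(f"coal construction costs multiply by {coal_multiple:.1f}x over {years} years")
```

At those rates, nuclear construction costs multiply roughly 8.6-fold over a decade, against about 1.8-fold for coal–the gap between the two technologies widens dramatically with each passing year.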

Of course, it’s possible to sketch business and technological scenarios under which nuclear power plants of the future use simpler, safer designs, which combine with economies of scale in production to drive down costs. But such predictions haven’t held true over the history of nuclear power, and they don’t seem to be holding true recently, either. Here’s one of many examples, from Maloney:

“In June 2006, a consortium of companies announced plans to build two more reactors at the South Texas Project site for an estimated cost of $5.2 billion. NRG, the lead company, made history by becoming the first company to file an application with the NRC. CPS Energy, a municipal utility, was one of its partners. In October 2007, CPS Energy’s board approved $206 million for preliminary design and engineering. In June 2009, NRG revised the estimate to $10 billion for the two reactors, including finance charges. A few weeks later, this estimate rose to $13 billion, including finance charges. Later that year, the estimate reached $18.2 billion …” Cost overruns of similar magnitude aren’t just a U.S. phenomenon; for example, they also have occurred at recent nuclear power projects in France and in Finland.

To be sure, there are promising new nuclear technologies out there. One hot topic is small modular reactors, discussed both in the Economist article and by Daniel Ingersoll in Chapter 10 of the FAS report. But at some point, a degree of skepticism seems appropriate. The Economist has a wonderful quotation from Admiral Hyman Rickover, who drove the process that created America’s nuclear submarines, and commented back in the 1950s:

“An academic reactor or reactor plant almost always has the following basic characteristics: (1) It is simple. (2) It is small. (3) It is cheap. (4) It is light. (5) It can be built very quickly. (6) It is very flexible in purpose. (7) Very little development will be required. It will use off-the-shelf components. (8) The reactor is in the study phase. It is not being built now. On the other hand a practical reactor can be distinguished by the following characteristics: (1) It is being built now. (2) It is behind schedule. (3) It requires an immense amount of development on apparently trivial items. (4) It is very expensive. (5) It takes a long time to build because of its engineering development problems. (6) It is large. (7) It is heavy. (8) It is complicated.”

Finally, arguments over appropriate disposal of nuclear waste will surely continue. For an overview of these issues, a useful starting point is the Report to the Secretary of Energy by the Blue Ribbon Commission on America’s Nuclear Future that was released in late January. Personally, I didn’t find the Commission report to be especially encouraging about resolving these issues. For example, the first recommendation is to start a process of encouraging communities to volunteer to become nuclear waste disposal sites, a process the Commission thinks might take 15-20 years. Having just watched the argument over a possible repository at Yucca Mountain in Nevada run for 25 years, before the decision of the Obama administration to halt that process, this time frame seems optimistic. Of course, there are alternatives: consolidated storage facilities, and technologies for processing nuclear waste. But the alternatives aren’t cost-free, either.

Nuclear power isn’t going away. Plants that have been working well still have several decades to run, and the marginal costs of running them are now low. Additional nuclear power plants will be built in countries where the government makes it a priority, or perhaps in some settings where other sources of power are extremely high cost. But as the U.S. enters what seems to be a time of cheap and plentiful natural gas, building a substantial number of new nuclear power plants in this country seems highly unlikely.

Small Firms and Job Creation

Most job creation comes from small firms–and so does most job destruction. The Congressional Budget Office has a useful summary of the evidence in a short March 2012 report, “Small Firms, Employment, and Federal Policy.”
Here are some basic facts about firm size and employment. In 2011, firms with more than 1,000 employees were 0.2% of all firms, but employed 38.6% of all private-sector employees. Conversely, firms with fewer than 20 employees make up 87.5% of all firms, but have only 18.4% of total employees. Here’s the detailed table.

The share of private sector employment by firm size has barely budged in recent decades. The share of employment in very small enterprises of 1-19 employees is down a few percentage points, and the share of employment at the largest employers of over 500 employees is up just a touch, but the overall stability of these patterns is remarkable.

Studies of the effect of small firms on employment often seem to reach mixed results. One reason becomes clear if you think about this figure for a moment. If small firms grow enough in size, they aren’t small any more. So does that mean that small firms stop contributing to employment growth if they get larger? Conversely, imagine a large firm in a death spiral, bleeding employees until it becomes a smaller firm. Does this now become an example of a small firm that is losing employees? If people are laid off at big firms, and then try to start small companies, is this evidence of the dynamism of small firms–or of a sick economy?

The emphasis in recent studies of small firms and employment is on adjusting for the age of the firm. Here’s the CBO explanation (footnotes omitted):

“It is widely believed that small firms promote job growth. In fact, small firms both create and eliminate far more jobs than large firms do. On balance, they account for a disproportionate share of net job growth—however, that greater net growth is driven primarily by the creation of new small firms, frequently referred to as start-ups, rather than by the expansion of mature small firms. …

“Recent research, however, has found that it is young small firms, especially start-ups, that grow faster—and consequently create jobs at a higher rate—than either large firms or established small firms do. One study found that the smallest firms, those with between one and four employees, grew 4.7 percent faster than the largest firms, those with more than 10,000 employees. However, when the comparison is made between firms of the same age, small firms grow more slowly than large firms do.

“Almost all firms start small. Many fail and, of those that do survive, most have no desire to expand beyond “small firm” status. Only a few grow substantially and become large firms. Thus, the faster average growth of young small firms is driven by the ambitions and successes of a fairly narrow set of start-up employers.”

In other words, the issue isn’t about small size, but rather about whether the firm is in a line of business that offers possibilities for dramatic expansion. Many small firms, like certain retail stores or various personal and professional services, don’t have much potential to expand substantially, and aren’t really seeking to do so. I remember a venture capitalist, who would be expected to be sympathetic to good news about small firms, once telling me: “The thing you need to remember about small firms is that a lot of what they do is sell to bigger firms.” I’m not sure that statement is true about small firms in general, but I think it was true in terms of what he was looking for: that is, firms that were currently small but had potential to link to a much wider market.

International Poverty: Progress and a Puzzle

Shaohua Chen and Martin Ravallion at the World Bank have prepared a “briefing note” with good news: global poverty rates are dropping. They write: “That means that 1.29 billion people in 2008 lived below $1.25 a day, as compared to 1.94 billion in 1981. 2.47 billion people in 2008 consumed less than $2 a day, as compared to 2.59 billion in 1981.”

The idea of measuring poverty as $1.25 per day or $2 per day may shock some Americans. By contrast, the poverty threshold in the United States in 2011 for a three-person family, single parent with two children, was $18,123, which works out to about $16.50 per person per day. But as the report explains: “$1.25 is the average of the national poverty lines found in the poorest 10-20 countries. Using this line, poverty in the world as a whole is being judged by what “poverty” means in the world’s poorest countries. Naturally, better off countries tend to have higher poverty lines than this frugal standard. $2 a day is the median poverty line for all developing countries.”
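The roughly $16.50 figure follows from simple division of the annual threshold; here is the arithmetic:

```python
# Convert the 2011 U.S. poverty threshold for a three-person family
# ($18,123 per year) into a per-person, per-day figure.
annual_threshold = 18_123   # dollars per year, family of three
persons = 3
per_person_per_day = annual_threshold / persons / 365
print(f"${per_person_per_day:.2f} per person per day")  # roughly $16.50
```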

The reduction in poverty rates reaches across all regions of the developing world. Not surprisingly, much of the most rapid reduction of poverty has come from China. As the figure shows, it used to be that including China with the rest of the developing world raised the overall poverty rate. But now, including China with the rest of the developing world reduces the overall poverty rate. Nonetheless, 173 million people in China remain below the $1.25 poverty line. As the World Bank describes it:

“Looking back to the early 1980s, East Asia was the region with the highest incidence of poverty in the world, with 77% living below $1.25 a day in 1981. By 2008 this had fallen to 14%. In China alone, 662 million fewer people were living in poverty by the $1.25 standard, though progress in China has been uneven over time. In 2008, 13% (173 million people) of China’s population still lived below $1.25 a day. In the developing world outside China, the $1.25 poverty rate has fallen from 41% to 25% over 1981-2008, though not enough to bring down the total number of poor, which was around 1.1 billion in both 1981 and 2008, although rising in the 1980s and ‘90s, then falling since 1999 …”

The reduction in poverty rates is clearly good news, but the pattern of reduction across countries poses a puzzle that Martin Ravallion raises in “Why Don’t We See Poverty Convergence?” in the February 2012 issue of the American Economic Review. The article isn’t freely available on-line, although many academics will have access through their libraries. He sets up the puzzle this way:

“Two prominent stylized facts about economic development are that there is an advantage of backwardness, such that in a comparison of two otherwise similar countries the one with the lower initial mean income will tend to see the higher rate of economic growth, and that there is an advantage of growth, whereby a higher mean income tends to come with a lower incidence of absolute poverty. Past empirical support for both stylized facts has almost invariably assumed that the dynamic processes for growth and poverty reduction do not depend directly on the initial level of poverty. Under that assumption, the two stylized facts imply that we should see poverty convergence: countries starting out with a high incidence of absolute poverty should enjoy a higher subsequent growth rate in mean consumption and (hence) a higher proportionate rate of poverty reduction. That poses a puzzle. The data on poverty measures over time for 90 developing countries assembled for this article reveal little or no sign of poverty convergence.”

Here is Ravallion’s figure to illustrate the point. The horizontal axis has poverty rates for 90 countries, mostly from the 1980s and 1990s as data became available. The vertical axis shows the decline in poverty rates from the start of the data up to 2005. Notice that the best-fit line doesn’t show that countries which started from higher levels of poverty have larger reductions in poverty: if anything, the relationship goes a bit the other way.
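The test behind that best-fit line amounts to regressing the subsequent decline in a country’s poverty rate on its initial poverty rate; convergence would show up as a clearly positive slope. Here is a minimal sketch of such a regression with made-up numbers (the country values below are hypothetical illustrations, not Ravallion’s 90-country dataset):

```python
# Illustrative only: synthetic data, not Ravallion's actual dataset.
# Poverty convergence would appear as a clearly positive OLS slope when
# regressing the subsequent decline in the poverty rate on the initial rate.

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical initial $1.25/day poverty rates (percent) ...
initial_poverty = [5, 15, 30, 45, 60, 75]
# ... and hypothetical subsequent declines in those rates (percentage points).
decline = [3, 6, 5, 7, 4, 6]

slope = ols_slope(initial_poverty, decline)
print(f"slope: {slope:.3f}")  # a slope near zero: no sign of convergence
```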

Ravallion puts it this way (citation omitted): “The overall poverty rate of the developing world has been falling since at least 1980, but the proportionate rate of decline has been no higher in its poorest countries.” This finding suggests that while poverty rates are diminishing over time, there is no particular reason, based on past patterns, to expect that poverty rates will fall most quickly where poverty is greatest. Ravallion offers the clear implication: there do often seem to be “advantages of backwardness,” by which relatively poorer countries can draw on global knowledge and markets to reach a faster growth rate, but there is apparently also a drag from high poverty rates, in which the existence of a high poverty rate makes it harder for a country to reduce its poverty rate further. These factors tend to offset each other, and as a result, the poorer countries don’t in fact reduce their poverty rates faster.

Why might the existence of high poverty rates make it harder to grow? Perhaps high poverty rates reflect the lack of a middle class, which in turn makes it harder for an economy to grow. Perhaps high poverty rates reflect a poorly-educated workforce, which means that investment in the country is unprofitable, which slows growth. Perhaps high poverty rates lead to poor health, which reduces the prospects for growth. Ravallion investigates whether factors such as schooling, life expectancy, and the price of investment goods might provide a link from high initial poverty to the lack of reductions in poverty, but doesn’t find statistical connections. Understanding why the poorest countries have no greater success in reducing their poverty rates remains a good research topic.