Public Higher Education Gets Less State and Local Support

The State Higher Education Executive Officers association has published its report on "State Higher Education Finance FY 2011." The basic story is rising enrollments in public institutions of higher education, but falling per-student support.

The blue bars in the figure show educational appropriations for public higher education per full-time equivalent student, adjusted for inflation. Support starts relatively high at $8,156 per student in 1987, sags in the early 1990s to $7,054 in 1993, rises again in the late 1990s and early 2000s to as high as $8,316 in 2001, drops to $6,875 in 2005, rises to $7,488 in 2008, and has now dropped to $6,290 in 2011.

Meanwhile, tuition revenue per full-time equivalent student has been gradually rising, from $2,422 in 1986 to $4,774 in 2011.

And over these 25 years, the number of full-time equivalent students in public higher education has risen from about 7 million back in 1986 to almost 12 million in 2011.

Put these together, and here's tuition as a share of total public higher education revenue, rising from 23.2% back in 1986 to 43.3% at present.
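As a rough check on that share, the per-student figures above can be combined directly. This is a back-of-the-envelope sketch, not the report's own calculation (which works from total revenues), so it only approximates the reported 43.3%.

```python
# Back-of-the-envelope check (not the report's own calculation): approximate
# tuition's share of educational revenue per full-time equivalent (FTE) student
# as tuition per FTE divided by (appropriations + tuition) per FTE.
tuition_2011 = 4774          # net tuition revenue per FTE, 2011
appropriations_2011 = 6290   # educational appropriations per FTE, 2011

share_2011 = tuition_2011 / (tuition_2011 + appropriations_2011)
print(f"Approximate tuition share, 2011: {share_2011:.1%}")  # roughly 43%
```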

This pattern may be here to stay. As the report states: "In the past decade these two recessions and the larger macro-economic challenges facing the United States have created what some are calling the “new normal” for state funding for public higher education and other public services. In the “new normal” retirement and health care costs simultaneously drive up the cost of higher education, and compete with education for limited public resources. The “new normal” no longer expects to see a recovery of state support for higher education such as occurred repeatedly in the last half of the 20th century. The “new normal” expects students and their families to continue to make increasingly greater financial sacrifices in order to complete a postsecondary education. The “new normal” expects schools and colleges to find ways of increasing productivity and absorb ever-larger budget cuts, while increasing degree production without, we hope, compromising quality."

I would add only a couple of thoughts:

1) Almost everyone believes, or claims to believe, that the economic future of the United States is intertwined with building greater human capital. But that isn't reflected in our spending choices. I'd be the first to say that spending isn't everything–but it's something! Here's a post from July 19, 2011, on "How the U.S. Has Come Back to the Pack in Higher Education." The U.S. used to be the world leader in the share of its population going on to higher education, but no longer.

2) The main budgetary mechanism for encouraging additional higher education is student loans. This avoids adding to direct spending for higher education, but it places a greater share of the risk of not completing a degree, or of the degree not leading to a well-paid job, on the student. Also, public higher education isn't expanding fast enough to absorb those who want to try college, so many of those who receive these loans are headed to the for-profit educational system. Here's a February 23, 2012, post on "For-Profit Higher Education."

Minnesota: Paying More Federal Taxes, Receiving Less Federal Spending

Here's an op-ed column of mine that appeared in the (Minneapolis) Star Tribune on Sunday, March 18. Here it is in full:

"Minnesota taxes: More blessed to give?"
"Minnesota pays its fair share — and then some — in federal taxes, while federal spending here is decidedly below average. Why the imbalance?"
Article by: TIMOTHY TAYLOR

The great state of Minnesota rides in a lifeboat with 49 other states, tossed by the wind and waves of global politics and the global economy.
States vary in many ways — population, size of the state economy, age distribution, industry mix, geography. No one should expect that they will all make the same contribution to keeping the lifeboat afloat.
But still, it's eyebrow-raising to discover that Minnesota is one of the states consistently putting a lot more into the federal budget than it gets back. That's the message when you compare federal taxes paid by residents and businesses within each state with federal spending in each state.
The most recent data seems to be for 2009. The U.S. Census Bureau, in its Consolidated Federal Funds report, breaks down domestic federal spending to the state level.
It includes government payments to individuals, procurement, grants, and the salaries and wages of federal employees. Minnesota received $45.7 billion in federal spending in 2009.
On the tax side, the Internal Revenue Service Data Book for 2009 reports that gross federal taxes collected from Minnesota in 2009 were $67.6 billion.
This includes all federal taxes: individual and corporate income taxes; payroll taxes for Social Security and Medicare; and estate, gift, and excise taxes. Minnesota has an above-average per capita income, and so it pays more than average in federal taxes.
Do the math: In 2009, Minnesota received about 68 cents in federal spending for every $1 paid in federal taxes. Putting the tax and spending numbers in per-capita terms is especially striking.
For the United States as a whole, federal spending was $10,395 per person in 2009.
For Minnesota, federal spending was $8,676 per person — about 16 percent below the average.
For the United States as a whole, federal taxes paid were $7,690 per person in 2009.
In Minnesota, federal taxes paid were $12,763 per person — about 66 percent above average. (Of course, the U.S. government had an enormous budget deficit in 2009, so the average spending per person far exceeded the average taxes per person.)
This general pattern has held for a number of years. Analysts at the Tax Foundation calculated that if you rank states on the basis of federal taxes paid per capita, Minnesota ranked from 11th to 15th over the years from 2001 to 2005. But when states are ranked on the basis of federal spending per capita, Minnesota ranked from 45th to 48th.
In the Tax Foundation's combined rankings — that is, federal spending received per dollar of federal taxes paid — Minnesota ranked from 45th to 48th over the 2001-2005 time period.
Writers at the Economist magazine performed similar research last summer. By their calculations, over the 20-year period from 1990 to 2009, this gap between federal taxes paid by Minnesotans and federal spending received by Minnesotans added up to the equivalent of two years' worth of gross state product for Minnesota.
By this measure, Minnesota ranked 49th among the states over this time in federal spending received relative to federal taxes paid, ahead of only Delaware.
What leads to a situation where a state is consistently sending more to the federal government than it is receiving back? Or vice versa?
Along with Minnesota and Delaware, other states with a habit of paying more in federal taxes than they receive in federal spending include New Jersey, Illinois, Connecticut, New York and New Hampshire.
States typically at the other end of the range, with a pattern of more federal spending received than taxes paid, include New Mexico, Mississippi, West Virginia, Alabama, North Dakota and Alaska.
No one explanation applies equally to all of these states, but following are four main factors.
Income. States with a pattern of paying more to the federal government than they receive are all above-average in income. Also, Delaware, New York and Illinois are all places with large numbers of major corporate headquarters, thus boosting the corporate taxes from those states.
Poverty. States with fewer people below the poverty line have less need for federal support through antipoverty programs like food stamps or Medicaid. Minnesota ranks fourth among states for lowest poverty rate. New Hampshire, Delaware, New Jersey and Connecticut all rank in the top 10 for lowest poverty rate. On the other side, Mississippi, New Mexico, Alabama and West Virginia all rank among the top eight in states with the highest poverty levels.
Elderly. A large elderly population means a group whose members typically have lower incomes and are receiving Social Security and Medicare benefits. Minnesota ranks 40th among states in share of population older than 65. West Virginia and North Dakota are in the top five.
Defense. Spending on defense is quite unevenly distributed across states. Some Minnesota companies have defense contracts, but the state does not have large military bases, national research laboratories or the truly enormous defense contractors. Defense spending in 2009 averaged $1,753 per person for the United States as a whole; in Minnesota, federal defense spending averaged $786 per person.
Of course, it would be silly to argue that every state should receive just as much in federal spending as it contributed in tax revenue. The United States is more than the sum of 50 states.
Even if Minnesota doesn't have large military bases and ports, people here benefit from national defense spending.
If someone spends their working life in Minnesota, then retires and draws Social Security in New Mexico, it doesn't seem unfair in the least. States with higher incomes should pay more in taxes, and states with higher poverty should receive more federal support.
Nonetheless, Minnesota's status as a state that consistently pays more to the federal government than it receives should make us all ponder.
Higher federal spending on national defense, antipoverty programs, transportation, health insurance and more will typically lead to a situation in which Minnesota taxpayers will be paying more to support those programs than Minnesotans are going to receive in direct spending.
Federal tax cuts, by the same logic, will tend to disproportionately benefit Minnesota; federal tax increases will disproportionately burden Minnesota.
In a number of cases, a narrowly Minnesota perspective suggests that it would be more in our state's collective interest to pursue state-level taxing and spending policies, rather than supporting national policies in which Minnesota's federal tax dollars will exceed federal spending within Minnesota.
As an American, I don't advocate a purely Minnesota-centric perspective on federal spending and taxes. But as a Minnesotan, I find myself paraphrasing the sentiments of Tevye from the old musical "Fiddler on the Roof":
"Of course, there's no shame in being consistently near the bottom of state rankings comparing federal spending received to federal taxes paid.
"But it's no great honor, either."
———-
Timothy Taylor is managing editor of the Journal of Economic Perspectives, based at Macalester College in St. Paul. He blogs at conversableeconomist.blogspot.com.
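A quick sanity check on the arithmetic in the column, using only the figures quoted above; this is just an illustrative recomputation, not a separate data source.

```python
# Sketch: verify the "68 cents per dollar" figure and the per-capita gaps
# using only the numbers quoted in the column above.
mn_spending = 45.7e9   # federal spending received in Minnesota, 2009
mn_taxes = 67.6e9      # gross federal taxes collected from Minnesota, 2009
print(f"Spending received per tax dollar: {mn_spending / mn_taxes:.2f}")  # about 0.68

us_spending_pc, mn_spending_pc = 10_395, 8_676
us_taxes_pc, mn_taxes_pc = 7_690, 12_763
print(f"MN spending vs. U.S. average: {mn_spending_pc / us_spending_pc - 1:+.1%}")  # about -16%
print(f"MN taxes vs. U.S. average:    {mn_taxes_pc / us_taxes_pc - 1:+.1%}")        # about +66%
```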

The Food Stamp Explosion

For starters, Food Stamps have a new name. The 2008 farm bill changed the name to the Supplemental Nutrition Assistance Program, or SNAP. But whatever the name, enrollment rose from 17.3 million in 2001 to 46.2 million in October 2011. In the March 2012 issue of Amber Waves, published by the U.S. Department of Agriculture, Margaret Andrews and David Smallwood ask: "What's Behind the Rise in SNAP Participation?"


What's perhaps less expected in the graph is that Food Stamp enrollment was rising steadily from 2001 up through 2006, even though unemployment rates were low and falling during much of that time. The authors trace much of this change to changes in federal rules making it easier for people to apply, and easier for states to certify to the federal government that the benefits are being targeted. In addition, SNAP benefit levels were increased both in the 2008 farm bill and in the 2009 "stimulus" legislation, making it more attractive to apply. Here's a graph showing the maximum SNAP benefit for a household of four and the average benefit.

When you look at the numbers, Food Stamps play what may be a surprisingly large role in America's social safety net for the poor. Total spending on Food Stamps in 2011 was about $78 billion. According to the Center on Budget and Policy Priorities, "Roughly 93 percent of SNAP benefits go to households with incomes below the poverty line, and 55 percent go to households with incomes below half of the poverty line …"

For comparison, federal expenditures through the Earned Income Tax Credit were about $56 billion in 2011. As another comparison, total spending on Temporary Assistance for Needy Families (TANF), which is what most people mean by "welfare," was about $33 billion in combined federal and state spending in 2010. In many states, SNAP far outstrips TANF in the level of support it provides for low-income families.

Top Marginal Tax Rates: 1958 vs. 2009

Top marginal income tax rates used to be much higher back in the 1950s and 1960s. How much revenue did those higher tax rates actually collect? Daniel Baneman and Jim Nunns address that question in a short report, "Income Tax Paid at Each Tax Rate, 1958-2009," published by the Tax Policy Center last October.

For starters, take a look at the statutory tax brackets for 1958 and 2009. The tax brackets are adjusted for inflation, so the horizontal axis is in constant 2009 dollars. The top statutory tax rate in 2009 was 35%; back in 1958, it was about 90%. Marginal income tax rates are lower across the income distribution in 2009. In addition, the top marginal tax rate begins much lower in the income distribution in 2009 than it did in 1958.

How many households actually paid these rates? Here's a figure showing the share of taxpayers facing different marginal tax rates. At the bottom, across this time period, roughly 20% of all tax returns owed no tax, and so faced a marginal tax rate of zero percent. Back in 1958, the most common marginal tax brackets faced by taxpayers were in the 16-28% category; since the mid-1980s, the most common marginal tax rate faced by taxpayers has been in the 1-16% category. Clearly, a very small proportion of taxpayers actually faced the very highest marginal tax rates back in 1958. It's interesting to note how the share of taxpayers facing higher marginal rates expanded substantially in the 1970s, probably due in large part to "bracket creep"–that is, tax brackets at that time didn't increase with the rate of inflation, so as wages were driven up by inflation, taxpayers were pushed into higher tax brackets even though their real incomes had not increased.
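A stylized illustration of bracket creep may help here. The bracket thresholds and rates below are made up for the example, not the actual 1970s tax tables: if bracket boundaries are fixed in nominal dollars while wages keep pace with inflation, real income is unchanged but the marginal rate climbs.

```python
# Toy example of bracket creep: hypothetical brackets fixed in nominal terms.
# Thresholds and rates are illustrative, not the actual 1970s tax schedule.
brackets = [(0, 0.00), (10_000, 0.16), (20_000, 0.28), (40_000, 0.50)]

def marginal_rate(nominal_income):
    """Return the rate of the highest bracket the income reaches."""
    rate = 0.0
    for threshold, r in brackets:
        if nominal_income >= threshold:
            rate = r
    return rate

real_income = 18_000  # constant purchasing power throughout
for price_level in (1.00, 1.25, 1.55):
    nominal = real_income * price_level   # wages keep pace with inflation
    print(f"price level {price_level:.2f}: nominal ${nominal:,.0f}, "
          f"marginal rate {marginal_rate(nominal):.0%}")
# Real income never changes, but the marginal rate climbs from 16% to 28%.
```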

How much revenue was raised by these high marginal tax rates? Although the highest marginal tax rates applied to a tiny share of taxpayers, marginal tax rates above 39.7% collected more than 10% of income tax revenue back in the late 1950s. It's interesting to note that the share of income tax revenue collected from those in the top brackets for 2009–that is, the 29-35% category–is larger than the share collected by all marginal tax brackets above 29% back in the 1960s.

A few quick thoughts:

1) Perhaps it goes without saying, but there's no reason to think that 1958 was the high point of social wisdom when it comes to tax policy. In addition, the economy has evolved considerably since 1958: talent and tasks are probably more mobile, and methods of categorizing income in ways that affect tax burdens have become more sophisticated. Also, the distribution of income has become much more unequal in recent decades, and so arguments over the appropriate share of taxes to be paid by those in the top income groups have evolved as well.

2) Raising tax rates on those with the highest incomes would raise significant funds, but nowhere near enough to solve America's fiscal woes. Baneman and Nunns offer this rough illustrative estimate: "If taxable income in the top bracket in 2007 had been taxed at an average rate of 49 percent, income tax liabilities (before credits) would have been $78 billion (6.7 percent of total pre-credit liabilities) higher, taking into account likely taxpayer behavioral responses to the rate increase." The behavioral response they assume is that every 10% rise in tax rates causes taxable income to fall by 2.5%.
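Here is a stylized sketch of that kind of behavioral adjustment, not Baneman and Nunns' actual model: revenue from the top bracket is the rate times the taxable base, with the base shrinking by 2.5 percent for every 10 percent proportional rise in the rate. The dollar amounts are hypothetical.

```python
# Stylized revenue calculation with a behavioral response, not the Tax Policy
# Center's methodology. Assumption from the text: each 10% proportional rise
# in the tax rate reduces taxable income in that bracket by 2.5%.
def top_bracket_revenue(base, old_rate, new_rate, response=0.25):
    pct_rate_rise = (new_rate - old_rate) / old_rate
    adjusted_base = base * (1 - response * pct_rate_rise)
    return adjusted_base * new_rate

base = 1_000e9  # hypothetical taxable income in the top bracket (illustrative)
static_gain = base * 0.49 - base * 0.35                          # no behavioral response
dynamic_gain = top_bracket_revenue(base, 0.35, 0.49) - base * 0.35
print(f"Static revenue gain:   ${static_gain/1e9:.0f} billion")
print(f"With response assumed: ${dynamic_gain/1e9:.0f} billion")  # smaller than the static gain
```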
 

3) If one wants to use the 1958 example as a precedent, it would be fair to point out that the lowest-bracket income tax rates are a fairly recent development, dating from the mid-1980s. One could also use the example of 1958 to argue that many more taxpayers in the broad range of lower and middle incomes should face marginal federal tax rates in the range of 16-28%.

4) If the goal is to raise more tax revenue from those with high incomes, higher tax rates are not the only method of doing so. For example, one could limit various tax deductions that apply with greatest force to those high up in the income brackets. One could also look at ways in which the tax code lets those with high incomes pay lower rates, like the lower tax rates for capital gains and on tax-free investments like state and local bonds.

What Should Banks Be Allowed To Do?

Charles Morris offers a nice overview of the course of bank regulation in the last century or so in "What Should Banks Be Allowed To Do?" It appears in the Fourth Quarter 2011 issue of the Economic Review published by the Federal Reserve Bank of Kansas City.

For me, the article serves two useful purposes. First, it's a reminder of why bank deregulation in the 1980s and 1990s wasn't some clever ploy by financial-sector lobbyists, but was absolutely necessary given the evolution of the industry at that time. To be sure, the deregulation could have been carried out in different ways, posing different risks, but some kind of deregulation was unavoidable. Second, it makes the case for limiting what banks are allowed to do. I'm completely persuaded that the proposed reform would make the banking sector safer, with less risk of needing a bailout, but I'm less sure that the reform would make the financial sector as a whole safer. Let me say a bit more about each of these, drawing heavily on Morris's exposition.

The banking sector as it emerged from the 1930s had five characteristics salient for the discussion here: 1) it was overseen by bank regulators for safety; 2) it had access to a public safety net of emergency loans from the Fed and deposit insurance; 3) it was forbidden to go into other financial areas like investment banking, securities dealing, or insurance; 4) it faced legal limits on the interest it could pay on deposits; and 5) it faced geographic restrictions on branching across state lines and within states. In short, it was an industry that was shielded from competition, limited in what it could do, and heavily regulated.

In the 1970s, the wheels began to come off this wagon. Those who wished to save money began to seek out investment options like mutual funds, including money market mutual funds, and insurance companies. Banks were limited in the interest rate they could pay, and inflation was high. Banks began to hemorrhage deposits. Those who wished to borrow money found other options, too. They borrowed through commercial paper, through high-yield bonds, and through securitized markets including mortgage-backed securities and asset-backed securities. Separate finance companies made car loans and loans for retail purchases. Other companies financed trade receivables.

In short, both savers and borrowers were migrating outside the banking industry. The process of financial intermediation between savers and borrowers was increasingly happening outside the banking industry, in what came to be called the "shadow banking" sector. If banks had not been deregulated and allowed to compete in this new financial sector–at least in some ways–the banks themselves would have shrunk dramatically and a very large part of U.S. saving and borrowing would have passed completely outside the purview of bank regulators.

As banks were allowed to compete across the financial sector more broadly, starting in the 1980s, the industry began to consolidate. This made some sense: when banks were allowed to open branches within states and across state lines, for example, not as many small banks were needed. But the top banks not only became very large; an ever-growing share of their assets also lay outside the traditional business of banking. Here's how Morris summarizes how the industry evolved (footnote omitted):

"Technological improvements, interstate banking, and the GLB [Gramm-Leach-Bliley] Act resulted in fewer banks and a much more concentrated banking industry, with the largest BHCs [bank holding companies] ultimately engaging in more varied and nontraditional activities. For example, the number of banks fell from about 12,500 in 1990 to about 6,400 in 2011. The share of industry assets held by the 10 largest BHCs rose from about 25 percent in 1990 to about 45 percent in 1997 (just before the GLB Act) and to almost 70 percent in 2011. The share of loans and deposits of the top 10 BHCs also rose sharply (Table 1). In addition, only four of the 10 largest BHCs that existed before the passage of the GLB Act remain today (Citigroup, JPMorgan Chase, Bank of America, and Wells Fargo), with those four BHCs having acquired five of the other top 10 BHCs.

"Table 1 also shows how the activities of the 10 largest BHCs have changed in the past 14 years. In 1997, the share of banking assets relative to total assets at these companies was 87 percent, with only one company having a share less than 80 percent. Today, the share of banking assets is 58 percent, with only two BHCs having a share greater than 80 percent."

Morris's diagnosis and proposed solution are straightforward. Bank holding companies have gotten into too many risky financial activities, and so should be restrained. But Morris is also clearly and sensibly aware that just trying to turn back the clock to 1930s-style regulated banking isn't possible. That toothpaste is out of the tube. He suggests that banks be allowed to pursue three areas of business:

  • Commercial banking—deposit taking and lending to individuals and businesses.
  • Investment banking—underwriting securities (stocks and bonds) and providing advisory services.
  • Asset and wealth management—managing assets for individuals and institutions.

 Conversely, Morris argues that banks be barred from three other areas:

  • Dealing and market making—intermediating securities, money market instruments, and over-the-counter derivatives transactions for customers.
  • Brokerage services—brokering for retail and institutional investors, including hedge funds (prime brokerage).
  • Proprietary trading—trading for an organization’s own account and owning hedge and private equity funds.

The key distinction here for Morris is that underwriting securities and providing advice are largely fee-based services. They don't involve putting much of a bank's capital at risk. Dealing, market-making, hedge funds, and private equity all involve taking risks with capital that are harder both for the institution to understand and for regulators to monitor.

Morris's proposal is certainly sensible enough, but it does leave me with a couple of questions. First, if banks were holding lots of mortgage loans, as they clearly could be under Morris's proposal, then they would have been vulnerable to a meltdown in housing prices like the one that occurred. Thus, it's not clear to me that anything in this proposal would have limited the very aggressive home lending that occurred or the price meltdown afterward. Indeed, the sort of limited banks Morris advocates might in some ways have been even more exposed to losses in the housing market.

Second, Morris's proposal, like all "narrow bank" proposals, would clearly make the banking sector safer. But one of the disturbing facts about the financial troubles of 2008 was that it wasn't just commercial banks that were deemed to be systemically important to the U.S. economy: it was also investment banks like Bear Stearns, money market funds, insurance companies like AIG, brokers that sell Treasury bonds, and others. Focusing on banking is all very well, but the shadow banking sector and the potential risks that it poses aren't going away.

The Mundane Cost Obstacle to Nuclear Power

I've long believed that the main obstacles to expanding nuclear power were health and safety concerns: for example, the small chance of a plant malfunctioning, along with issues related to waste disposal and possible links between nuclear power technology and weapons technology. But Lucas Davis argues persuasively in "Prospects for Nuclear Power," in the Winter 2012 issue of my own Journal of Economic Perspectives (freely available on-line courtesy of the American Economic Association), that I've been assuming too much. Here's Davis (citations omitted):

"Nuclear power has long been controversial because of concerns about nuclear accidents, storage of spent fuel, and about how the spread of nuclear power might raise risks of the proliferation of nuclear weapons. These concerns are real and important. However, emphasizing these concerns implicitly suggests that unless these issues are taken into account, nuclear power would otherwise be cost effective compared to other forms of electricity generation. This implication is unwarranted. Throughout the history of nuclear power, a key challenge has been the high cost of construction for nuclear plants. Construction costs are high enough that it becomes difficult to make an economic argument for nuclear even before incorporating these external factors. This is particularly true in countries like the United States where recent technological advances have dramatically increased the availability of natural gas. The chairman of one of the largest U.S. nuclear companies recently said that his company would not break ground on a new nuclear plant until the price of natural gas was more than double today’s level and carbon emissions cost $25 per ton. This comment summarizes the current economics of nuclear power pretty well. Yes, there is a certain confluence of factors that could make nuclear power a viable economic option. Otherwise, a nuclear power renaissance seems unlikely."

The argument from Davis is complemented by some other recent discussions of nuclear power. The Federation of American Scientists has a report out on The Future of Nuclear Power in the United States, edited by Charles D. Ferguson and Frank A. Settle. The most recent issue of the Economist magazine (March 10) has a 14-page cover story on "Nuclear Power: The Dream that Failed." Finally, a Report to the Secretary of Energy by the Blue Ribbon Commission on America's Nuclear Future was released in late January.

Here is some basic background from Davis in his JEP article. The first figure shows nuclear power plants under construction around the world. Notice that the number of plants under construction in the United States and western Europe dropped to near zero in the 1990s. The recent spike in plants under construction is driven by the "other" category, which is largely China, but it remains to be seen how many of these plants will end up being completed.

The next figure shows the rising costs of constructing nuclear power plants in the United States. The costs are per kilowatt of capacity, and so adjusted for plant size. The costs are also adjusted for inflation.

Finally, this figure shows the slowdown in construction times–for example, plants started in the 1960s were completed in 8.6 years, while those completed in the 1970s took 14.1 years. Moreover, there was growing uncertainty as to whether a nuclear power plant would be completed at all: 89% of plants announced in the 1960s were completed, compared with only 25% of those announced in the 1970s.

Of course, it's not possible to separate cleanly the safety concerns over nuclear power from these cost issues: additional safety precautions–and the accompanying paperwork–are part of what drives up costs. But perhaps the more fundamental story here is that technological progress in nuclear power hasn't been fast enough to assuage concerns about safety and to drive down costs. Stephen Maloney digs into this in some detail in Chapter 2 of the FAS report, "A Critical Examination of Nuclear Power's Costs."

"Since the nuclear industry’s inception more than 50 years ago, its forecasts for costs have been consistently unreliable. The “first generation” plants, comprising both prototype reactors and the standard designs of the 1950s-1960s, failed to live up to promised economics. This trend continued with the construction of Generation II plants completed in the 1970s, which make up the present nuclear fleet.

"First, the total costs were far higher than for coal-generated electricity. In particular, the capital cost of nuclear plants built through 1980 were, on average, 50 percent higher than comparably-sized coal-fired plants, adjusting for inflation and including backfits to meet Clean Air Act standards. Second, there were extraordinary cost escalations over the original low cost promises. Nuclear plant construction costs escalated approximately 24 percent per calendar year compared to 6 percent annual escalation for coal plants. Third, the economies of scale expected were not achieved in the Generation II designs. The scale-up of nuclear plants brought less than half the economic efficiencies projected.

"In addition, over 120 nuclear units, approximately half the reactors ordered, were never started or cancelled. The total write-offs were more than $15 billion in nominal dollars. … In the late 1970s, the Atomic Industrial Forum (AIF), predecessor to the Nuclear Energy Institute, identified the main drivers of unmet expectations as growing understanding of nuclear accident hazards, failure of regulatory standardization policies, and increased documentation standards to ensure as-built plants actually met safety standards. The combined effects doubled the quantities of materials, equipment, and labor needed, and tripled the magnitude of the engineering effort for building a nuclear power plant."
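Just to illustrate what escalation rates of that size imply, here is a toy compounding calculation (not a figure from the FAS report): over a decade, 24 percent annual escalation and 6 percent annual escalation produce wildly different cost multiples.

```python
# Toy compounding comparison, not from the FAS report: what 24% vs. 6% annual
# cost escalation implies over a ten-year planning horizon.
years = 10
nuclear_multiple = 1.24 ** years   # roughly 8.6x the starting cost estimate
coal_multiple = 1.06 ** years      # roughly 1.8x
print(f"Nuclear cost multiple after {years} years: {nuclear_multiple:.1f}x")
print(f"Coal cost multiple after {years} years:    {coal_multiple:.1f}x")
```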

Of course, it's possible to sketch business and technological scenarios under which nuclear power plants of the future use simpler, safer designs, which combine with economies of scale in production to drive down costs. But such predictions haven't held true over the history of nuclear power, and they don't seem to be holding true recently, either. Here's one of many examples, from Maloney:

"In June 2006, a consortium of companies announced plans to build two more reactors at the South Texas Project site for an estimated cost of $5.2 billion. NRG, the lead company, made history by becoming the first company to file an application with the NRC. CPS Energy, a municipal utility, was one of its partners. In October 2007, CPS Energy’s board approved $206 million for preliminary design and engineering. In June 2009, NRG revised the estimate to $10 billion for the two reactors, including finance charges. A few weeks later, this estimate rose to $13 billion, including finance charges. Later that year, the estimate reached $18.2 billion …" Cost overruns of similar magnitude aren't just a U.S. phenomenon; for example, they also have occurred at recent nuclear power projects in France and in Finland.

To be sure, there are promising new nuclear technologies out there. One hot topic is small modular reactors, discussed both in the Economist article and by Daniel Ingersoll in Chapter 10 of the FAS report. But at some point, a degree of skepticism seems appropriate. The Economist has a wonderful quotation from Admiral Hyman Rickover, who drove the process that created America's nuclear submarines, and commented back in the 1950s:

"An academic reactor or reactor plant almost always has the following basic characteristics: (1) It is simple. (2) It is small. (3) It is cheap. (4) It is light. (5) It can be built very quickly. (6) It is very flexible in purpose. (7) Very little development will be required. It will use off-the-shelf components. (8) The reactor is in the study phase. It is not being built now. On the other hand a practical reactor can be distinguished by the following characteristics: (1) It is being built now. (2) It is behind schedule. (3) It requires an immense amount of development on apparently trivial items. (4) It is very expensive. (5) It takes a long time to build because of its engineering development problems. (6) It is large. (7) It is heavy. (8) It is complicated."

Finally, arguments over appropriate disposal of nuclear waste will surely continue. For an overview of these issues, a useful starting point is the Report to the Secretary of Energy by the Blue Ribbon Commission on America's Nuclear Future that was released in late January. Personally, I didn't find the Commission report especially encouraging about resolving these issues. For example, its first recommendation is to start a process of encouraging communities to volunteer to host nuclear waste disposal sites, a process the Commission thinks might take 15-20 years. Having just watched the argument over a possible repository at Yucca Mountain in Nevada run for 25 years, before the Obama administration's decision to halt that process, this time frame seems optimistic. Of course, there are alternatives: consolidated storage facilities, and technologies for processing nuclear waste. But the alternatives aren't cost-free, either.

Nuclear power isn't going away. Plants that have been working well still have several decades to run, and the marginal costs of running them are now low. Additional nuclear power plants will be built in countries where the government makes it a priority, or perhaps in some settings where other sources of power are extremely high cost. But as the U.S. enters what seems to be a time of cheap and plentiful natural gas, building a substantial number of new nuclear power plants in this country seems highly unlikely.

Small Firms and Job Creation

Most job creation comes from small firms–and so does most job destruction. The Congressional Budget Office has a useful summary of the evidence in a short March 2012 report, "Small Firms, Employment, and Federal Policy."
Here are some basic facts about firm size and employment. In 2011, firms with more than 1,000 employees were 0.2% of all firms, but accounted for 38.6% of all private-sector employees. Conversely, firms with fewer than 19 employees make up 87.5% of all firms, but have 18.4% of total employees. Here's the detailed table.

The share of private sector employment by firm size has barely budged in recent decades. The share of employment in very small enterprises of 1-19 employees is down a few percentage points, and the share of employment at the largest employers of over 500 employees is up just a touch, but the overall stability of these patterns is remarkable.

Studies of the effect of small firms on employment often seem to reach mixed results. One reason becomes clear if you think about this figure for a moment. If small firms grow enough in size, they aren't small any more. So does that mean that small firms stop contributing to employment growth if they get larger? Conversely, imagine a large firm in a death spiral, bleeding employees until it becomes a smaller firm. Does this now become an example of a small firm that is losing employees? If people are laid off at big firms, and then try to start small companies, is this evidence of the dynamism of small firms–or of a sick economy?
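One way to see the measurement problem is with a toy simulation, using entirely made-up numbers rather than anything from the CBO report: even if every firm's underlying size is stable, classifying firms as "small" by their base-year employment makes small firms look like net job creators, purely through regression to the mean.

```python
# Toy illustration of the base-year classification problem (made-up numbers,
# not from the CBO report). Each firm's underlying size is stable; employment
# just fluctuates around it, so there is no true net growth by size class.
import random

random.seed(0)
underlying = [random.choice([10, 20, 40]) for _ in range(9_000)]

def observed(size):
    # Transitory fluctuation of +/- 30% around the firm's underlying size.
    return max(1, round(size * random.uniform(0.7, 1.3)))

year1 = [observed(s) for s in underlying]
year2 = [observed(s) for s in underlying]

small = [i for i, emp in enumerate(year1) if emp < 20]        # base-year classification
small_change = sum(year2[i] - year1[i] for i in small)
total_change = sum(b - a for a, b in zip(year1, year2))
print("Job change at base-year 'small' firms:", small_change)                 # tends to be positive
print("Job change at all other firms:        ", total_change - small_change)  # tends to be negative
```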

The emphasis in recent studies of small firms and employment is on adjusting for the age of the firm. Here's the CBO explanation (footnotes omitted):

"It is widely believed that small firms promote job growth. In fact, small firms both create and eliminate far more jobs than large firms do. On balance, they account for a disproportionate share of net job growth—however, that greater net growth is driven primarily by the creation of new small firms, frequently referred to as start-ups, rather than by the expansion of mature small firms. …

"Recent research, however, has found that it is young small firms, especially start-ups, that grow faster—and consequently create jobs at a higher rate—than either large firms or established small firms do. One study found that the smallest firms, those with between one and four employees, grew 4.7 percent faster than the largest firms, those with more than 10,000 employees. However, when the comparison is made between firms of the same age, small firms grow more slowly than large firms do.

"Almost all firms start small. Many fail and, of those that do survive, most have no desire to expand beyond “small firm” status. Only a few grow substantially and become large firms. Thus, the faster average growth of young small firms is driven by the ambitions and successes of a fairly narrow set of start-up employers."

In other words, the issue isn't about small size per se, but rather about whether the firm is in a line of business that offers possibilities for dramatic expansion. Many small firms, like certain retail stores or various personal and professional services, don't have much potential to expand substantially, and aren't really seeking to do so. I remember a venture capitalist, who would be expected to be sympathetic to good news about small firms, once telling me: "The thing you need to remember about small firms is that a lot of what they do is sell to bigger firms." I'm not sure that statement is true about small firms in general, but I think it was true in terms of what he was looking for: that is, firms that were currently small but had the potential to link to a much wider market.

International Poverty: Progress and a Puzzle

Shaohua Chen and Martin Ravallion at the World Bank have prepared a "briefing note" with good news: global poverty rates are dropping. They write: "That means that 1.29 billion people in 2008 lived below $1.25 a day, as compared to 1.94 billion in 1981. 2.47 billion people in 2008 consumed less than $2 a day, as compared to 2.59 billion in 1981."

The idea of measuring poverty as $1.25 per day or $2 per day may shock some Americans. In contrast, the U.S. poverty threshold in 2011 for a three-person family, a single parent with two children, was $18,123, which works out to about $16.50 per person per day. But as the report explains: "$1.25 is the average of the national poverty lines found in the poorest 10-20 countries. Using this line, poverty in the world as a whole is being judged by what “poverty” means in the world’s poorest countries. Naturally, better off countries tend to have higher poverty lines than this frugal standard. $2 a day is the median poverty line for all developing countries."

The reduction in poverty rates reaches across all regions of the developing world. Not surprisingly, much of the most rapid reduction in poverty has come in China. As the figure shows, it used to be that including China with the rest of the developing world raised the overall poverty rate; now, including China reduces the overall poverty rate. Nonetheless, 173 million people in China remain below the $1.25 poverty line. As the World Bank describes it:

"Looking back to the early 1980s, East Asia was the region with the highest incidence of poverty in the world, with 77% living below $1.25 a day in 1981. By 2008 this had fallen to 14%. In China alone, 662 million fewer people living in poverty by the $1.25 standard, though progress in China has been uneven over time. In 2008, 13% (173 million people) of China’s population still lived below $1.25 a day. In the developing world outside China, the $1.25 poverty rate has fallen from 41% to 25% over 1981-2008, though not enough to bring down the total number of poor, which was around 1.1 billion in both 1981 and 2008, although rising in the 1980s and ‘90s, then falling since 1999 …"

The reduction in poverty rates is clearly good news, but the pattern of reduction across countries poses a puzzle that Martin Ravallion raises in "Why Don't We See Poverty Convergence?" in the February 2012 issue of the American Economic Review. The article isn't freely available on-line, although many academics will have access through their libraries. He sets up the puzzle this way:

"Two prominent stylized facts about economic development are that there is an advantage of backwardness, such that in a comparison of two otherwise similar countries the one with the lower initial mean income will tend to see the higher rate of economic growth, and that there is an advantage of growth, whereby a higher mean income tends to come with a lower incidence of absolute poverty. Past empirical support for both stylized facts has almost invariably assumed that the dynamic processes for growth and poverty reduction do not depend directly on the initial level of poverty. Under that assumption, the two stylized facts imply that we should see poverty convergence: countries starting out with a high incidence of absolute poverty should enjoy a higher subsequent growth rate in mean consumption and (hence) a higher proportionate rate of poverty reduction. That poses a puzzle. The data on poverty measures over time for 90 developing countries assembled for this article reveal little or no sign of poverty convergence."

Here is Ravallion's figure to illustrate the point. The horizontal axis shows initial poverty rates for 90 countries, mostly from the 1980s and 1990s as data became available. The vertical axis shows the decline in poverty rates from the start of the data up to 2005. Notice that the best-fit line doesn't show that countries which started from higher levels of poverty had larger reductions in poverty: if anything, the relationship goes a bit the other way.

Ravallion puts it this way (citation omitted): "The overall poverty rate of the developing world has been falling since at least 1980, but the proportionate rate of decline has been no higher in its poorest countries." This finding suggests that while poverty rates are diminishing over time, there is no particular reason, based on past patterns, to expect that poverty rates will fall more quickly where poverty is greatest. Ravallion offers the clear implication: there do often seem to be "advantages of backwardness," in which relatively poorer countries can draw on global knowledge and markets to reach a faster growth rate, but there is apparently also a drag from high poverty, in which the existence of a high poverty rate makes it harder for a country to reduce its poverty rate further. These factors tend to offset each other, and as a result, the poorer countries don't in fact reduce their poverty rates faster.
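For readers who want to see what "little or no sign of poverty convergence" means operationally, here is a minimal sketch of the kind of cross-country regression behind that figure, run on synthetic data rather than Ravallion's actual 90-country dataset: regress the subsequent proportionate decline in poverty on the initial poverty rate and inspect the slope.

```python
# Sketch of a poverty-convergence regression on synthetic data (not Ravallion's
# dataset). Convergence would show up as a clearly positive slope: higher
# initial poverty predicting a larger subsequent proportionate decline.
import numpy as np

rng = np.random.default_rng(1)
n = 90
initial_poverty = rng.uniform(0.05, 0.80, n)    # initial headcount poverty rate
# Generate declines unrelated to the starting level, mimicking "no convergence".
prop_decline = rng.normal(0.3, 0.25, n)         # proportionate decline by 2005

slope, intercept = np.polyfit(initial_poverty, prop_decline, 1)
print(f"Best-fit slope: {slope:.3f}")   # close to zero here, by construction
```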

Why might the existence of high poverty rates make it harder to grow? Perhaps high poverty rates reflect the lack of a middle class, which in turn makes it harder for an economy to grow. Perhaps high poverty rates reflect a poorly educated workforce, which makes investment in the country unprofitable, which slows growth. Perhaps high poverty rates lead to poor health, which reduces the prospects for growth. Ravallion investigates whether factors such as schooling, life expectancy, and the price of investment goods might provide a link from high initial poverty to the lack of reductions in poverty, but doesn't find statistical connections. Understanding why the poorest countries have no greater success in reducing their poverty rates remains a good research topic.

The U.S. and Europe: Productivity Puzzles and Information Technology

From the 1970s into the 1990s, productivity levels in Europe, as measured by output per hour, were converging with those in the United States. But since the mid-1990s, the U.S. productivity lead has been expanding.

Why? Nicholas Bloom, Raffaella Sadun, and John Van Reenen try to answer that question in "Americans Do IT Better: US Multinationals and the Productivity Miracle," which appears in the February 2012 issue of the American Economic Review. The article isn't freely available on-line, although many in academia will have access through a library subscription. Part of the answer is that the resurgence of U.S. productivity since about 1995 has been driven by industries that either make or use information and communications technology. As measured by the total stock of information technology capital divided by hours worked, the U.S. economy has opened a larger lead over Europe.

These patterns are fairly well known in the economics literature on the determinants of economic growth. For example, in the Winter 2008 issue of my own Journal of Economic Perspectives, which is freely available on-line courtesy of the American Economic Association, Dale W. Jorgenson, Mun S. Ho, and Kevin J. Stiroh offer "A Retrospective Look at the U.S. Productivity Growth Resurgence," which discusses how the resurgence in U.S. productivity growth in the mid-1990s was first led by productivity increases in the information-technology-producing sector, and then by productivity increases in industries that made intensive use of information technology. In that same issue, Bart van Ark, Mary O'Mahony, and Marcel P. Timmer discuss "The Productivity Gap Between Europe and the United States: Trends and Causes." They describe how European productivity was converging with U.S. levels until it started diverging, and point to a possible role for information technology, along with differences in labor and product market regulation.

In their just-published article, Bloom, Sadun, and Van Reenen pose the question, and their answer, this way (footnotes omitted):

"Given the common availability of IT throughout the world at broadly similar prices, it is a major puzzle why these IT related productivity effects have not been more widespread in Europe. There are at least two broad classes of explanation for this puzzle. First, there may be some “natural advantage” to being located in the United States, enabling firms to make better use of the opportunity that comes from rapidly falling IT prices. These natural advantages could be tougher product market competition, lower regulation, better access to risk capital, more educated or younger workers, larger market size, greater geographical space, or a host of other factors. A second class of explanations stresses that it is not the US environment per se that matters but rather the way that US firms are managed that enables better exploitation of IT (“the US management hypothesis”). These explanations are not mutually exclusive. …"

"Nevertheless, one straightforward way to test whether the US management hypothesis has any validity is to examine the IT performance of US owned organizations in a European environment. If US multinationals partially transfer their business models to their overseas affiliates—and a walk into McDonald’s or Starbucks anywhere in Europe suggests that this is not an unreasonable assumption—then analyzing the IT performance of US multinational establishments in Europe should be informative. Finding a systematically better use of IT by American firms outside the United States suggests that we should take the US management hypothesis seriously. …

"We report that foreign affiliates of US multinationals appear to obtain higher productivity than non-US multinationals (and domestic firms) from their IT capital and are also more IT intensive. This is true in both the UK establishment-level dataset and the European firm-level dataset. … Using our new international management practices dataset, we then show that American firms have higher scores on “people management” practices defined in terms of promotions, rewards, hiring, and firing. This holds true for both domestically based US firms as well as US multinationals operating in Europe. Using our European firm-level panel, we find these management practices account for most of the higher output elasticity of IT of US firms. This appears to be because people management practices enable US firms to better exploit IT."

They carefully add in a footnote: "It is plausible that higher scores reflect “better” management, but we do not assume this. All we claim is that American firms have different people management practices than European firms, and these are complementary with IT."
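For readers unfamiliar with the phrase, the "output elasticity of IT" is the coefficient on IT capital in a log-linear production function. A stylized version of that kind of specification (my notation, not the authors' exact equation) looks like this:

```latex
% Stylized log-linear production function (illustrative notation, not the
% authors' exact specification). The output elasticity of IT is \beta_{IT};
% a positive \delta would mean US-owned firms get more output from a given
% amount of IT capital.
\ln Y_{it} = \alpha_i
  + \beta_{IT} \ln K^{IT}_{it}
  + \delta \left( US_i \times \ln K^{IT}_{it} \right)
  + \beta_{K} \ln K^{other}_{it}
  + \beta_{L} \ln L_{it}
  + \varepsilon_{it}
```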

This work is part of a longer-term project by these authors, which seeks to spell out what is meant by "good management," and then to use survey data to figure out where "good management" is being practiced. In a Winter 2010 article in my own journal, Bloom and Van Reenen offer a useful overview of this work in "Why Do Management Practices Differ across Firms and Countries?"

They describe how they try to measure good management, using 18 different categories covering "aspects of management like systematic performance monitoring, setting appropriate targets, and providing incentives for good performance." Their group conducts interviews with middle-level corporate managers, and ranks firms on a scale of 1-5 in each of these 18 categories. One of their findings is that average management scores for U.S. firms are the highest in the world.

It sometimes seems to me, reading the news, that American firms are managed by time-serving functionaries who run the gamut from myopic to venal. But like most Americans, my direct experience with companies operating in the rest of the world is almost nonexistent. By international standards, managers of U.S. firms as a group may well be among the best in the world.

A Third Kind of Unemployment?

Economists typically think of unemployment as falling into two categories. There is "cyclical" unemployment, which is the unemployment that occurs because of a recession. And there is "structural" unemployment–sometimes called the "natural rate of unemployment," or the NAIRU, for "nonaccelerating inflation rate of unemployment." This is the rate of unemployment that would arise in a dynamic labor market even if there were no recession, as firms expand and contract and people move between jobs. The level of structural unemployment will be influenced by factors that influence the incentives of people to seek out jobs (like the costs of mobility between jobs and the structure of unemployment, welfare, and disability benefits) and the incentives of businesses to hire (including rules affecting the costs of business expansion, rules affecting what firms must provide to employees, and even rules affecting the costs of firing employees, if necessary, later on).

Inconveniently, the unemployment that the United States is currently experiencing doesn't fit neatly into either of the two conventional categories.

After all, the recession officially ended in June 2009, according to the Business Cycle Dating Committee of the National Bureau of Economic Research. However, the unemployment rate has been above 8% since February 2009, and in a February 2012 report on "Understanding and Responding to Persistently High Unemployment," the Congressional Budget Office is forecasting that it will remain above 8% until 2014.

In a conventional economic framework, it's not clear how to make sense of "cyclical" unemployment that persists for four or five years after the recession is over. However, the CBO and other forecasters have been predicting all along that the unemployment rate will eventually drop as the aftereffects of the Great Recession wear off, and in that sense it doesn't seem like natural or structural unemployment, either.

It's not clear what to call this persistent jobless-recovery unemployment. "Lethargic" unemployment? "Sluggish" unemployment? "Torpid" unemployment? "Tar-pit" unemployment?

However you label it, this is now the third consecutive "jobless recovery," in which it has taken a substantial time after the end of the recession for unemployment rates to come back down. It used to be that unemployment rates peaked almost right at the end of the recession, and then steadily dropped. Here's a graph of unemployment rates from the ever-useful FRED website of the St. Louis Fed. Periods of recession are shaded.
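For anyone who wants to reproduce that graph or check the dates cited below, here is a minimal sketch using the pandas-datareader package to pull the civilian unemployment rate (FRED series UNRATE); the series code is standard, but treat the snippet as illustrative rather than as the exact source of the figure.

```python
# Sketch: pull the monthly civilian unemployment rate (FRED series UNRATE)
# and plot it, roughly reproducing the kind of graph described above.
# Requires: pip install pandas-datareader matplotlib
import datetime
import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

start, end = datetime.datetime(1970, 1, 1), datetime.datetime(2012, 3, 1)
unrate = pdr.DataReader("UNRATE", "fred", start, end)

unrate.plot(legend=False)
plt.title("U.S. civilian unemployment rate (FRED: UNRATE)")
plt.ylabel("Percent")
plt.show()
```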

For example, when the 1974-75 recession ended in March 1975, unemployment was 8.6%. It climbed just a bit higher, to 9% in May 1975, but then fell steadily and by May 1978 was at 5.9%. Or look at the aftermath of the "back-to-back" recessions of 1980 and 1981-82. When the second recession ended in November 1982, the unemployment rate was also peaking, at 10.8%. It then dropped steadily and was down to 7.2% by November 1984 and 5.9% by September 1987.

In the jobless recoveries since then, the pattern has been different. When the 1990-91 recession ended in March 1991, the unemployment rate was 6.8%. But the unemployment rate kept rising, peaking more than a year later at 7.8% in June 1992. It wasn't until August 1993, more than two years after the economy had resumed growing, that unemployment rates fell back to the 6.8% rate that prevailed at the official end of the recession.

A similar pattern arose after the 2001 recession. At the end of that recession in November 2001, the unemployment rate was 5.5%. But then the unemployment rate kept rising, peaking at 6.3% in June 2003. It wasn't until July 2004 that the unemployment rate declined back to the 5.5% that had prevailed at the end of the 2001 recession.

In the most recent recession, unemployment was at 9.5% in June 2009, when the Great Recession officially ended. The official unemployment rate peaked at 10% in October 2009, and has drifted down since then. But in this recovery, the unemployment rate understates labor market woes, because the official unemployment rate only counts those who are "in the labor force," meaning that they are out of work but looking for a job. Those who have given up looking, or who are working part-time but would like full-time work, aren't counted as unemployed. The last few years have seen a dramatic drop in the "labor force participation rate," that is, the share of adults who are in the labor force. This rate rose substantially from the 1970s through the 1990s as a greater share of women entered the (paid) labor force. But with job prospects so poor, it has been dropping off.

The February 2012 CBO report describes the disconnect between the official unemployment rate and a broader appraisal of the U.S. labor market this way: "The rate of unemployment in the United States has exceeded 8 percent since February 2009, making the past three years the longest stretch of high unemployment in this country since the Great Depression. Moreover, the Congressional Budget Office (CBO) projects that the unemployment rate will remain above 8 percent until 2014. The official unemployment rate excludes those individuals who would like to work but have not searched for a job in the past four weeks as well as those who are working part-time but would prefer full-time work; if those people were counted among the unemployed, the unemployment rate in January 2012 would have been about 15 percent."

Our public discussions of what to do about these persistently high rates of lethargic or torpid unemployment have unfortunately been locked into the two older categories of cyclical and structural unemployment.

For example, some argue that if only the federal government had enacted an extra $1 trillion or so in fiscal stimulus, probably backed by a Federal Reserve willing to carry out another round of "quantitative easing" by printing money to finance the Treasury bonds for this stimulus, then the economy and the unemployment rate would be recovering much more quickly. But the federal government is in the process of running its four largest annual deficits since World War II, from 2009 to 2012. The Fed is planning to hold the benchmark federal funds interest rate near zero percent for six years (!), while also engaging in $2 trillion of quantitative easing. The amount of countercyclical macroeconomic policy has been massive, and I have a hard time believing that just one more boost would have fixed everything.

While I in general supported the countercyclical macroeconomic policies taken during the Great Recession (with some reservations about the details), it seems to me that countercyclical macroeconomic policy is like taking aspirin when you have a bad case of the flu–or, if you prefer a more extreme metaphor for an unemployment rate that may exceed 8% for 7-8 years, like an athlete taking a cortisone shot for an injury before playing in the big game. Such steps can be worth taking, and they can sometimes even modestly help the healing process, but they are palliative, not curative. Also, the CBO offers a reminder that while more fiscal stimulus could help the economy in the short term, it will injure the economy over the long run unless it is counterbalanced by a way of holding down government debt over time.

"Despite the near-term economic benefits, such actions would add to the already large projected budget deficits that would exist under current policies, either immediately or over time. Unless other actions were taken to reverse the accumulation of government debt, the nation’s output and people’s income would ultimately be lower than they otherwise would have been. To boost the economy in the near term while seeking to achieve long-term fiscal sustainability, a combination of policies would be required: changes in taxes and spending that would increase the deficit now but reduce it later in the decade."

But the standard policy agenda for dealing with structural unemployment doesn't seem particularly on-point just now, either. Sure, it would be useful to encourage mobility between jobs and to rethink how regulatory and other policies affect incentives to work and to hire. But while this kind of rethinking is always useful, it's not clear that it addresses the reality of high unemployment here and now.

We need a convincing theory of this third kind of unemployment–sluggish unemployment, tar-pit unemployment–and an associated sense of what policies are useful for addressing it. Firms as a group have high profits and strong cash reserves, but they are not seeing it as worthwhile to raise hiring substantially, preferring instead to focus on getting more productivity from the existing workforce. Are there ways to reduce the costs and risks that firms face when thinking about hiring? Many households are struggling with outsized debt burdens, including those who have mortgages that are larger than the value of their home. Are there policy levers to help them move past their debt burdens?

Long-term unemployment is very high. CBO writes: "[T]he share of unemployed people looking for work for more than six months—referred to as the long-term unemployed—topped 40 percent in December 2009 for the first time since 1948, when such data began to be collected; it has remained above that level ever since." What do we know about getting the long-term unemployed back into the labor force? Are there ways to encourage greater mobility of people between jobs, perhaps by spreading more information about job opportunities, making it easier for employers to verify skills of potential employees, or encouraging both greater geographic mobility and mobility across sectors of the economy?

Tolstoy famously started Anna Karenina with the comment: "All happy families are alike; each unhappy family is unhappy in its own way." Each unhappy recession is unhappy in its own way, too–and the Great Recession is quite different from previous post-war U.S. recessions. It needs some fresh thinking about policies to address what has happened.