U.S. Public Schools: Unequal Spending Across States

Mark Dixon of the U.S. Census Bureau has authored Public Education Finances: 2011. The headline finding is that in 2011, per capita spending on K-12 education declined. The drop was a small one, only about 0.4%, but it’s the first such drop in the last 40 years. It’s yet another symptom of how brutal the Great Recession and its aftermath have been for state and local finances.

But the other pattern that especially jumped out at me from the report is the difference in what is spent per student across states–a difference that has been large for a long time. Here’s a map:

For the U.S. as a whole, average public school K-12 current spending per student was $10,560 in 2011, with 61% of that going to “instruction,” and 35% going to “support services.”

But four jurisdictions–New York, Wyoming, Alaska, and the District of Columbia–spend more than $16,000 per K-12 student, with New York leading the way at $19,076 per student. Conversely, Idaho, Utah, Arizona, Oklahoma, and Mississippi spend less than $8,000 per student, with Utah having the lowest tally at $6,212 per student. That is, New York spends on average three times as much per K-12 student as does Utah.

It would of course be a vulgar error to assume that high spending is what’s important in K-12 education, when what actually matters is how the students achieve. Spending on education will reflect factors like the local cost of living, the extent of poverty and special needs in the state, and the legacy of past negotiations with the teachers’ unions. Still, the discrepancies in what states spend are striking.

Preschool for At-Risk Children, Yes; Universal Preschool, Maybe Not

In January, I blogged in “Head Start is Failing Its Test” about a high-quality study being done by the U.S. Department of Health and Human Services, which found: “In summary, there were initial positive impacts from having access to Head Start, but by the end of 3rd grade there were very few impacts found for either cohort in any of the four domains of cognitive, social-emotional, health and parenting practices. The few impacts that were found did not show a clear pattern of favorable or unfavorable impacts for children.”

In the most recent issue of the Journal of Economic Perspectives, Greg J. Duncan and Katherine Magnuson offer a broader and modestly more hopeful angle in their paper, “Investing in Preschool Programs.” (Like all papers in JEP back to the first issue in 1987, this is freely available on-line compliments of the American Economic Association. Full disclosure: I’ve been the Managing Editor of JEP since the journal started in 1987.)

One of the main difficulties in evaluating preschool programs is that just comparing children who are in such programs with children who are not won’t be a fair approach. After all, families differ in many ways, some of them not easily measurable, and these differences need to be taken into account. Thus, a preferred method is an “experimental” approach, in which, out of a group of families, some are randomly assigned to the preschool program and some are not. Of course, one can then look at observable characteristics to see if the assignment was really random: that is, families with children randomly chosen to be enrolled in the preschool program should have the same average income, education level, employment level, proportion of single parents, and so on, compared with the families whose children were not randomly enrolled in the program. But random enrollment also offers a plausible way of adjusting for unobservable differences in families, like the level of emphasis that the family puts on school or persistence or work.

Duncan and Magnuson focus on the studies of preschool done with these kinds of random-assignment experimental methods, and identify 84 such studies over the last half-century (including the Head Start study mentioned earlier). They summarize the results of these 84 studies in this figure.

The horizontal axis of the figure shows the year the study was done. The vertical axis shows the size of the effect found in the study, measured as the average of the cognitive and achievement gains found for those attending preschool. For perspective, the achievement gap between black and white children entering kindergarten is about one standard deviation. Circles with a black outline are Head Start programs. The gains are measured at the end of the preschool enrollment period–before such gains have had a chance to fade out. From this figure and their surrounding discussion, here are some key points:

1) There is evidence of short-term gains from preschool programs.
2) The average level of gains from such programs seems to be falling over time (as shown by the downward-sloping line). This finding is distressing, because one would hope that such programs could become more effective over time. But a likely reason is that parents (especially the mothers) of children in these preschool programs are not as deeply disadvantaged in recent years as they were back in the 1960s, when levels of literacy, health, and income were much lower. Because the parents are better educated and have higher incomes, the gains from preschool are smaller.
3) Academic gains from preschool programs tend to fade out. As they write: “Most early childhood education studies that have tracked children beyond the end of the program treatment find that effects on test scores fade over time.”
4) However, some very long-term studies have found that although measures of cognitive and achievement gains fade, there are often still observable behavioral effects like improved high school graduation rates, lower rates of teen parenthood, and lower rates of criminal behavior. This finding creates a puzzle, because even with batteries of questionnaires and tests for the children, their teachers, their parents, and others, researchers have not yet been able to measure what it is in preschool programs that might produce these positive long-term results.

So what does all of this mean for proposals for universal preschool, like the “Preschool for All” initiative announced by President Obama along with his 2014 proposed budget? I know that for some people, the word “universal” is a sort of talisman that carries echoes of equality and justice. But it’s perhaps worth noting as a starting point that although preschool attendance has been increasing over time, it is far from universal now for any income group. Duncan and Magnuson provide this figure showing the share of 3- and 4-year-olds enrolled in preschool, divided up by income level. Preschool attendance has risen over the decades, but it still covers only about half of all children. In discussions of “universal” preschool, it’s never quite clear to me whether universality is intended to apply to all income levels, or whether only the children of low-income families are to “universally” attend preschool.


A more detailed look at the studies also suggests that there is no one-size-fits-all universal answer to the best experiences for 3- and 4-year-olds. For example, some of these studies suggest that children from low-income families benefit more than children from high-income families. Often the academic benefits of preschool are higher for girls, but the behavioral benefits are higher for boys. Children with very low birthweights often do not benefit much from preschool, perhaps because low birthweights can be a signal of reduced capabilities. Preschool programs vary quite widely in how their teachers are trained, in how their classrooms balance the academic and emotional needs of children, and in their curriculum. There are alternative early interventions that may be more productive for some children: for example, home visitation for high-risk, first-time mothers, or interventions for children living in families with documented domestic violence.

The word “universal” is no guarantee of quality. After all, we have “universal” K-12 schooling, but that certainly doesn’t mean that all children across the United States are in high-quality or even roughly equivalent schools. “Universal” is certain to be costly. There is a strong case for further experiments with an array of early childhood interventions, including preschool, to have a better sense of what works, and why. Right now, the sad truth is that there is so much we don’t know.

Spending on America’s Pets

Steve Henderson of the U.S. Bureau of Labor Statistics pulls together data from the Consumer Expenditure Survey and looks at “Spending on pets.” He writes (footnotes omitted):

“Nearly three-quarters of U.S. households own pets. There are about 218 million pets in the United States, not counting several million fish. … Americans spent approximately $61.4 billion in total on their pets in 2011. On average, each U.S. household spent just over $500 on pets. This amounts to about 1 percent of total spending per year for the average household. … Expenditures on pets include pet food, pet purchases, supplies and medicine, pet services, and veterinarian services.”

Two comparisons leaped to mind when I saw these figures. First, all through last year there were stories about the high spending of the 2012 U.S. election campaign. The Federal Election Commission reports that about $7 billion was spent. But to put the number in the context of dogs and cats, America spent about nine times as much on pet care as it did on all its federal elections in 2012.

Second, the World Bank often uses a poverty line of $1.25/day in consumption to measure deep destitution in developing countries. Nearly one-third of the population of South Asia and nearly half the population of Africa have consumption levels below this line. Over 365 days in a year, $1.25/day works out to $456.25. Thus, the average U.S. household spends more on pets than the poverty line for humans in the developing world. And the statistics don’t include the fact that pets live rent-free.
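For readers who want to check the back-of-the-envelope arithmetic behind both comparisons, here is a quick sketch using only the figures quoted above (variable names are mine):

```python
# Back-of-the-envelope check of the two comparisons, using figures from the text.

pet_spending_total = 61.4e9   # total U.S. spending on pets, 2011
election_spending = 7.0e9     # reported spending on the 2012 federal elections
pet_per_household = 500       # average annual household spending on pets

# Pets vs. elections: roughly a nine-to-one ratio
ratio = pet_spending_total / election_spending
print(round(ratio, 1))        # 8.8, i.e. "about nine times as much"

# The World Bank's deep-poverty line, annualized
poverty_line_annual = 1.25 * 365
print(poverty_line_annual)    # 456.25

# Average household pet spending exceeds the annualized poverty line
print(pet_per_household > poverty_line_annual)   # True
```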

In his short essay, Henderson offers some less combustible comparisons. He writes: 

  • “In 2011, households spent more on their pets annually than they spent on alcohol ($456), residential landline phone bills ($381), or men and boys clothing ($404).
  • “Average household spending on pet food alone was $183 in 2011. This was more than the amount spent on candy ($87), bread ($107), chicken ($124), cereal ($175), or reading materials ($115).
  • “Even when spending at restaurants dropped during the recent recession (December 2007–June 2009), spending on pet food stayed constant. (See chart 1.)”

Average annual expenditures on pet food and food away from home, 2007–2011

I am deeply aware of all the grim news for the U.S. economy during the last five years. But when the average household is spending $500/year on pets, it’s a reminder that America’s average standard of living remains quite high.

Global Urbanization and the Governance Challenge


The overall focus of the Global Monitoring Report, jointly published by the World Bank and the IMF, is on how the world is doing in achieving the “Millennium Development Goals,” or MDGs. But the annual reports also look at some particular angle or topic more closely, and the 2013 GMR focuses on urbanization and its linkages to economic development. Here’s a taste of some main themes:
“Urbanization matters. In the past two decades, developing countries have urbanized rapidly, with the number of people living in urban settlements rising from about 1.5 billion in 1990 to 3.6 billion (more than half of the world’s population) in 2011. … Nearly 50 percent of the population in developing countries was urban in 2011, compared with less than 30 percent in the 1980s. Urban dwellers are expected to double between 2000 and 2030, from 2 billion to 4 billion people, and the number of Chinese urban dwellers will increase from more than 622 million today to over 1 billion in 2030. This trend is not unique to developing countries—today’s high-income countries underwent the same transformation in the 20th century. In fact, virtually no country has graduated to a high-income status without urbanizing, and urbanization rates above 70 percent are typically found in high-income countries.”

Overall, the report emphasizes that the global shift to urbanization works together with many development goals.
“Cities and towns are hubs of prosperity—more than 80 percent of global economic activity is produced in cities by just over half of the world’s population. Economic agglomeration increases productivity, which in turn attracts more firms and creates better-paying jobs. Urbanization provides higher incomes for workers than they would earn on a farm, and it generates further opportunities to move up the income ladder. Between 1990 and 2008, rural poverty rates were, without exception, higher than urban poverty rates …
“Location remains important at all stages of development, but it matters less in rich countries than in poor ones. Estimates from more than 100 Living Standard Surveys indicate that households in the most prosperous areas of developing countries such as Brazil, Bulgaria, Ghana, Indonesia, Morocco, and Sri Lanka have an average consumption almost 75 percent higher than that of similar households in the lagging areas of these countries. In comparison, the disparity is less than 25 percent in developed countries such as Canada, Japan, and the United States.”

The gains from urbanization are not just in terms of income, but also in terms of other development goals like access to clean water, education, and health care.

“For example, on average, the cost of providing piped water is $0.70–$0.80 per cubic meter in urban areas compared with $2 in sparsely populated areas. South Asia and Sub-Saharan Africa have the largest rural-urban disparities in all service delivery indicators. The poor often pay the highest price for the water they consume while having the lowest consumption levels. For example, in Niger, the average price per cubic meter of water is CFAF 182 for piped water from a network, CFAF 534 from a public fountain, and CFAF 926 from a vendor. And poor access to basic infrastructure disproportionately affects rural women, because they perform most of the domestic chores and often walk long distances to reach clean water. …
“In 2010, 96 percent of the urban population but 81 percent of the rural population in developing countries had access to safe drinking water. Disparities in access to basic sanitation were greater: 80 percent of urban residents but only 50 percent of rural residents had access to a toilet. Schooling and health care can also be delivered with economies of scale in dense environments, close to where people actually live.”

But of course, the wave of urbanization is also generating urban slums, which are different from rural poverty, but pose difficult challenges of their own.

“Slums are the urban face of poverty and emerge when cities are unable to meet the demand for basic services and to supply the expected jobs. A likely 1 billion people live in urban slums in developing countries, and their numbers are projected to grow by nearly 500 million between now and 2020. Slums are growing the fastest in Sub-Saharan Africa, southeastern Asia, and western Asia. Currently, 62 percent of Africa’s urban population lives in slums. … Those in slums lack ownership of property, or even clearly legal rentals; poor services of many kinds; informal employment only; lack of access to credit and services.”

The issue of slums is in some ways a governance challenge: not just how to provide services in the present to slum-dwellers, but how to establish an appropriate institutional and physical infrastructure.
“Urbanization is largely a natural process, driven by the opportunities cities offer. Unregulated markets are unlikely to get densities right, however, and spontaneous development of cities can create negative side effects such as congestion or, alternatively, excessive sprawl. The consequences are pollution and inefficiencies. Without coordinated actions, cities will lack the proper investments to benefit from positive externalities generated by increased density. Higher-quality construction material and more sophisticated buildings are required to support greater densities, but if these higher costs must be fully internalized by firms and households, underinvestment is the result. In addition, complementary physical infrastructure is critical: roads, drainage, street lighting, electricity, water, and sewerage, together with policing, waste disposal, and health care. While a market-driven process could possibly gradually increase densities through shifting land values over time, the long-lived and lumpy nature of urban investment often inhibits such a process. A city’s physical structures, once established, may remain in place for more than 150 years. … Under current trends, the expected increases in the urban population in the developing world will be accompanied by a tripling in the built-up area of cities, from 200,000 to 600,000 square kilometers.”

The challenge of thinking about how to address urban slums cannot be overstated. The slums already involve 1 billion people, a total that will grow by over 50% in the next decade. The built-up area of cities in the developing world is likely to triple. And major decisions about physical structure can have a life expectancy of more than a century. There is a delicate balance between needing an overall master plan and needing a lot of flexibility for evolution and change over time. Even after deciding what to do, financing poses challenges of its own: in particular, how can developing countries establish ways for poor people to afford what they need, while paying what they can? Then actually getting projects built is often quite difficult, too. Urban areas will differ, of course, in their locations and needs. But there is a great need for rules of thumb, and ideas about best practices, that can offer some guidance for the urban developments that lie ahead.

A Defense of the Financial Sector

The financial sector needs some defenders, and John H. Cochrane steps forward with a bracing essay, “Finance: Function Matters, Not Size,” in a symposium in the Spring 2013 issue of the Journal of Economic Perspectives. (Full disclosure: I’ve worked as Managing Editor of the JEP since 1987.) Here, I’ll list some of the main points that I took away from Cochrane’s essay in boldface type, with quotations from the article following.

Economists have been arguing for a half-century that active portfolio management isn’t worth the fees paid for it (for example, see my post from yesterday). But when high-fee active portfolio management has persisted for decades in the face of such criticism, perhaps it’s the critics who should be wondering if they are correct.

“High-fee active management and underlying active trading have been deplored by academic finance for a generation. … It seems the average investor should save 60 basis points a year and just buy a passive index such as Vanguard’s Total Stock Market Portfolio. It seems that the stock pickers should do something more productive, like drive cabs. Active management and its fees seem like a total private, and social, waste. Yet this hallowed view—and its antithesis—do not completely make sense. After all, active management and fees have survived 40 years of efficient-market disdain. Economists who would dismiss “people are stupid” as an “explanation” for a pricing anomaly that lasts 40 years surely cannot use the same “explanation” for the persistence of active management.”

There are lots of inefficiencies in financial markets that can be exploited, at least for a time, to make profits.

“But the last 20 years of finance research is as clear as empirical research in economics can be: There is alpha relative to the market portfolio—there are strategies that deliver average returns larger than the covariation of their returns with the market portfolio justifies—lots of it, and all over the place. … Examples of such strategies include value (stocks with low market value relative to accounting book value), momentum (stocks that have risen in the previous year), stocks of companies that repurchase shares, stocks of companies with accounting measures of high expected earnings, and stocks with low betas. The “carry trade” in maturities, currencies, and credit—buy high-yield securities, sell low-yield securities—and writing options, especially the “disaster insurance” of out-of-the-money put options, all generate alpha. Expected returns on the market and most of the anomaly strategies vary predictably over time, implying profitable dynamic trading strategies.”

Highly sophisticated investors pay for active management of their financial assets, and apparently believe they are getting a good deal.  

“Delegating active management and paying large fees is common and increasing among large, completely unconstrained, and very sophisticated investors. For example, the Harvard endowment was in 2012 about two-thirds externally managed by fee investors and was 30 percent invested in “private equity” and “absolute return,” largely meaning hedge funds. The University of Chicago endowment is similarly invested in private equity and “absolute return.” Apparently, whatever qualms some of its curmudgeonly faculty express about alphas, fees, and active management are not shared by the endowment. … Why have these decision procedures become standard practice? Vague reference to “agency problems” and “naiveté” seem unpersuasive. Harvard’s endowment was overseen by a high-powered board, including its president Larry Summers, possibly the least naive investor on the planet. The picture that Summers and his board, or the high-powered talent on Chicago’s Investment Committee are simply too naive to demand passive investing, or that they really want the endowments to be invested in the Vanguard total market index, but some “agency problem” with the managers they hire and fire with alacrity prevents that outcome from happening, simply does not wash.”

The existence of financial bubbles suggests that markets are inefficient, too. But many of those who are most insistent that financial markets are inefficient often shy away from the logical implication that if the market is inefficient, it might benefit from additional trading. 

“The common complaints “the financial crisis proves markets aren’t efficient,” or that tech and mortgages represented “bubbles,” are at heart complaints that there was not enough active information-based trading. All a more “efficient” market could have done is to crash sooner, by better expressing the pessimist’s views. … If information is not incorporated into market prices and to such an extent that simple strategies with big alphas can be published in the Journal of Finance, there are not enough arbitrageurs. If asset prices fall in “fire sales,” only to rebound later, there are not enough buyers following the fire trucks. If credit constraints are impeding the flow of capital, there is a social benefit to loosening those constraints.”

Do we care about the size of the financial sector or the instability of the financial sector? (And no, they aren’t the same thing.)

“The increase in fees for residential loan origination is easily digested as the response to an increase in demand. The increase in housing demand may indeed not have been “socially optimal” (!). There are plenty of government policies and perhaps a few market dislocations to blame. But it doesn’t make much sense to criticize growth in the financial industry for responding to this increase in demand, whatever its source, or for passing along the subsidized credit—which was and remains the government’s explicit intention to increase—with the customary fee. … There was a lot of financial innovation in mortgage-backed securities, some of which notoriously exploded. But here again, whether we spend a bit of GDP filling out forms or paying fees is clearly the least of the social benefit and cost questions. The “shadow banking” system was prone to a textbook systemic run, which happened. This fragility, not the size or fraction of GDP, is the important issue.”

We don’t really understand the process of price discovery in financial markets, and as a result, passive investing may be less intuitively attractive at second glance.

“The fact staring us in the face is that “price discovery,” the process by which information becomes embedded in market prices, uses a lot of trading volume, and a lot of time, effort, and resources. And we are only beginning to understand it. … [P]erhaps we should work just a little harder before dismissing the hundreds of years of trading activity, and the entire existence of the New York Stock Exchange, Chicago Mercantile Exchange, and other markets, as monuments to human folly, or before advocating regulations such as transactions taxes—the perennial favorite answer in search of a question—to reduce trading volume whose size, function, and operation we do not understand. Are we sure that they should not be transactions subsidies? And before we deplore, it’s worth remembering just how crazy passive indexing sounds to any market participant. “What,” they might respond, “would you walk in to a wine store and say ‘I can’t tell good from bad, and the arbitrageurs are out in force. I sure won’t pay you 1 percent for recommendations. Just give me one of everything’?””

The important aspects of the financial sector that we don’t understand are a good basis for research, but in the real world of political economy, they could well be a bad basis for additional regulation.

“Surveying the current economic literature on these issues, it is certain that we do not very well understand the price-discovery and trading mechanism, nor the economic forces that allowed high-fee active management to survive so long. Unless we adopt the arrogant view that what we don’t understand must be bad, it is clearly far too early to make pronouncements such as “There is likely too much high-cost, active asset management,” or “Society would be better off if the cost of this management could be reduced.” Such statements are not supported by theory or evidence. Nor is their not-so-subtle implication that resources devoted to greater regulation—by politicians and regulators no less naive than current investors, no less behaviorally-biased, armed with no better understanding than academic economists, and with much larger agency problems and institutional constraints—will improve matters.”

Cochrane’s paper is part of a five-paper symposium on “The Growth of the Financial Sector” in the Spring 2013 issue of the Journal of Economic Perspectives.

Economies of Scale in Asset Management: Who Benefits?

The total assets managed by domestic equity funds rose from $26 billion in 1980 to $3.5 trillion in 2010. Would you expect the expenses charged by such funds to rise in proportion to the amount that they manage? Or by less?

Burton G. Malkiel argues in “Asset Management Fees and the Growth of Finance,” in the Spring 2013 issue of my own Journal of Economic Perspectives, that there should be considerable economies of scale in managing a stock portfolio. (Like all articles in JEP back to the first issue in 1987, it is freely available on-line compliments of the American Economic Association.) Malkiel writes:

“There should be substantial economies of scale in asset management. It is no more costly to place an order for 20,000 shares of a particular stock than it is to order 10,000 shares. Brokerage commissions (which are usually set in a flat dollar amount per transaction, at least within broad ranges of transaction size) are likely to be similar for each purchase ticket, as are the “custodial fees” paid to the bank that holds the securities that are owned. The same annual report and similar filings to the Securities and Exchange Commission are required whether the investment fund has $100 million in assets or $500 million. The due diligence required for the investment manager is no different for a large mutual fund than it is for a small one. Modern technology has fully automated such tasks as dividend collection, tax reporting, and client statements.”

Malkiel also cites more rigorous academic studies that find economies of scale. But despite the more than 100-fold increase in assets under management for these funds, the average amount paid in expenses has not declined in three decades. Here’s the table, showing an expense ratio of 66 basis points of assets under management in 1980, but 69.2 basis points as a share of assets under management in 2010.
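For readers unused to the jargon: a basis point is one-hundredth of a percentage point, so 66 basis points means an annual fee of 0.66% of assets. A small sketch of what the ratios above imply in dollar terms (the asset and fee figures come from the text; the function name is mine):

```python
def annual_fee(assets, basis_points):
    """Annual management fee in dollars, given an expense ratio in basis points.
    One basis point = 1/100 of a percentage point = 1/10,000 of assets."""
    return assets * basis_points / 10_000

# Figures quoted in the text: $26 billion at 66.0 bp in 1980,
# $3.5 trillion at 69.2 bp in 2010.
print(annual_fee(26e9, 66.0))    # 171600000.0  -> about $172 million in fees
print(annual_fee(3.5e12, 69.2))  # 24220000000.0 -> about $24 billion in fees
```

The point of the comparison: even with a nearly flat expense ratio, total fees collected grew by a factor of more than 100 along with assets.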

As Malkiel writes: “Surely, there had to be enormous economies of scale that could have been passed on to consumers, resulting in a lower cost of management as a percentage of total assets. But we will see below that the scale economies in asset management appear to have been entirely captured by the asset managers. The same finding appears to hold for asset managers who cater to institutional investors.”

The table shows some other interesting factors. Equity funds can either be “actively managed,” by those trying to anticipate where the market is headed, or “passively managed,” by index funds that seek only to replicate what happens in the market. In 1980, 99.7% of all stock market funds were actively managed; by 2010, 71% were actively managed.

The expense ratios for passively managed funds are often very low, at 7 basis points or less. The expense ratios for actively managed funds alone (shown in the second column of the table) have actually risen from 66 basis points back in 1980 to over 90 basis points as a share of assets in the last decade or so. And remember, this is during a period when economies of scale should have been a force for driving down expenses as a share of assets!

Back in 1973, Burton Malkiel published the first edition of his classic A Random Walk Down Wall Street. (I think the 9th edition came out a couple of years ago.) The book offers a readable and persuasive statement of the argument that stock prices already reflect past information, and that they will rise and fall based on new information. Because new information is, by definition, not predictable (or else it would be part of past information!), stock prices will move up and down unpredictably. Malkiel has been making the case for 40 years that while actively managed equity funds charge higher fees than passively managed funds, they do not on average have higher returns. In this essay, he writes:

“Clearly, one needs some active management to ensure that information is properly reflected in securities prices. Those professionals who act to exploit any differential—however small—between price and estimated value deserve to be compensated for their efforts. But it appears that the number of active managers and the costs they impose far exceed what is required to make our stock markets reasonably efficient, in the sense that no clear arbitrage opportunities remain unexploited. Worldwide, vast numbers of highly trained independent experts are expressing estimates of value each day. Outperforming the consensus of hundreds of thousands of professionals at the world’s major financial institutions is next to impossible, as it has been for decades. … The major inefficiency in financial markets today involves the market for investment advice, and poses the question of why investors continue to pay fees for asset management services that are so high.”

I’m sure there are people and institutions that can benefit from sophisticated investment advice as they seek to hedge the specific risks they face while leaping to exploit the occasional profit opportunities provided by temporary anomalies in financial markets. But most average investors in actively managed funds are not following this pattern. They are following either their own gut reactions, or the gut reactions of an active portfolio manager, about what is likely to rise and fall. In doing so, they are paying much higher fees than an index fund would have cost. Malkiel cites one study that found that if the average mutual fund investor in actively managed funds had just bought and held a passive index fund from 2000 to 2011, rather than trying to chase every trend, that average investor would have increased return on investment by almost 2 percentage points per year–a gain of more than 20% over the decade.
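The compounding behind that last claim is easy to verify: a 2-percentage-point annual return gap, compounded over a decade, adds up to more than 20% (the 2-point figure is from the study Malkiel cites; the arithmetic below is just compound growth):

```python
# A 2 percentage point annual return advantage, compounded over 10 years.
advantage = 0.02
years = 10

cumulative = (1 + advantage) ** years
print(round(cumulative, 3))          # 1.219
print(f"{cumulative - 1:.1%} more wealth after a decade")  # 21.9% more wealth after a decade
```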

Why Did the U.S. Financial Sector Grow?

It’s widely known that the U.S. financial sector has grown substantially in recent years. But by how much? And in what specific areas? Robin Greenwood and David Scharfstein offer a useful breakdown in “The Growth of Finance,” which appears in the most recent (Spring 2013) issue of my own Journal of Economic Perspectives. Like all JEP papers from the most recent back to the first issue in 1987, it is freely available on-line compliments of the American Economic Association. Greenwood and Scharfstein write at the start: “During the last 30 years, the financial services sector has grown enormously. This growth is apparent whether one measures the financial sector by its share of GDP, by the quantity of financial assets, by employment, or by average wages. At its peak in 2006, the financial services sector contributed 8.3 percent to US GDP, compared to 4.9 percent in 1980 and 2.8 percent in 1950.”

But what is actually meant by “the finance sector”? Here’s a useful figure dividing the sector into three parts: securities, credit intermediation, and insurance.

In their discussion, they set aside the insurance part of the financial services sector. Of course, there are important issues in health insurance, liability insurance, and other types of insurance. But when people argue that “the financial sector” is too big, or that an over-expansion of the financial sector helped to bring on the Great Recession, they aren’t referring to standard insurance markets. Greenwood and Scharfstein summarize the other two main sectors in this way:

“The securities subsector … includes the activities typically associated with investment banks (such as Goldman Sachs) and asset management firms (such as Fidelity). These activities include securities trading and market making, securities underwriting, and asset management for individual and institutional investors. The credit intermediation industry performs the activities typically associated with traditional banking—lending to consumers and corporations, deposit taking, and processing financial transactions.”

The authors turn over the available evidence on what has happened within these subsectors over time (and no, this sort of data is not easily available), and argue (citations omitted here and throughout):

“Our main finding is that much of the growth of finance is associated with two activities: asset management and the provision of household credit. The value of financial assets under professional management grew dramatically, with the total fees charged to manage these assets growing at approximately the same pace. A large part of this growth came from the increase in the value of financial assets, which was itself driven largely by an increase in stock market valuations (such as the price/earnings multiples). There was also enormous growth in household credit, from 48 percent of GDP in 1980 to 99 percent in 2007. Most of this growth was in residential mortgages. Consumer debt (auto, credit card, and student loans) also grew, and a significant fraction of mortgage debt took the form of home equity lines used to fund consumption. The increase in household credit contributed to the growth of the financial sector mainly through fees on loan origination, underwriting of asset-backed securities, trading and management of fixed income products, and derivatives trading.”

Their conclusions on these points are balanced, but lean toward the view that even if much of the growth in financial services has been productive, it went too far on the margin. They write:

“Thus, any assessment of whether and in what ways society benefited from the growth of the financial sector depends in large part on an evaluation of professional asset management and the increase in household credit. In our view, the professionalization of asset management brought significant benefits. The main benefit was that it facilitated an increase in financial market participation and diversification, which likely lowered the cost of capital to corporations. Young firms benefited in particular, both because they are more reliant on external financing and because their value depends more on the cost of capital. At the same time, the cost of professional asset management has been persistently high. While the high price encourages more active asset management, it may not result in the kind of active asset management that leads to more informative securities prices or better monitoring of management. It also generates economic rents that could draw more resources to the industry than is socially desirable.”

“While greater access to credit has arguably improved the ability of households to smooth consumption, it has also made it easier for many households to overinvest in housing and consume in excess of sustainable levels. This increase in credit was facilitated by the growth of “shadow banking,” whereby many different types of nonbank financial entities performed some of the essential functions of traditional banking, but in a less-stable way. The financial crisis that erupted late in 2007 and proved so costly to the economy was largely a crisis in shadow banking.”

The Spring issue of the JEP actually includes four other papers with varying perspectives on the growth of the financial sector. In the next couple of days, I’ll post about some of these very divergent views.

But as a prelude, I’ll point out that most of the time, when economic activity grows in a certain area, those of us who believe in economic prosperity tend to view that growth as a good thing. If the U.S. car industry or computer industry racked up large sales, that would be viewed in a positive light. Clearly, many people feel differently about the financial sector. But is that negative reaction just a manifestation of a long-standing generalized prejudice against finance? After all, I’m delighted that I can put my retirement savings into a no-load mutual fund, and that I don’t have to try to construct and manage such a fund on my own. I’m delighted when it’s easy for me to get a mortgage.

It seems undeniable to me that excesses in the financial sector played a large role in the run-up to the Great Recession. But maybe the problem isn’t the size of the financial sector, but rather its instability. After all, many of the proposals for a higher level of regulation will impose higher costs that in turn will tend to make the financial sector larger, not smaller.

What If You Aren’t the Average College Student?

The offices of high school guidance counselors and directors of college admissions are full of statistics about how the average person with a college degree earns much more than the average person with a high school degree. But making any decision based on averages is a tricky business. After all, college itself is not a single experience. There is a wide array of public and private schools, with different costs. There is a wide array of fields of study, with very different job prospects. Schools vary considerably in what share of their students graduate. And even further, high school students are very different from each other. If you have been a 30th-percentile high school student, it is perfectly reasonable to wonder whether you are going to be an average college student.

Stephanie Owen and Isabel V. Sawhill draw upon a wide array of research on many of these questions in their essay, “Should Everyone Go To College?” written for the Center on Children and Families at the Brookings Institution. As they note at the start:

“There is enormous variation in the so-called return to education depending on factors such as institution attended, field of study, whether a student graduates, and post-graduation occupation. While the average return to obtaining a college degree is clearly positive, we emphasize that it is not universally so. For certain schools, majors, occupations, and individuals, college may not be a smart investment. By telling all young people that they should go to college no matter what, we are actually doing some of them a disservice.”

Here are a few of their figures that especially jumped out at me. First consider the average return on investment for a bachelor’s degree from a range of schools, either more or less competitive in their admissions, and either public or private–where the public schools are less expensive and thus have a higher return. On average–and there’s that word “average” again–the return to competitive public schools is more than twice as high as the return to noncompetitive not-for-profit private schools.



In discussing school-by-school data, they write: “[N]ot every bachelor’s degree is a smart investment. After attempting to account for in-state vs. out-of-state tuition, financial aid, graduation rates, years taken to graduate, wage inflation, and selection, nearly two hundred schools on the 2012 list have negative ROIs [returns on investment]. Students may want to think twice about attending the Savannah College of Art and Design in Georgia or Jackson State University in Mississippi.”

One problem that seems especially underestimated to me is whether a student who enrolls as a freshman is likely to complete a degree. The wage payoffs from dropping out are not encouraging, but any loans taken out while still a student will linger on. Here is a figure showing the graduation rate after six years. At noncompetitive schools the average graduation rate is a frighteningly low 35%. And the bottom of the “maximum/minimum” lines shows that schools vary widely on this dimension. Frankly, even if your plans are taking you to a college or university that is not very competitive, it’s not hard to look up the graduation rate, and there’s just no reason to choose an institution where only a third or less of the students will get a degree.

 

Fewer than 60 percent of students who enter four-year schools finish within six years, and for low-income students it’s even worse.

Owen and Sawhill write: “Again, the variation in this measure is huge. Just within Washington, D.C., for example, six-year graduation rates range from a near-universal 93 percent at Georgetown University to a dismal 19 percent at the University of D.C. Of course, these are very different institutions, and we might expect high-achieving students at an elite school like Georgetown to have higher completion rates than at a less competitive school like UDC. In fact, Frederick Hess and his colleagues at AEI have documented that the relationship between selectivity and completion is positive, echoing other work that suggests that students are more likely to succeed in and graduate from college when they attend more selective schools. At the most selective schools, 88 percent of students graduate within six years; at non-competitive schools, only 35 percent do.”

Finally, within schools there is a choice of field of study. Those who major in science, engineering, math, or business are likely to do much better than those who focus on arts or education.

Indeed, the wage premium for being in a science and technology industry can be more important than the gains from a four-year college degree. Owen and Sawhill write: “Anthony Carnevale and his colleagues at the Georgetown Center on Education and the Workforce use similar methodology to the Census calculations but disaggregate even further, estimating median lifetime earnings for all education levels by occupation. They find that 14 percent of people with a high school diploma make at least as much as those with a bachelor’s degree, and 17 percent of people with a bachelor’s degree make more than those with a professional degree. The authors argue that much of this finding is explained by occupation. In every occupation category, more educated workers earn more. But, for example, someone working in a STEM job with only a high school diploma can expect to make more over a lifetime than someone with a bachelor’s degree working in education, community service and arts, sales and office work, health support, blue collar jobs, or personal services.”
The recommendations that follow from this kind of essay are straightforward. Look at public schools, for their lower cost. Look at graduation rates, no matter what the selectivity of the institution. Even if math, business, computers, and science are not your first love, or your second or third, it is foolish not to spend some of your time in college getting a basic grounding in at least some of these areas.

For more on these issues, I recommend the article by Christopher Avery and Sarah Turner in the Winter 2012 issue of my own Journal of Economic Perspectives: “Student Loans: Do College Students Borrow Too Much—Or Not Enough?” In my blog post about that article here, I wrote:

“About 60% of high school students go on to college. For the purposes of a quick-and-dirty estimate, let’s say that it’s the top 60% by academic qualifications. Thus, if you are at, say, the 70th percentile of your high school class, you are in the middle of those going on to college. Given that many of those who go on to college don’t finish a degree, being at the 70th percentile of your high school class may mean that you can expect to be ranked in the bottom quarter of those who complete a college degree. Sure, some students will improve dramatically from high school to college, but it’s a statistical fact that half of college graduates will be below the median, and one-fourth will be in the bottom quarter, and especially if you are advising a large number of high school students, it’s unrealistic to tell each of them that they can all end up in the upper part of the college distribution.”

In fact, it’s unrealistic for high school guidance counselors and college admissions officers to tell everyone that they can all be average. Statistically speaking, they can’t.
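The quick-and-dirty percentile arithmetic in that quoted passage can be sketched in a few lines of Python. The rank-based completion rule below is my own simplifying assumption, cruder than reality, since degree completion does not track class rank so neatly:

```python
# Toy model: the top 60% of a high school class enrolls in college,
# and (crudely) the top 60% of entrants by rank finish a degree.
def graduate_percentile(hs_pct, going=0.60, completing=0.60):
    """Map a high-school class percentile (0-1) to a rough percentile
    among college graduates, or None if below a cutoff in this model."""
    if hs_pct < 1 - going:
        return None  # below the college-going cutoff
    entrant_pct = (hs_pct - (1 - going)) / going
    if entrant_pct < 1 - completing:
        return None  # enrolls but does not finish, in this crude model
    return (entrant_pct - (1 - completing)) / completing

# A 70th-percentile high school student lands in the middle of college
# entrants, but in the bottom quarter of eventual graduates.
print(graduate_percentile(0.70))  # about 0.17
```

In this sketch the 70th-percentile student maps to the 50th percentile of entrants but only about the 17th percentile of graduates, which is the "bottom quarter" point of the quoted passage.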

Will Productivity Growth Leave Us Motherless?

Productivity growth is the main determinant of an overall rise in the standard of living. And whether U.S. productivity growth in the future will be higher, as it was in the second half of the 1990s and into the early 2000s, or lower, as it was during the 1970s and 1980s, is a live controversy.
The Spring 2013 issue of the International Productivity Monitor has a group of lively and readable arguments on both sides.

As a starting point, here’s a summary figure from a recent Congressional Budget Office working paper by Robert Shackleton, “Total Factor Productivity Growth in Historical Perspective.”

Notice that productivity growth looks relatively high in the 1960s, then plummets around 1970 and, although the path is a bumpy one, stays relatively low until the mid-1990s. There is then a surge of productivity growth, which has since faded. Shackleton projects that future productivity growth will fall a bit short of its 1960-2010 average, but not by much. The symposium in the International Productivity Monitor offers voices on either side of his prediction.

Martin Neil Baily, James Manyika, and Shalabh Gupta lead off with “U.S. Productivity Growth: An Optimistic Perspective.” They write:

“[D]igital technology and the digital revolution are proceeding apace, and we also agree that this will eliminate many traditional jobs in manufacturing and elsewhere. But the offset is that innovation-led growth can create new jobs, new lines of business and new profit opportunities. Indeed, we saw this in the 1990s when innovation-led growth created new products and services and expanded output and new technologies made productivity gains possible, even though many of these were concentrated in certain sectors. Perhaps even more important, today there are large sectors of the economy that have continued to lag behind in productivity growth, notably health care, education, and construction. Adopting best practices and taking advantage of existing technologies can yield substantial productivity gains for the economy. Another important opportunity lies in energy. New technologies have unlocked reserves of natural gas and oil buried deep below the surface and made it possible to extract these reserves at favorable prices. While we do not ignore the environmental challenges inherent in accessing these reserves, we judge that these can be overcome and that natural gas at low prices and a more stable and secure source of oil are becoming available, a revolution that will have a large impact on U.S. productivity and GDP growth.”

Some of the innovations they emphasize include industrial robotics; 3D printing; the application of “big data” to product development, supply chains, and production; and the “internet of things,” in which low-cost sensors attached to inanimate objects are connected to the internet, allowing continuous adjustment as desired.

The pessimistic role is ably filled by Robert J. Gordon in his essay “U.S. Productivity Growth: The Slowdown Has Returned After a Temporary Revival.” He readily admits that many new technologies are available to help manufacturing, but memorably says that “manufacturing is performing a magnificent ballet on a shrinking stage.” In Gordon’s view, the short interlude of faster productivity growth from about 1994-2002 is now over.

To emphasize that recent productivity developments just aren\’t all that large compared with historical developments, Gordon writes:

“I have often posed the following set of choices. Option A is to keep everything invented up until ten years ago, including laptops, Google, Amazon, and Wikipedia, while also keeping running water and indoor toilets. Option B is to keep everything invented up until yesterday, including Facebook, iPhones, and iPads, but give up running water and indoor toilets; one must go outside to take care of one’s needs; one must carry all the water for cooking, cleaning, and bathing in buckets and pails. Often audiences laugh when confronted with the choice between A and B, because the answer seems so obvious.

“But running water and indoor toilets were not the only inventions between 1870 and 1970 that made it possible for U.S. labour productivity to grow at the 2.48 per cent rate … The list is endless – electric light, elevators that made possible the vertical city, electric machine tools and hand tools, central heating, air conditioning, the internal combustion engine that replaced the horse, commercial aviation, phonographs, motion pictures, radio, TV, and many others including fundamental medical inventions ranging from aspirin to penicillin. By comparison the computer revolution kick-started productivity growth between 1996 and 2004 for only eight years, compared to the 81 years propelled by the second Industrial Revolution of the late nineteenth century.”

Gordon further notes that the U.S. economy faces severe challenges in the years ahead: an aging population, high and rising levels of government debt, high levels of inequality, and a lack of improvement in educational attainment. He writes: “The United States reached an educational plateau more than 20 years ago. It is the only developed nation in which the 55-64 age group is as well-educated as the 25-34 age group. The United States has steadily slipped down the league table of post-secondary education completion and currently registers 15 percentage points lower than Canada.”

At least some of this dispute revolves around whether one sees the boost to productivity growth from information technology as a stage that is now largely behind us, or as a long-running play in which we may have only seen the first of several acts. David M. Byrne, Stephen D. Oliner, and Daniel E. Sichel tackle this question in their paper, “Is the Information Technology Revolution Over?” They write:

“Just as a long lag transpired from the development of the PC in the early 1980s to the subsequent pickup in labour productivity growth, there could be a lagged payoff from the development and diffusion of extensive connectivity, handheld devices, and ever-greater and cheaper computing power. In 1987, Robert Solow famously said “You see the computer revolution everywhere except in the productivity data.”… [C]omputers comprised too small a share of the capital stock in 1987 to have made a large contribution to overall productivity growth. But, several years later, the imprint of the revolution became very evident. In a parallel vein, one could now say: “You see massive connectivity and ever-cheaper computing power everywhere but in the productivity data.” Subsequently, those contributions could become evident in aggregate data.”

In a comment on their paper, Chad Syverson points out that the productivity growth in the early 20th century, following the spread of electrical power through the economy, had periods of faster and slower productivity growth. He writes:

“To be clear, I do not interpret this as predicting that labour productivity growth must again accelerate in 2013. … Rather, I simply make the point that we have been here before: sluggish labour productivity growth at the beginning of the diffusion of a general purpose technology (if one believes, as I do, that the 1890-1915 period for electrification is a reasonable analog to the 1970-1995 period for IT), a decade-long acceleration, and then another multi-year slowdown. In the electrification era, this was followed by another acceleration. Whether this will also occur for IT remains to be seen, but we know it has happened before. History shows that productivity growth driven by general purpose technologies can arrive in multiple waves; it need not simply arrive, give what it has, and fade away forever thereafter.”

As I contemplated these various perspectives, I found myself thinking back to my early years of running the Journal of Economic Perspectives, and in particular to a paper by Zvi Griliches in the Fall 1988 issue called “Productivity Puzzles and R&D: Another Nonexplanation.” (As with all JEP papers from the most recent issue back to the first, this paper is freely available on-line compliments of the American Economic Association.) Griliches makes the case that it’s hard to trace the productivity slowdown starting in the early 1970s to changes in R&D spending, and then ends with these thoughts:

“What is then the culprit? Why has productivity grown so slowly in the last decade or so? My prime suspect remains the rise in energy prices and its macro consequences. It is not just that many industries had to face new prices, change the way they used their factors of production, and scrap much of their now unprofitable capacity, but also a long worldwide recession induced by the fall in real wealth caused by OPEC, by the fall in aggregate demand caused by the governments trying to control the resulting inflation, and the subsequent fall in U.S. exports and the increase in import competition in the early 1980s as the result of rising dollar exchange rates. These factors combined together to produce one of the longest worldwide recessions and growth slowdowns from which the world may not yet have emerged. The resulting prolonged periods of capacity underutilization in many industries is the proximate cause of much of the observed declines and slowdowns in productivity growth. …”

“Of course, there may not be a single cause—one murderer. Perhaps it is more like the Murder on the Orient Express—they all did it! From the longer run point of view there are still lingering doubts about the crime itself: perhaps the 1970s were not so abnormal after all. Maybe it is the inexplicably high growth rates in the 1950s and early 1960s that are the real puzzle. This thought, however, is a sad one. I still hope that when the world finds its way out of this worldwide growth recession, we will find that productivity will bounce back, that it has not left us motherless forever.”

Of course, the aftermath of the Great Recession is still upon us, and it is far more severe than the period in the mid-1980s that Griliches was discussing. It seems to me very difficult to draw any firm conclusions about future productivity growth given that economic output was pumped up by the housing boom in the mid-2000s before being crushed by the economic and financial crisis that followed, and its continuing legacy of stagnant growth.

That said, when I discuss future economic opportunities for the U.S., I tend to focus on four areas.

  • First, the continuing growth in information technologies. Moore’s law has not yet slowed down, and the processing power of computer chips continues to double every couple of years.
  • Second, I confess that I personally lack the imagination to think in any full way about what will be possible a decade or so from now, when computing power may have doubled another five times, and thus will be 32 times more powerful than today’s. But when I think about the hardware and software that will need to be installed in homes and businesses, and the potential linkages to health care, energy conservation, education, and entertainment, it seems to me that sustained growth remains quite possible.
  • Third, the possibility of enormous reserves of natural gas at moderate prices has potentially enormous implications for the U.S. economy: it may be the flip side of how high energy prices rocked the U.S. economy in the 1970s and at other times.
  • Finally, the world economy seems poised for a period of faster-than-usual growth, led not only by China, India, and Brazil, but by a wide range of economies across Asia, Latin America, eastern Europe, and Africa. The U.S. economy is in many ways, by its institutions, connections, and culture, wonderfully positioned to benefit from facilitating and participating in this growth.
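The doubling arithmetic in the second bullet checks out, assuming the rough Moore's-law cadence of one doubling every two years:

```python
# Five doublings over a decade, at one doubling every two years.
years, doubling_time = 10, 2
doublings = years // doubling_time
multiple = 2 ** doublings
print(doublings, multiple)  # 5 doublings -> a factor of 32
```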

Of course, there are also any number of significant policy challenges in these issues, and in others. But my point is that we are not doomed to a future of low productivity growth. Instead, for better or worse, our economic future is in our own hands.

Mergers and Enforcement in 2012

Each year, the Federal Trade Commission and the Antitrust Division of the U.S. Department of Justice publish a report, required by the Hart-Scott-Rodino Act, on merger activity in the preceding year. The news from the 2012 report, which is here, is that there isn’t much news. The number of mergers in 2012 looks a lot like it did in 2011–that is, up from the depths of 2009, but not near the peak of 2007.

Under the Hart-Scott-Rodino legislation, the parties to any merger above a certain size (about $68 million in 2012, rising to $71 million in 2013) are required to notify the antitrust authorities when the merger is proposed. Here’s a figure showing merger transactions reported over the last 10 years.

And here’s a breakdown by size. There were 156 acquisitions in 2012 that exceeded $1 billion. The very small number of mergers below $50 million, of course, reflects the fact that such mergers need not be reported.

After receiving notification of a proposed merger, the antitrust authorities can allow it to happen, or they can make a “second request” for more information. In 2012, of the 1400 or so proposed mergers, there were second requests in 49 cases–a little over 3%. This percentage may look remarkably low! But remember that companies thinking about a merger have good knowledge of what criteria have been used to judge mergers in the past, and so mergers that would obviously lead to a lot less competition don’t get proposed in the first place. Also, remember that the job of the FTC and the U.S. Department of Justice is not to second-guess whether a merger is a wise and sensible business decision. In a market economy, the presumption is that businesses have the freedom to make investment decisions that may turn out to be foolish. Instead, the mission of the FTC is “to prevent business practices that are anticompetitive, deceptive, or unfair to consumers.”
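The second-request rate in that paragraph works out as follows (the "1400 or so" denominator is the approximate count of reported transactions cited above):

```python
# Share of reported 2012 transactions that drew a second request.
second_requests = 49
reported_transactions = 1400  # the "1400 or so" figure
rate = second_requests / reported_transactions
print(f"{rate:.1%}")  # 3.5%
```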

With the second request information in hand, the antitrust authorities can then decide whether to let the merger proceed as proposed, to require that the merger occur only if the company takes some additional action like divesting certain parts, or to block the merger outright. In 2012, the FTC brought 25 enforcement actions. None of them rise to the level of the epic lawsuits involving Microsoft, IBM, or AT&T. But they help to set the ground rules and expectations that competition is important, and where it seems threatened–even on everyday products like sliced bread or sticky notes–the government will push back. Here’s a summary of some of the main activities involving mergers in 2012.

“One of the notable matters handled by the Division was United Technologies Corporation’s $18.4 billion acquisition of Goodrich Corporation. The transaction was the largest merger in the history of the aircraft industry. As originally proposed, the acquisition would have resulted in higher prices, less favorable contractual terms and less innovation for several critical aircraft components. The Division challenged the merger in U.S. district court, and the subsequent settlement required UTC to divest assets used in the production of electrical power systems and aircraft engine control systems. … In addition to UTC, the Division challenged a number of mergers that would have had a direct effect on the pocketbooks of U.S. consumers. The Division challenged, and reached pro-competitive settlements in, mergers involving sliced bread (United States v. Grupo Bimbo, et al.), electricity (United States v. Exelon Corporation, et al.), health insurance (United States v. Humana Inc., et al.) and parking services (United States v. Standard Parking Corporation, et al.). Additionally, 3M Co. abandoned its proposed $550 million acquisition of Avery Dennison Corp.’s Office and Consumer Products Group, its closest competitor in the sale of adhesive-backed labels and sticky notes, after the Division informed the companies that it would file a lawsuit to block the deal. The transaction would have substantially lessened competition in the sale of labels and sticky notes, resulting in higher prices and reduced innovation for products that millions of American consumers use every day.”