In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”

Furman Reflects on Eight Years of Economic Policy-Making

Jason Furman was an economic policy insider through all eight years of the Obama administration, with “roughly the first half of them as Deputy Director of the National Economic Council and the second half of them as Chairman of the Council of Economic Advisers.” In the Arnold C. Harberger Distinguished Lecture on Economic Development at the UCLA Burkle Center for International Relations, Furman reflects on “The Role of Economists in Economic Policymaking” (April 27, 2017).

Furman has a pleasantly wry and discursive tone, with lots of anecdotes that mix together stories of success and failure. At least for readers like me, it’s a tone that carries far more appeal and persuasive power than feverish oratory set to the sound of trumpets. For example, here are some of Furman’s thoughts on the four things that economists have to offer when the answer to a policy question is not known.

The first is just describing the data. Describing the data does not tell you what caused what. It does not tell you what is the right policy or what is the wrong policy. But it can help you at least figure out what questions you should be asking, what areas you should be looking at to solve those questions, and what you can do about them. The data can be complicated. The government released two different measures of economic growth in the fourth quarter of 2016, one was 1.0 percent and one was 2.1 percent. The government also released two different measures of job growth in March 2017, 98,000 jobs and 472,000 jobs. Moreover, for the first quarter of this year the data tells a divergent story—with strong employment growth and strong “soft data”, like surveys of confidence, but much weaker GDP growth and weaker “hard data”, like actual sales numbers. I have spent a lot of time on these issues and developed a number of rules of thumb, almost all of which can be summarized by saying that you should look at a wide range of data, ideally smoothed over a longer period of time—without placing too much weight on any given indicator … Data description can be more sophisticated than just looking at single numbers or even trends. Sometimes it helps to decompose a number into its components to identify what is driving it, at least in an arithmetic sense.
The second set of techniques we use is economic theory. Economic theory can sometimes give you a very helpful answer to a question. One of the biggest insights in economics is that some items are more valuable to one person than to another person, and if those two people trade things, they can both be better off. These are the basic motivations for a market economy and the basis of the argument for expanding international trade. … Let me give you one: the example of the allocation of electromagnetic spectrum … Spectrum is a scarce resource and in many cases the rights to use it were allocated decades ago. Today, much of the best spectrum is reserved for the exclusive use of television broadcasters while users of smartphones and tablets often face frustrating and economically costly delays in accessing data because of crammed airwaves. In the Los Angeles area, for example, there were more than 25 broadcast stations, some of them with only a handful of viewers, most of whom could watch the shows on cable, online or through other means. A station with only a few thousand viewers might be worth $5 million, but at the same time may own a license for spectrum that a mobile broadband provider would be willing to buy for $50 million. We proposed to deal with this by setting up an incentive auction. Anyone who is a television broadcaster who wants to sell their spectrum can do so, and anyone who is a mobile broadband provider or anything else and wants to buy that spectrum can do so. The auction is entirely voluntary—a station will only sell if it is better off with cash than with spectrum, a mobile broadband provider will only buy if it (and presumably its consumers) benefit more from the spectrum than the cash they pay, and taxpayers get a cut of the difference in the bids to reflect the government’s role in organizing the process, including repackaging the spectrum into contiguous blocks to make it more valuable. Economic theory was enough to motivate and support this proposal—which ultimately resulted in $5 billion for taxpayers as well as profits for television broadcasters, mobile broadband providers, and benefits for their consumers. … In the last two decades, one of the fastest growing areas of economics has been “behavioral economics,” which relaxes the standard assumption that people are fully rational, and instead pays close attention to the ways people can be myopic, make decisions that depend on framing, and have limited attention spans or ability to incorporate information. One of the big successes of behavioral economics has been getting policy to focus less on the rate of return to savings and more on making it easier and more automatic to save.
The third set of techniques we use is empirical work, trying to understand the causes and effects of different economic phenomena. I want to give you an example of a mistake I was involved in because I did not think hard about causation. At the end of 2008, I was working with Congress on legislation to raise the tax on tobacco products in order to pay for an expansion of the Children’s Health Insurance Program (CHIP). The main proposal was to raise the tax on a pack of cigarettes from $0.39 per pack to $1.01 per pack. But we also needed to set tax rates on a wide range of other tobacco products including roll-your-own tobacco, pipe tobacco, small cigars, large cigars, and more. Amidst everything that was going on at the end of 2008 with the Great Recession I did not pay enough attention to this issue, even though I once sat through what felt like an endless meeting on the topic. What came out of that meeting was a proposal to raise the tax rate on roll-your-own tobacco by more than $20 a pound while leaving the tax rate on pipe tobacco largely unchanged. What followed was a huge decline in the sale of roll-your-own tobacco and a huge increase in the sale of pipe tobacco … It turns out that roll-your-own tobacco and pipe tobacco are highly substitutable—not because people have shifted to smoking pipes, but because you can still put pipe tobacco in a piece of paper, roll it up, and smoke it. This is not just a minor, technical observation. It turns out to be highly consequential for public health. I have estimated that the 2009 tobacco tax increase will reduce the number of premature deaths due to smoking by between 15,000 and 70,000 for each cohort. But it would have reduced them even more if we had harmonized the tax rate on different tobacco products, as we did in a subsequent proposal. In fact, economists in the Treasury Department estimated that the reduction in tobacco consumption under a harmonization proposal would be nearly two and a half times the size it would be under an increase in the cigarette tax alone that raises comparable revenue. 
 
Fourth, we sometimes combine empirical description, theory, and causation to build models that allow us to simulate the impact of different policies. For example, such simulations can tell you how a tax cut will be distributed or affect the economy, how much a particular health plan will cost per person covered, or how much carbon emissions will change as a result of different policy approaches. While any model is limited and imperfect, models can be especially useful in quantifying plausible tradeoffs in designing a policy and communicating the impact of policies to policymakers.
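
Furman’s first point, about smoothing noisy indicators over a longer window rather than reacting to any single release, is easy to make concrete. Here is a minimal sketch in Python; the monthly payroll numbers are invented for illustration, not actual BLS data:

```python
# Minimal sketch of the rule of thumb above: smooth noisy monthly data
# over a longer window instead of reacting to any single print.
# The monthly job-gain figures below are invented for illustration only.

monthly_job_gains = [230, 98, 310, 180, 472, 150, 210, 205, 175, 260, 190, 220]  # thousands

def trailing_average(series, window):
    """Average of the most recent `window` observations."""
    return sum(series[-window:]) / window

latest = monthly_job_gains[-1]
three_month = trailing_average(monthly_job_gains, 3)
twelve_month = trailing_average(monthly_job_gains, 12)

print(f"Latest month:     {latest:.0f}k")
print(f"3-month average:  {three_month:.0f}k")
print(f"12-month average: {twelve_month:.0f}k")
```

The longer the window, the less a single noisy print (like the 98,000 versus 472,000 job-growth readings Furman mentions) moves the overall picture.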

It’s interesting to me that of these four approaches, Furman lists basic description of data and basic theoretical insights first, ahead of the more complex modeling approaches. Here are a few more words from Furman:

In the Obama Administration I worked on everything from helping to prevent a second Great Depression to restructuring the post office—although I confess that we were only successful on one of these. Many of the topics I worked on are standard fare for economics, like the minimum wage, international trade, or environmental regulation. Others might be topics you do not associate with economists, like criminal justice reform, immigration, and sanctions. And on none of these did economists fully get their way which, ceteris paribus, is a good thing …

After the election many people expressed frustration that facts and analysis did not matter anymore. Paul Krugman cited the statistic that the three major networks spent a cumulative total of 32 minutes covering policy issues in the 2016 election. … I certainly share the sentiment that facts and analysis play too small a role not just in campaigns but in governing. And I think there is a role for many types of communication and persuasion. For myself, I am not someone who could give a rousing speech at a rally so I will stick to my comparative advantage. And I do believe that the right response to the lack of sufficient weight for facts and analysis is to embrace more facts and analysis.

For those who are interested in the Council of Economic Advisers and the role of academic economists in policy-making, here are a few more links: 

Snapshots of Merger and Acquisition Activity

Here’s a figure showing seven waves of corporate mergers in the US since 1851, according to the Institute for Mergers, Acquisitions & Alliances (a not-for-profit think tank that does research and workshops, and runs an educational certificate program, with main offices in New York, Vienna, Zurich, and Ho Chi Minh City). As the figure shows, the first three waves of mergers had substantial periods of time between them. The most recent four waves have all come since 1985.

Looking more closely at recent years, here is some additional information on these patterns. In this figure, the blue bars (and the left-hand axis) show the number of announced deals in each year, while the red line (and right axis) shows the total value of the deals. For example, the number of deals rose from 2011 to 2014, but not all that sharply, while the value of the deals jumped considerably.

In both this figure and the one above, the “wave” of mergers in the 1980s doesn’t look to me so much like a separate wave, but rather just a time of transition to the higher levels of mergers that would follow. In addition, from a longer historical perspective, it looks more as if the US economy has entered a period since the 1990s in which mergers are generally at high levels–although sometimes higher than others.

For another perspective, here is the number and value of mergers and acquisitions on a worldwide basis since 1985, which again shows either three “waves” or just a generally higher level of mergers since the late 1990s, depending on how you interpret it.

Of course, patterns like these don’t prove anything by themselves, and I try not to be someone who over-extrapolates. But I’ll just say that these merger and acquisition patterns are certainly consistent with a vision of the US and the global economy in which large companies seem relatively more focused on spending their funds and their managerial time on financial maneuvers that allow them to reduce their exposure to market pressures (either by combining with direct competitors or combining with firms along the supply chain), and thus relatively less focused on seeking to improve their competitive position by internal investments in capital, skills, and innovation.

"But When, Friend, Dost Thee Think?"

As final exams draw near at many colleges and universities (at least for those on a semester schedule), it seems appropriate to pass along this old story as told by Nicholas Murray Butler, President of Columbia University, at the 143rd Annual Banquet of the Chamber of Commerce of the State of New York, 1911 (pp. 43-55), and available through the magic of the HathiTrust Digital Library.

“I cannot help recalling an admirable story which is told of ROBERT SOUTHEY, once Poet Laureate of England. SOUTHEY was boasting to a Quaker friend of how exceedingly well he occupied his time, how he organized it, how he permitted no moment to escape; how every instant was used; how he studied Portuguese while he shaved, and higher mathematics in his bath.

“And then the Quaker said to him softly: ‘But when, friend, dost thee think?’

“My impression is that we need now some time to think, in order that reflection and study of principle, and grasp upon realities, may take the place of perpetual discussion and exposition, partly of what is, partly of what never was, partly of what never can be.”

Digital Forces and the Other 70% of the US Economy

For those feeling the need for a bracing dose of optimism about the prospects for US productivity growth, I recommend “The Coming Productivity Boom: Transforming the Physical Economy with Information,” written by Michael Mandel and Bret Swanson (March 2017), and published by the Technology CEO Council (which is more-or-less what it sounds like, a “public policy advocacy organization comprising Chief Executive Officers from America’s leading information technology companies, including Akamai, Dell, IBM, Intel, Micron, Oracle, Qualcomm and Xerox.”).

Mandel and Swanson split the US economy into two big chunks, which they call the “digital economy” and the “physical economy.” In the digital economy, which accounts for about 30% of US GDP, they argue that investment in information technology and productivity growth are doing fairly well. But in the physical economy, investment in information technology and productivity growth have been low. Thus, they argue that there is potential for rapid productivity growth in more than two-thirds of the US economy.

What are the industries in these two economic groupings? The digital economy is: “Computer and electronics production; publishing; movies, music, television, and other entertainment; telecom; Internet search and social media; professional and technical services (legal, accounting, computer programming, scientific research, management consulting, design, advertising); finance and insurance; management of companies and enterprises; administrative and support services.” The physical economy is: “All other industries, including agriculture; mining; construction; manufacturing (except computers and electronics); transportation and warehousing; wholesale and retail trade; real estate; education; healthcare; accommodations and food services; recreation.” Obviously, their notion of “physical” isn’t just material goods, but also includes a number of the biggest service industries like health care and education–as long as what is ultimately being delivered is not actually digital in nature.

After dividing up the economy in this way, here are a couple of the patterns they find. The first figure compares investment in information technology in the digital and physical sectors; the other compares productivity growth in these two sectors.

They argue in some detail that with appropriate investments in information technology, and the associated rethinking of production and output, substantial productivity growth is possible in manufacturing, transportation, education, health care, wholesale/retail sales, and other areas. One recent example is how information technology, by allowing dramatically better mapping of underground geological formations, has boosted production of natural gas from shale.

What about manufacturing in particular? They write:

Government data shows that most domestic factories have not added much to their stock of information technology equipment and software over the past 10 years. Between 2004 and 2014, manufacturing IT capital stock increased by just $46 billion, and more than 65% of that gain was in the computer and electronics industry. … Leaving out the computer and electronics industry, the capital stock of IT equipment in the rest of manufacturing has barely grown since 2000. The capital stock of software in manufacturing is, likewise, barely higher now than in 2000. …

What about robots? According to trade association data, 31,000 robots valued at $1.8 billion were shipped to North American customers in 2016. The spending on robots pales next to the $300 billion in industrial equipment and manufacturing buildings that corporations spent in the United States in 2016. In many cases, robots help the United States retain jobs. Consider a modern semiconductor fab, which has installed a proprietary network of wafer-handling robots. This system probably reduced the number of wafer-handling jobs by several dozen. Yet the robots allowed the new fab to be built in the United States, instead of in a low-cost overseas location, thus saving or creating some 1,200 high-paying American jobs.

The upside of robots in manufacturing spreading out into new industries is thus enormous. That’s crucial for increasing the productivity of existing manufacturing processes and creating new processes altogether. In many ways manufacturing is the classic case where atoms will be boosted by bits. The process is already underway—but the diffusion of the Industrial Internet across the manufacturing sectors will take place over the next two decades. New, IT-enabled product categories, combined with design and customization that increasingly treats manufacturing as a service, will not necessarily bring back “old jobs” but instead create new and better ones. …

Economists are constitutionally suspicious of claims that large gains are out there, just waiting to be achieved. As economists like to say, “If there’s a $20 bill out there on the sidewalk, why hasn’t someone already picked it up?” Mandel and Swanson offer this answer:

“Why has it taken so long? It sounds like a tautology, but industries whose output is information are inherently more amenable to digitization. … But when we examine industries whose output is primarily physical, the game gets far more difficult. To digitize a complex physical object such as a spinning jet engine, an unknown natural environment such as a buried oil field, or a rapidly changing manmade environment such as the traffic and work patterns of a large city, requires a level of sophisticated technology that was not available until fairly recently. Low-cost sensors that can be widely distributed; high-bandwidth wireless networks capable of collecting the information from the sensor; computing systems capable of analyzing terabytes of data in real time; artificial vision that can make sense of images and artificial intelligence that can make decisions—each of these are necessary parts of applying IT to the physical industries. Continued advances and price reductions in sensing, cloud computing, and broadband connectivity, combined with new thinking and new focus about how to apply these technologies to physical problems, are finally about to open up the other four-fifths of the economy to the magical laws of Moore and Metcalfe. … Moore’s law, named after Intel founder Gordon Moore, refers to the tendency of silicon microchips to roughly double in cost performance (because of the industry’s remarkable ability to scale transistors and other chip features) every 18 months to two years. … Metcalfe’s law refers to the observation by Ethernet inventor Robert Metcalfe that the power or value of networks rises not by the number of connected nodes but by something resembling the square of the number of nodes. This is one reason “network effects” can be so powerful.”
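
For readers who want rough numbers on the two “laws” invoked in that passage, here is a minimal sketch; the doubling period and the node counts are illustrative assumptions of mine, not figures from the Mandel-Swanson report:

```python
# Illustrative sketch of Moore's law and Metcalfe's law as described in the quote above.
# The doubling period and the node counts are assumptions chosen for illustration.

def moores_law_improvement(years, doubling_period_years=2.0):
    """Cost-performance multiple after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

def metcalfe_value(nodes):
    """Metcalfe's law: network value scales roughly with the square of the node count."""
    return nodes ** 2

print(f"Chip cost-performance after 10 years: ~{moores_law_improvement(10):.0f}x")
# Doubling a network's size roughly quadruples its value under Metcalfe's law:
print(metcalfe_value(2_000) / metcalfe_value(1_000))  # -> 4.0
```

Under these assumptions, a decade of Moore’s law yields roughly a 32-fold improvement in cost performance, and doubling a network’s size roughly quadruples its value.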

My own sense is that the answer to “why has it taken so long” goes beyond technology. In the first decade of the 21st century, many firms in the physical economy were often focused on how to cut costs by outsourcing or setting up global production chains. The leading providers in industries like health care and education seem exceptionally set in their traditional ways, with practices that are hard for either internal executives or external entrepreneurs to uproot. The Great Recession discouraged firms across the economy from investing for a time. But it does seem at least imaginable to me that these patterns are shifting. The most important infrastructure for encouraging economic productivity during the first half of the 21st century won’t be roads and bridges, but a combination of secure information and communication networks and a reliable energy supply.

Mandel in particular has a track record here. Back in the mid-1990s, when he was writing for Business Week, he wrote about the potential for real economic growth in what he called the “new economy” (for example, here) before it became evident to everyone else in the official statistics.

Demand and the Frac Sand Example

Most people who teach economics are on a continual lookout for current examples of supply and demand. Otherwise, you end up falling back on hypothetical goods like “widgets” and “leets” (which is “steel” spelled backward), at which point you can almost see the life and brightness fade out of the eyes of students. A nice lively example of shifts in demand comes from the demand for sand used in hydraulic fracking operations. As the Wall Street Journal recently reported, “Latest Threat to U.S. Oil Drillers: The Rocketing Price of Sand: The market for a key ingredient in fracking is again surging” (by Christopher M. Matthews and Erin Ailworth, March 23, 2017).

A couple of years ago, the USGS published “Frac Sand in the United States—A Geological and Industry Overview,” by Mary Ellen Benson and Anna B. Wilson, with a section on “Frac Sand Consumption History” contributed by Donald I. Bleiwas (Open File Report 2015-1107, posted July 30, 2015). The report includes this useful figure, in which the bars show the metric tons of sand used for fracking (measured on the left axis); the numbers above the bars show the number of horizontal drilling rigs in operation in the US during any given week of the year; and the line shows the value of the sand (right axis).

The basic lesson is that fracking is up and it is using a lot more sand. If you look a little more closely at the years from 2010 to 2012, you can see that the number of horizontal drilling rigs rose from 822 to 1,151, but the quantity of sand being used more than doubled. This data can be updated a bit. According to the Baker Hughes North American Rotary Rig Count, the number of horizontal rigs dropped in 2015 and stayed fairly steady at this lower level in 2016, as the price of oil dropped, but more recently the number of horizontal rigs is rising again.

For an update through 2016 on production levels and price, the USGS publishes an annual fact sheet on various minerals. Fracking sand falls into the category of “Industrial Sand and Gravel,” of which more than 70% is used for fracking. Here’s the relevant table from the 2017 fact sheet. Thus, back in 2012 there were about 50 million tons of total sand production, with about 70% of it going to fracking. By 2014, output of sand had doubled, mostly due to increased demand from fracking. The price rose from $52 per ton in 2010 to $106 per ton in 2014. Then output and price sagged in 2015 and 2016.
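
A back-of-the-envelope calculation using the round numbers above suggests how sharply the market value of frac sand rose. I am treating the cited figures as approximations, using the 2010 price as a rough proxy for 2012 and assuming 2014 output at double the 2012 level:

```python
# Back-of-the-envelope on the USGS figures cited above.
# All numbers are rough approximations; assumptions flagged in comments.

tons_2012 = 50e6           # ~50 million tons of industrial sand and gravel in 2012
tons_2014 = 2 * tons_2012  # output roughly doubled by 2014 (per the text)
frac_share = 0.70          # roughly 70% of the category goes to fracking

price_2010 = 52.0          # $/ton in 2010 (used here as a rough proxy for 2012)
price_2014 = 106.0         # $/ton in 2014

value_2012 = tons_2012 * price_2010 * frac_share
value_2014 = tons_2014 * price_2014 * frac_share

print(f"Rough frac-sand market value, 2012: ${value_2012/1e9:.1f} billion")
print(f"Rough frac-sand market value, 2014: ${value_2014/1e9:.1f} billion")
print(f"Implied increase: ~{value_2014/value_2012:.1f}x")
```

On these rough assumptions, the frac-sand market went from roughly $2 billion to roughly $7 billion in two years, which helps explain why suppliers expanded so quickly.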

The Wall Street Journal story reports rising prices for sand this year. It also notes: “In Louisiana, Chesapeake Energy Corp. recently pumped a record 50.2 million pounds of sand into a horizontal well roughly 1.8 miles long, piquing the interest of some rivals who are now weighing whether they can do the same.” And here’s a figure showing quantities of sand being used in different fields.

No world-changing lessons here. The higher prices for sand don’t seem likely to make much difference in the quantity of fracking, at least for now, because sand is a fairly small slice of the overall cost. Environmental concerns are already being raised in some US locations about the extent of sand-mining, and those concerns are likely to become more acute as the demand for fracking sand rises. But my guess is that if it becomes difficult to increase the supply of fracking sand, then those doing the fracking will either find ways to economize on sand or will figure out some alternative substances that would fill a similar purpose.

Postscript: Some readers might want to revisit my post on “Demand for Sand” (April 16, 2014), which has some additional background on international demand for sand (mostly for concrete and construction purposes) and some environmental effects of sand-mining.

The Global Systemically Important Banks: An Update

“The large global banks were at the heart of the global financial crisis. In response to the crisis, the international Financial Stability Forum was upgraded to the Financial Stability Board (FSB) in 2009, with the full participation of finance ministers and even heads of government. The newly established FSB then published an integrated set of policy measures, such as capital surcharges and resolution plans, to address the systemic and moral hazard risks associated with global systemically important banks (G-SIBs). Eight years later, it is time to take stock of the impact of these measures. We answer three questions on what happened to the G-SIBs. First, have they shrunk in size? Second, are they better capitalised? Third, and in reference to the reported end of global banking, have they reduced their global reach?”

Thus writes Dirk Schoenmaker in “What happened to global banking after the crisis?”, written as a Policy Contribution for the Bruegel think-tank (2017, Number 7). The short answers to his three questions are: 1) No; 2) Yes; 3) Yes.

Schoenmaker offers a list of 33 global systemically important banks–that is, the banks that governments feel compelled to rescue out of fear that their failure could crash the financial systems of a country or two. Some examples of the larger ones include BNP Paribas, Deutsche Bank, and Groupe Crédit Agricole in Europe; Bank of America, JP Morgan Chase, Citigroup, and Wells Fargo in the US; Bank of China, China Construction Bank, and Industrial and Commercial Bank of China, all of course in China; HSBC and Barclays in the UK; and Mitsubishi UFJ FG and Sumitomo Mitsui FG in Japan.

Here’s a summary chart, with the top panel showing assets, reserves, and international reach of these 33 G-SIBs in 2007 and in 2015. From the bottom line of the two panels, the total assets of these institutions rose slightly from 2007 to 2015; their capital reserves rose fairly dramatically; and the share of their business done at home rather than regionally or globally declined.

Schoenmaker has lots more to say about these patterns. For example, assets for these institutions have risen in China, Japan, and the United States, but fallen in the euro area and the UK. Capital reserves are quite a bit higher in the US and China than in the euro area. Bottom line: “We conclude that reports on the death of global banking are greatly exaggerated.”

When Restrictions on House-Building Meet Growing Demand: Interview with Joseph Gyourko

For a few years now, Joseph Gyourko has been doing research and writing essays focused on housing supply, and in particular on how local restrictions that hinder home-building, colliding with growing demand in those localities, lead to large differences in prices across markets. In an interview with Hites Ahir, Gyourko lays out these views in the April 2017 issue of the Global Housing Watch Newsletter. (The newsletter is produced by Ahir, who works for the International Monetary Fund, but it is not an official IMF document.)

“In forthcoming work, Ed Glaeser and I conclude that most housing markets in the interior of the country function so that the price of housing is no more than the sum of its true production costs (the free market price of land plus the cost of putting up the structure) plus a normal entrepreneurial profit for the homebuilder. That is what we teach should happen in our introductory microeconomics courses—namely, that price paid by consumers in the market should equal the real resource cost of producing the good (housing in this case). These well-functioning housing markets exist in a broad swath of the country outside of the Amtrak Corridor in the Northeast (Washington, D.C. to Boston) and the major West Coast markets from Seattle all the way down to San Diego. The bulk of the population lives in these well-functioning markets, by the way. They just are not focused on by the media. …

“Restrictions began to be imposed in many west coast markets in the 1970s, with east coast markets in the northeast following the next decade. Thus, it is the people who owned in those markets at those times who enjoyed the most appreciation in their homes. Those people tend to be senior citizens today, and even if they do not earn relatively high incomes, they are wealthy because of the real capital gains on their homes. …

“If they limit supply sufficiently relative to demand, then the existing housing units will be rationed by price. Richer households will be able to bid more to live in their preferred markets, so we get the sorting of the rich into the higher priced coastal markets. … The divergence in home prices between low cost, elastically supplied markets and high cost, inelastically supplied markets has been growing over time—since 1950, at least. … This can generate a spiral up in prices, and it can last a long time. In 1950 for example, housing in the most expensive metropolitan areas was about twice as costly as in the average market. At the turn of the century in 2000, the most expensive metros were at least four times more costly than the average market. I expect that gap to widen over the coming decade. …

“In a well-functioning market with elastic supply, prices should be equal to fundamental production costs. In most American markets, that is no more than $200,000-$250,000.”
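
The mechanism Gyourko is describing can be captured in a small sketch: with elastic supply, the price settles near production cost, but when the housing stock is effectively fixed, the existing units are rationed by price and the marginal buyer’s willingness to pay sets the price. All of the numbers below are hypothetical, chosen only to illustrate the logic:

```python
# Hypothetical sketch of price-setting in elastic vs. supply-constrained housing markets,
# in the spirit of Gyourko's description above. All numbers are invented for illustration.

production_cost = 225_000  # roughly the $200,000-$250,000 range Gyourko cites

# Willingness to pay of would-be buyers in a constrained coastal market (hypothetical):
willingness_to_pay = [900_000, 750_000, 600_000, 480_000, 400_000, 350_000, 300_000, 250_000]

def market_price(wtp, housing_stock, cost):
    """Elastic supply: price ~ production cost. Fixed stock: the marginal buyer sets the price."""
    if housing_stock >= len(wtp):              # supply can expand to meet all demand
        return cost
    bids = sorted(wtp, reverse=True)
    return max(bids[housing_stock - 1], cost)  # price rations the fixed stock

print(market_price(willingness_to_pay, housing_stock=20, cost=production_cost))  # 225000
print(market_price(willingness_to_pay, housing_stock=3,  cost=production_cost))  # 600000
```

Shrinking the available stock from 20 units to 3 moves the price from production cost up to whatever the third-highest bidder will pay, which is the sorting-by-income pattern Gyourko describes.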

Here’s a figure, which is a version of a diagram that appears in “Superstar Cities,” by Joseph Gyourko, Christopher Mayer, and Todd Sinai in the American Economic Journal: Economic Policy, November 2013 (5:4, pp. 167-99). The figure shows the distribution of average house values across US metropolitan statistical areas (MSAs) in 1950 and 2000. Average prices are higher in 2000 than in 1950, which isn’t surprising, since the average house is bigger, too. But the key insight here is that the distribution of average housing prices across cities in 1950 is more bunched together, while the distribution across cities in 2000 is much wider, with a long right-hand tail.

This wider distribution of housing prices across cities has broader trade-offs. It becomes harder for those living in cities with lower housing prices to relocate to cities with higher housing prices. It creates a greater level of economic segregation, because those with higher incomes are more likely to end up living in a smaller number of cities where they can afford the high housing prices, while those with middle-level or lower-level incomes struggle to find alternatives. It adds a dose of bitterness to local arguments over zoning, because current owners of high-priced houses (whether they have owned a house for decades or bought more recently) live in fear that an increase in the supply of housing–or even building more moderately-sized and moderately-priced housing–might in some way limit the rise in the price of their own home. Gyourko says:

“I am a traditional housing and urban economist in this regard. New construction imposes real costs on its immediate neighbors and the broader community. Pollution is one, with congestion (on the roads, schools, etc.) being another. It is economically efficient to internalize these negative externalities by ‘taxing’ the developer in some way so the full costs of development are priced and paid for by the builder. What I am not in favor of is excessive regulation that imposes costs much higher than could be justified by spillovers. That results in too little development and creates affordability problems for the poor and the middle class.”

In this hyperpartisan time in which we live, I feel compelled to add that the goal of rolling back local restrictions that limit house-building is advocated not just by the usual free-market suspects who tend to lean right politically; it was also supported by the Obama administration. For example, President Obama said in a speech to the US Conference of Mayors in January 2016: “We can work together to break down rules that stand in the way of building new housing and that keep families from moving to growing, dynamic cities.” The Obama administration also published a “Housing Development Toolkit” (September 2016) to provide strategies for overcoming excessive local barriers to home-building.

Seeking Consensus on Effects of Pre-Kindergarten Programs

When describing how many economists at least try to think about the problems of the world, I often repeat the title of a lovely little book that Alan Blinder wrote back in 1988: Hard Heads, Soft Hearts: Tough-Minded Economics for a Just Society. In the context of expanding provision of pre-kindergarten programs, even a modest degree of soft-heartedness cries out for trying to assist small children who, through no action or fault of their own, otherwise seem likely to start school well behind their peers. But hard-headedness (and curiosity) demands that such programs be evaluated.

For a readable overview of what is actually known, rather than hoped, a useful starting point is “The Current State of Scientific Knowledge on Pre-Kindergarten Effects,” which was put together by a “Pre-Kindergarten Task Force of interdisciplinary scientists” convened by the Brookings Institution and the Duke University Center for Child and Family Policy. The participants are: “Deborah A. Phillips of Georgetown University, Mark W. Lipsey of Vanderbilt University, Kenneth A. Dodge of Duke University, Ron Haskins of the Brookings Institution, Daphna Bassok of the University of Virginia, Margaret R. Burchinal of the University of North Carolina at Chapel Hill, Greg J. Duncan of the University of California-Irvine, Mark Dynarski of the Brookings Institution, Katherine A. Magnuson of the University of Wisconsin-Madison, and Christina Weiland of the University of Michigan.”

A short summary of the findings would be that a number of pre-K programs have a short-term effect in helping children be more ready for kindergarten, especially for children from disadvantaged backgrounds. But not all programs show such effects in the short-term, and whether the effects continue or fade after a few years of schooling is quite unclear. Here is their consensus statement:

“The Task Force reached consensus on the following findings, conclusions, and recommendation:

“Studies of different groups of preschoolers often find greater improvement in learning at the end of the pre-k year for economically disadvantaged children and dual language learners than for more advantaged and English-proficient children.

“Pre-k programs are not all equally effective. Several effectiveness factors may be at work in the most successful programs. One such factor supporting early learning is a well implemented, evidence-based curriculum. Coaching for teachers, as well as efforts to promote orderly but active classrooms, may also be helpful.

“Children’s early learning trajectories depend on the quality of their learning experiences not only before and during their pre-k year, but also following the pre-k year. Classroom experiences early in elementary school can serve as charging stations for sustaining and amplifying pre-k learning gains. One good bet for powering up later learning is elementary school classrooms that provide individualization and differentiation in instructional content and strategies.

“Convincing evidence shows that children attending a diverse array of state and school district pre-k programs are more ready for school at the end of their pre-k year than children who do not attend pre-k. Improvements in academic areas such as literacy and numeracy are most common; the smaller number of studies of social-emotional and self-regulatory development generally show more modest improvements in those areas.

“Convincing evidence on the longer-term impacts of scaled-up pre-k programs on academic outcomes and school progress is sparse, precluding broad conclusions. The evidence that does exist often shows that pre-k-induced improvements in learning are detectable during elementary school, but studies also reveal null or negative longer-term impacts for some programs.

“States have displayed considerable ingenuity in designing and implementing their pre-k programs. Ongoing innovation and evaluation are needed during and after pre-k to ensure continued improvement in creating and sustaining children’s learning gains. Research-practice partnerships are a promising way of achieving this goal. These kinds of efforts are needed to generate more complete and reliable evidence on effectiveness factors in pre-k and elementary school that generate long-run impacts.

“In conclusion, the scientific rationale, the uniformly positive evidence of impact on kindergarten readiness, and the nascent body of ongoing inquiry about long-term impacts lead us to conclude that continued implementation of scaled-up pre-k programs is in order as long as the implementation is accompanied by rigorous evaluation of impact.”

I suppose it is almost inevitable that a group of academics ends up advocating additional research. The volume follows this consensus statement with a number of short and readable individual essays on various aspects of pre-K education. Here, I’ll add a few thoughts of my own.

1) This summary of the state of evidence about pre-K programs is quite mainstream. Indeed, I’ve previously posted here and here about summaries that reached similar conclusions.

2) Some of the highest estimates of returns to pre-K education are probably not generalizable to a broader program. For example, in a short chapter on “The Costs and Benefits of Scaled-Up Pre-Kindergarten Programs” in this volume, Lynn A. Karoly writes (footnotes omitted):

“Estimates of the high returns from investing in high-quality pre-k programs largely rest on two pre-k program impact evaluations: the Perry Preschool program where the returns based on the age-40 follow-up are estimated to be as high as 17-to-1, and the Chicago Child-Parent Centers (CPC) program, where the impact estimates as of the age-26 followup indicate returns of close to 11-to-1. The Perry Preschool program, while well known, is also acknowledged to be a small-scale demonstration program, implemented in the 1960s with exceptionally high standards and serving a highly disadvantaged population of children at a time period when children in the control condition do not have alternative pre-k options. For these reasons, the estimated returns represent more of a proof of the principle that high-quality pre-k programs can produce positive economic benefits, rather than definitive evidence of the economic returns that would be expected from scaled-up programs.

“The Chicago CPC program is arguably a scaled-up part-day program operated by the Chicago Public School district and targeted to children in low-income neighborhoods … However, the Chicago CPC program may also be viewed as exceptional because the program evaluation focuses on a cohort of children that attended the program in the early 1980s, with impacts that may not be replicated in today’s environment.”

Also, Karoly estimates the cost of a “school-day” pre-K program–that is, 6 hours per day–at about $8,000 per student, or a little higher if the teachers are paid typical kindergarten wages.

3) One of the most careful and comprehensive studies of pre-K education, a 2013 evaluation of the federal Head Start program, found a nearly complete fade-out of any positive effect of pre-K by third grade. The study said:

“In summary, there were initial positive impacts from having access to Head Start, but by the end of 3rd grade there were very few impacts found for either cohort in any of the four domains of cognitive, social-emotional, health and parenting practices. The few impacts that were found did not show a clear pattern of favorable or unfavorable impacts for children.”

4) I have started to wonder if effective interventions for disadvantaged children need to come considerably earlier than pre-K. Some evidence suggests that for a number of disadvantaged children, the gap in their cognitive skills emerges somewhere between 9 months and two years of age. Back in 2013, Richard V. Reeves, Isabel Sawhill, and Kimberley Howard described this evidence in an essay called “The Parenting Gap,” in which they pointed out that the federal government spends 25 times as much on Head Start as it does on programs targeted at the parents of those same children during their first few years of life. In other work, Douglas Almond and Janet Currie have argued that differences in cognitive abilities and other developmental measures can arise even before birth in “Killing Me Softly: The Fetal Origins Hypothesis,” published in the Summer 2011 Journal of Economic Perspectives (where I work as Managing Editor).

In short, I suspect that hard-headed analysis, together with a soft-hearted concern for disadvantaged children, should push us toward interventions that assist disadvantaged children well before they are old enough for a pre-K program.

Don’t Fear the (McCormick) Reaper

The McCormick reaper is one of the primary labor-saving inventions of the early 19th century, and at a time when many people are expressing concerns about how modern machines are going to make large numbers of workers obsolete, it’s a story with some lessons worth remembering. Karl Rhodes tells the story of the arguments over who invented the reaper and the wars over patent rights in “Reaping the Benefits of the Reaper,” which appears in the Econ Focus magazine published by the Federal Reserve Bank of Richmond (Third/Fourth Quarter 2016, pp. 27-30). Here, I’ll lay out some of the lessons that caught my eye, which in places will sound similar to modern issues concerning innovation and intellectual property.

The reaper was important, but it didn’t win the Civil War

The reaper was a horse-drawn contraption for harvesting wheat and other grains. Rhodes quotes the historian William Hutchinson, who wrote: “Of all the inventions during the first half of the nineteenth century which revolutionized agriculture, the reaper was probably the most important,” because it removed the bottleneck of needing to hire lots of extra workers at harvest time, and thus allowed a farmer “to reap as much as he could sow.”

But somewhere along the way, I had imbibed a larger myth: that the labor-saving properties of the reaper helped the North to win the Civil War by allowing young men who would otherwise have been needed for the harvest to become soldiers. However, Daniel Peter Ott, in a 2014 PhD dissertation, “Producing a Past: Cyrus McCormick’s Reaper from Heritage to History,” traces the claim that the reaper helped to win the Civil War back to promotional materials for the centennial celebration of the reaper in 1931, produced by International Harvester, which included this statement:

“Secretary of War Stanton said: ‘The reaper is to the North what slavery is to the South. By taking the place of regiments of young men in western harvest fields, it released them to do battle for the Union at the front and at the same time kept up the supply of bread for the nation and the nation’s armies. Thus, without McCormick’s invention I feel the North could not win and the Union would have been dismembered.’”

Ott argues persuasively that this quotation is incorrect. Apparently, Edwin Stanton was a patent attorney before he became Secretary of War for President Lincoln, and he was arguing in court in 1861 that McCormick’s reaper deserved an extension of his patent term. Ott quotes a 1905 biography of Edwin Stanton, written by Frank A. Flower, which included the following quotation attributed to an 1861 patent case, in which Stanton argued:

“The reaper is as important to the North as slavery to the South. It takes the place of the regiments of young men who have left the harvest fields to do battle for the Union, and thus enables the farmers to keep up the supply of bread for the nation and its armies. McCormick’s invention will aid materially to prevent the Union from dismemberment, and to grant his prayer herein is the smallest compensation the Government can make.”

There doesn’t seem to be any documentary evidence directly from the 1860s on what Stanton said. But it appears plausible that he argued as a patent attorney in 1861 that the McCormick reaper deserved a patent extension because it could help to win the Civil War, and that this comment was later transmuted by a corporate public relations department into a claim that Secretary of War Stanton credited the reaper with actually winning the Civil War.

New innovations can bring conflict over intellectual property

Obed Hussey patented a reaper in 1833. Cyrus McCormick patented a reaper in 1834. By the early 1840s, Hussey had sold more reapers than McCormick. But the idea of a mechanical reaper had been in the air for some time. Joseph Gies offered some background in “The Great Reaper War,” published in the Winter 1990 issue of Invention & Technology. Gies wrote:

“In 1783 Britain’s Society for the Encouragement of Arts, Manufactures, and Commerce offered a gold medal for a practical reaper. The idea seemed simple: to use traction, via suitable gearing, to provide power to move some form of cutting mechanism. By 1831 several techniques had been explored, using a revolving reel of blades, as in a hand lawn mower; a rotating knife-edged disk, as in a modern power mower; and mechanical scissors. Robert McCormick had tried using revolving beaters to press the stalks against stationary knives. Cyrus McCormick and Obed Hussey both chose a toothed sickle bar that moved back and forth horizontally. Hussey’s machine was supported on two wheels, McCormick’s on a single broad main wheel, whose rotation imparted motion to the cutter bar. Wire fingers or guards in front of the blade helped hold the brittle stalks upright.”

Hussey and McCormick then both argued in 1848 for their patent rights to be extended for a first time. By 1861, at the time patent attorney Stanton made his comments about the importance of the reaper, Hussey’s patent rights had been extended for a third time, and McCormick wanted his patent rights extended, too. Indeed, the New York Times editorialized on July 6, 1860, against giving McCormick another extension of the patent, writing:

“A few words in reply to the Tribune-McCormick “facts,” which are assumed to justify the proposed extension. The first two of these may be summed up as follows: The patentee failed to secure extensions of his first two patents — and, therefore, his original invention and improvements covered by his first and second patents are free to all mankind. Granted; and we claim that for the precise reasons which led to the rejection of his petitions for the extension of his original patents, his now pending petition should be treated in the same manner. Those reasons were that the “improvements” are non-essential, and that the patentee has received ample remuneration for his invention.

“But, says Mr. MCCORMICK, “OBED HUSSEY obtained an extension of his patent, and therefore mine should also be extended.” Two wrongs never yet made one right; and it was the fact that HUSSEY’s patent was renewed by the late acting Commissioner, unjustifiably, which, more than anything else, induced Congress to take the McCormick case out of his hands. The public were wronged in the Hussey case; and it is therefore still more incumbent upon the Patent Office to see to it that another wrong is not added to the list. … It is notorious that MCCORMICK will have made over $2,000,000 on his reapers by the time his present patents shall have expired, and as the new Commissioner of Patents has a reputation for honesty to maintain, there is little danger that the monopoly will be renewed.”

Indeed, the arguments over reaper patents from the 1840s up into the early 1860s were some of the early battles in what is sometimes called “the first patent litigation explosion.” One early result was that the Patent Act of 1861 removed the discretionary power of the Patent Office to extend patents by seven years at a time, and instead created the 17-year patent term.


Innovations can take decades before coming into widespread use 

Although versions of the reaper were patented in the first half of the 1830s, they didn’t come into widespread use until the 1850s and later. Why not? A body of research in economic history tackled that question several decades ago, and Rhodes provides a nice compact summary of how thinking on this question evolved:

The traditional explanation for this surge in sales was the rapid rise of global wheat prices during the Crimean War, which limited grain exports from Russia and other nations in the Black Sea region. But in the 1960s, Stanford University economist Paul David offered another primary explanation: He argued that before the mid-1850s, most American farms were simply too small to make reapers practical.

The average farm size was growing, however, as grain production shifted from the East to the Midwest, where arable land was fresh, fertile, and relatively flat. More importantly, the farm-size threshold for the reaper to be practical was declining as the price of labor — relative to the price of reaping machines — increased in the Midwest due to higher demand for workers to build railroads and other infrastructure throughout the fast-growing region, David wrote.

In the 1970s, Alan Olmstead, an economist at the University of California, Davis, agreed that factor prices and farm sizes were important, but he argued that … [f]armers often cooperated to use reapers on multiple farms, a possibility that David had excluded from his model. Olmstead also faulted David for assuming that there were no significant advances in reaper technology between 1833 and the 1870s. This assumption that the reaper was born fully developed grew into a “historical fact,” Olmstead wrote, even though it ignored “extremely knowledgeable historians who emphasized how a host of technological changes transformed an experimentally crude, heavy, unwieldy, and unreliable prototype of the 1830s into the relatively finely engineered machinery of the 1860s.”
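
Paul David’s argument boils down to a breakeven calculation: a farmer adopts the reaper when the acreage harvested is large enough that the labor saved covers the machine’s annualized cost. Here is a minimal sketch of that threshold logic, and of Olmstead’s point that sharing a machine across farms lowers each farm’s effective cost; all of the dollar figures and acreages are hypothetical, not historical estimates:

```python
# Hypothetical illustration of the farm-size threshold logic in Paul David's account,
# and of Olmstead's point that sharing a reaper lowers the effective threshold.
# All dollar figures and acreages are invented for illustration.

def breakeven_acres(machine_cost, years_of_service, labor_saved_per_acre, farms_sharing=1):
    """Smallest harvested acreage at which the reaper pays for itself each season."""
    annual_cost_per_farm = machine_cost / years_of_service / farms_sharing
    return annual_cost_per_farm / labor_saved_per_acre

# Hypothetical: a $120 machine lasting 10 seasons, saving $1.50 of harvest labor per acre.
print(breakeven_acres(120, 10, 1.50))                   # 8.0 acres
# If harvest wages rise relative to machine prices, the threshold falls:
print(breakeven_acres(120, 10, 2.50))                   # 4.8 acres
# Olmstead: two farms sharing one machine halve the threshold each faces:
print(breakeven_acres(120, 10, 1.50, farms_sharing=2))  # 4.0 acres
```

The same arithmetic shows why rising Midwestern wages, larger farms, and cooperative sharing all pushed in the same direction: each lowers the acreage at which the machine pays off.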

For those who would like to go back to original sources, Paul David’s article was “The Mechanization of Reaping in the Ante-Bellum Midwest,” published in a 1966 volume edited by Henry Rosovsky, Industrialization in Two Systems: Essays in Honor of Alexander Gerschenkron, pp. 3-39. Alan L. Olmstead’s essay is “The Mechanization of Reaping and Mowing in American Agriculture, 1833-1870,” in the June 1975 issue of the Journal of Economic History, pp. 327-352.

The slow diffusion of technology is a widespread finding: for example, I’ve posted on this blog about the examples of tractors and dynamo-generated electrical power in “When Technology Spreads Slowly” (April 14, 2014). It’s a useful warning and reminder to those who seem to believe, often implicitly, that the internet and data revolution is pretty much over at this point, and that we’ve seen pretty much all the changes we’re going to see.

Entrepreneurship isn’t just about technology, but also needs marketing, manufacturing, and continual improvement

Why is the reaper often known in the history books as the “McCormick reaper,” when Hussey patented first? The answer shows up in the subtitles of articles. For example, Rhodes subtitles his article: “Cyrus McCormick may not have invented the reaper, but he was the entrepreneur who made it successful.” Similarly, Gies writes in the subtitle of his 1990 article: “Cyrus McCormick won it—his famed Virginia reaper came to dominate America’s harvests—but he didn’t win by building the first reaper or, initially, the best.”

But McCormick showed a persistent entrepreneurial drive. He moved to Chicago, so that his reapers would have better access to markets in the Midwest and West, leaving the eastern market to Hussey. He continually improved the reaper. He pushed ahead on manufacturing and marketing. Rhodes writes:

“But based on overlapping information from sources cited by both sides of the family, it seems likely that Cyrus and Robert both contributed to the McCormick reaper of 1831. And so did their slave, Jo Anderson, and so did a local blacksmith, John McCown. It also seems possible that Cyrus and Robert obtained knowledge of previous attempts to develop a practical reaper. … [But] who supplied the entrepreneurial power that brought the reaper into common use? And the answer is clearly Cyrus McCormick.”

Personnel is Policy: Presidential Appointments

There are over 1,200 positions in the US government (not counting judges or military appointments) that require the US Senate to confirm a candidate nominated by the president. (These are a subset of about 3,700 positions that require the president to appoint someone, but most of the positions in this broader group don’t require Senate confirmation.) Often, the people appointed to these positions have a reasonable degree of day-to-day discretion in decision-making: in that sense, as those inside the DC beltway like to say, “personnel is policy.” As President Trump pushes ahead with his proposed appointments, what is the historical record of success for such appointments in the US Senate? Anne Joseph O’Connell compiles the evidence in “Staffing federal agencies: Lessons from 1981-2016,” a report written for the Brookings Institution (April 17, 2017).

Here’s a figure showing the success rate of presidential appointments in receiving Senate confirmation, going back to the 97th Congress, during 1981-82 at the start of President Reagan’s first term.

I’ll mostly leave it to readers to sort through the years and whether the president was facing a Senate of the same party. But a few points seem worth noting:

1) It’s common for presidents to have 20% or more of their nominees not make it through Senate confirmation, especially later in presidential terms.

2) The share of presidential nominees not making it through US Senate confirmation has been rising over time, and for President Obama, 30% of his nominees didn’t make it through the Senate.

3) Perhaps unexpectedly, the share of President Obama’s nominees who didn’t make it through the Senate was only a bit higher during his last two years in 2015-2016, when Republicans controlled the Senate, than it was during 2013-2014, when Democrats controlled the Senate. Moreover, remember that in November 2013, the Democrats running the US Senate changed the rules so that it was no longer possible to filibuster presidential appointees. But as O’Connell points out:

“Failure rates, however, increased for free-standing executive agencies, within and outside the Executive Office of the President, and for national councils. More surprisingly, confirmation delays for agency nominations increased across the board. For those successful 2013 nominations (a few of which were actually confirmed in December after the change), they took 95 days, on average. In 2014, delays ballooned to 150 days. Indeed, it was the biggest jump in any given year in an administration between any two presidents.”

4) O’Connell also makes the interesting point that, over the years, often 15-20% or more of these presidentially appointed positions are not filled at any given time. She writes that President Obama appointed fewer people, and as a result ended up leaving positions open more often.

“Interestingly, President Obama submitted fewer nominations (2828) than any of the other two-term presidents. President George W. Bush submitted the highest (3459); Presidents Reagan and Clinton each submitted around 3000 (2929 and 3014, respectively). … President Obama submitted fewer nominations than his predecessors, allowing acting officials to fill in for many important positions. President Trump could follow suit.”

My own sense is that far too many low-level nominations are held up for dubious and extraneous reasons by individual senators or small groups of senators. For example, late in the Obama administration the board that is supposed to oversee the US Postal Service had zero members out of the nine possible appointments. The reported reason is that Senator Bernie Sanders put a hold on all possible appointees, as a show of solidarity with postal workers. If it isn’t obvious to you how Sanders preventing President Obama from appointing new board members would influence the US Postal Service in the directions that Sanders would prefer, given that President Trump could presumably now appoint all nine members of the board, you are not alone.

This system of 1,200-plus presidential appointments requiring Senate approval seems dysfunctional, but that doesn’t alter the reality that these 1,200-plus appointments are among the most important steps a president can take.