Macroprudentialism (An E-book)

In the old days of macroeconomics, say up to 2007, macroeconomic policy was almost entirely about fiscal and monetary policies. But in the last few years, an alternative called "macroprudential policy" has risen to prominence. The notion is to affect the macroeconomy by using financial regulations: rules about the permissible extent of bank capital and collateral requirements, consumer borrowing, margin requirements for financial trades, which derivatives are allowable, and more. Janet Yellen has argued that when it comes to financial stability, and the risks it poses to macroeconomic stability, macroprudential policy needs to play a primary role. Here's a discussion of the past use of what we would now call macroprudential tools in the U.S. economy.

For a useful starting point to the topic, Dirk Schoenmaker has edited an e-book called Macroprudentialism, a VoxEU.org eBook from the Duisenberg School of Finance and the Centre for Economic Policy Research (CEPR), which includes 15 short chapters from various perspectives. Here is a scattering of the comments about macroprudential regulation that especially struck me.

Anil K Kashyap, Dimitrios P Tsomocos, and Alexandros P. Vardoulakis: "While virtually every central banker in the world is on record supporting the concept of ‘macroprudential regulation’, there is still no agreed upon definition of what it means or how best it should be implemented."

Paul Tucker: "Legislators have typically favoured rules-based regulation. That is for good reason: it helps to guard against the exercise of arbitrary power by unelected officials. But a static rulebook is the meat and drink of regulatory arbitrage, which is endemic in finance. Finance is a ‘shape-shifter’. That makes it hard to frame a regime that keeps risk-taking in the system as a whole within tolerable bounds. Instead, excessive risk-taking is likely to migrate to less regulated or unregulated parts of the system. Thus, with the re-regulation of de jure banks currently underway, some of the economic substance of banking will, again, inevitably re-emerge elsewhere. For example, anybody holding low-risk securities can, in principle, build themselves a shadow bank by lending out their securities for cash and investing the proceeds in a riskier credit portfolio. …

"This shape-shifting dynamic can leave policymakers in a game of catch-up, responding only as each metamorphosis becomes systemically significant. Unless they are empowered to respond flexibly, it is a game they are doomed to lose. By the time the products of regulatory arbitrage are evidently systemically significant, those in the driving seat will likely have the lobbying power to delay or derail reform. The powerful forces mobilised to oppose reform of the globally significant US money market fund industry illustrate that in capital letters.

"A number of implications for the design of macroprudential regimes flow from these features of the financial world. First, it will not be sufficient for bank regulation to be dynamically adjusted. It will also be necessary, for example, to vary minimum collateral (margin, haircut) requirements in derivatives and money markets when a cyclical upswing is morphing into exuberance; to tighten the regime applying to a corner of finance that is shifting from systemic irrelevance to a systemic threat; and to tighten the substantive standards, not only the disclosure standards, applying to the issuance of securities when the pattern of aggregate issuance is driving or facilitating excessive borrowing by firms or households. That means, second, that if finance remains free to innovate, adapt and reshape itself, every kind of financial regulator must be in the business of preserving stability. That needs to be incorporated into their statutory mandates and, more generally, into the design of regulatory agencies."

Charles A. E. Goodhart: "As the Global Financial Crisis struck, central banks were saddled with two objectives at the same time: price stability and financial stability. With the policy interest rate predicated to achieve price stability, we needed a second instrument to maintain financial stability; hence macroprudential instruments. … As long as macroprudential instruments are able to vary the capital ratio applicable to loans, they could be effective, but only time will show how effective. … I have argued that central banks have now been allocated responsibility for financial stability, whether keen to do so or not. If so, it would seem odd not to also give them command over the main levers (i.e. instruments) for achieving such stability. Moreover, several of these instruments involve either imposing requirements on banks – e.g. state-varying capital requirements – or changes to the central bank’s own portfolio – e.g. acting as market-maker of last resort via credit expansion (CE) – that would seem necessarily to be within the natural province of central bank decision-making."

Claudio Borio: "There is no doubt that macroprudential frameworks must be part of the solution to the perennial quest for the so far elusive goal of lasting financial stability. Adopting a more systemic orientation in prudential arrangements is essential. But intellectual pendulums have a habit of swinging too far. There is a risk of entertaining unrealistic expectations about what macroprudential schemes can do on their own. If these expectations become entrenched in policy, there is even an outside risk that, far from being part of the solution, macroprudential frameworks could paradoxically become part of the problem. Complacency is always not too far around the corner. If the quest for financial stability has proved so elusive, it must be for a reason. Put differently, macroprudential policy must be part of the answer but it cannot be the whole answer. Other policies also need to play their part, not least monetary and fiscal policy. And making the most of macroprudential frameworks calls for a mix of ambition and humility – ambition to make systematic use of the available tools; humility in recognising their limitations."

Wolf Wagner: "The typical regulatory cycle looks as follows. An unwanted behaviour in the financial system is observed and is attributed to a market failure. Policymakers devise a policy that specifically targets this failure. Upon implementation, it is then discovered that the policy does not work. This is because financial institutions circumvent the spirit of the policy by shifting into economically equivalent activities that are not affected by regulation. In addition, the responses of market participants often lead to undesirable outcomes in other parts of the financial system. The apparent failure of regulation in turn leads to a series of new, and increasingly complex, measures, which by themselves bring about further unintended consequences. …

"Naively designed macroprudential policies are likely to have unintended effects. Due to the inherently complex nature of systemic policies, the scope for such side effects is much larger than for traditional policies, and may easily come to outweigh the benefits. Policymakers need to step up their efforts in making sure that new macroprudential policies are incentive-compatible and do not distort the behaviour of participants in the financial system."

Snapshots of Islamic Banking

Islamic banking is apparently being rebranded with a new name, "participation banking," at least according to the World Islamic Banking Competitiveness Report 2014–15: Participation Banking 2.0, which is published by the firm that used to be called Ernst & Young, before it was rebranded to EY in 2013.

The idea of Islamic banking has been around for a while now: for example, the Journal of Economic Perspectives (where I've worked since 1987 as managing editor) ran an article touching on the subject back in the Fall 1995 issue, "Islamic Economics and the Islamic Subeconomy" (9:4, 155-173). Then as now, there is some controversy over the extent to which Islamic banking involves changing the labels on financial contracts without (much) altering the financial flows, or whether it represents a fundamentally different kind of banking arrangement. I remember talking to a friend at a U.S. mortgage financing firm who told me that if someone walked in the door and wanted to apply for a mortgage, he would reach into a file in his desk and pull out the paperwork. If they said they wanted an Islamic mortgage, he would reach into a different file in his desk and pull out different paperwork with different wording and a religious stamp of approval–but from the company's internal accounting point of view, the two sets of paperwork described identical obligations about payments.

For a more recent overview of how Islamic banking works from an academic point of view, one starting point is "Islamic vs. Conventional Banking: Business Model, Efficiency and Stability," by Thorsten Beck, Asli Demirgüç-Kunt, and Ouarda Merrouche, an October 2010 Policy Research Working Paper from the World Bank. They write:

Sharia-compliant finance does not allow for the charging of interest payments (riba) as only goods and services are allowed to carry a price. On the other hand, Sharia-compliant finance relies on the idea of profit-, loss-, and risk-sharing, on both the liability and asset side. In practice, however, Islamic scholars have developed products that resemble conventional banking products, replacing interest rate payments and discounting with fees and contingent payment structures. In addition, leasing-like products are popular among Islamic banks, as they are directly linked to real-sector transactions. Nevertheless, the residual equity-style risk that Islamic banks and its depositors are taking has implications for the agency relationships on both sides of the balance sheet as we will discuss below. … Together, our empirical findings suggest that conventional and Islamic banks are more alike than previously thought. Differences in business models – if they exist at all – do not show in standard indicators based on financial statement information. Other differences, such as cost efficiency, seem to be driven more by country rather than bank type differences.

So how does it work to have a "bank" that can't charge interest to borrowers, nor pay interest to depositors? The answer helps to explain why EY and others are starting to use the name "participation banking." Timur Kuran explained in the 1995 JEP article mentioned above:

The literature on Islamic banking does not specify how a depositor and his bank, or the bank and a borrower, are to apportion risk. It insists only that each of the parties to a financial contract must bear some share of the risk. In principle, one side could carry just one-twentieth of the risk, although some writers caution that the risk shares must conform to customary notions of fairness.

Thus, Islamic banks accept various kinds of deposits, which have varying degrees of participation in the profits that a bank earns from its lending. However, while the "profit share" received by ordinary bank depositors typically does fluctuate a bit in the short term, it looks a lot like an interest payment in the medium and long run. Indeed, some Islamic banks also pay "bonuses" to depositors when they feel it is useful.

On the lending side, there are again a variety of financial contracts with different risk-sharing properties. Kuran describes one of them:

Let us say a cash-poor industrialist needs a new computer. His Islamic bank buys the computer, marks up its price, and then transfers to him the computer's ownership; in return, our industrialist agrees to pay the bank the marked-up price in a year's time. If the predetermined markup rate were identical to the prevailing interest rate, this murābaha contract would be essentially equivalent to an interest-based contract. But there would still be one difference, which Islamic economics considers critical: during the period when the computer was owned by the bank, the bank would carry all the risks of ownership, including those of theft, fire, and breakage. In practice, however, the bank's ownership generally lasts just a few seconds, so its exposure to risk is negligible. Ordinarily, therefore, murābaha serves as a cumbersome form of interest.

Or as the Beck, Demirgüç-Kunt and Merrouche team explains:

Similarly, operating leases (Ijara) where the bank keeps ownership of the investment good and rents it to the client for a fee are feasible financial transactions under Sharia-law. While the discounting of IOUs and promissory notes is not allowed under Sharia-law as it would involve indirect interest rate payments, a similar structure can be achieved by splitting such an operation into two contracts, with full payment of the amount of the IOU on the one hand, and a fee or commission for this pre-payment, on the other hand.

In short, Islamic banking is also "participation banking" because it requires a degree of participation by bank depositors, banks, and those who are receiving financing. But the extent of participation often seems to be flexible.
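To see the near-equivalence Kuran describes in concrete terms, here is a minimal sketch comparing a one-year murābaha markup with a conventional loan; the price and rates are hypothetical numbers chosen for illustration, not taken from any actual contract.

```python
# Compare a one-year murabaha contract with a conventional loan.
# The price and rates are hypothetical, for illustration only.

price = 10_000        # cash price of the computer
interest_rate = 0.08  # prevailing annual interest rate
markup_rate = 0.08    # bank's predetermined murabaha markup

# Conventional loan: borrow the purchase price, repay with interest.
loan_repayment = price * (1 + interest_rate)

# Murabaha: the bank buys the computer, marks it up, and transfers
# ownership; the client pays the marked-up price in a year.
murabaha_payment = price * (1 + markup_rate)

print(f"Conventional repayment in one year: {loan_repayment:,.0f}")
print(f"Murabaha payment in one year:       {murabaha_payment:,.0f}")
# When the markup rate equals the interest rate, the two payment
# obligations are identical; the economic difference is confined to
# the brief interval in which the bank bears the risks of ownership.
```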

The EY report sidesteps these kinds of issues completely, but offers some snapshots of the current size of the Islamic banking or participation banking sector, and some business challenges confronting it. Apparently Islamic banks in six countries — Qatar, Indonesia, Malaysia, Saudi Arabia, Turkey, and the UAE — represent over 80% of the global assets of "participation" banks. Growth in participation banking has been rapid in recent years.

However, participation banking remains a minority of the banking sector in most countries where it exists. Thus, Islamic banks must compete against conventional banks.

Although Islamic banking has been growing briskly, the EY report notes two difficulties for participation banks looking ahead. First, the return on equity has been slightly lower in participation banks over time. Second, EY looked at 2.2 million comments in online sources about satisfaction with Islamic banks. The report concludes: "Results show that for many Participation banks, consumer satisfaction is, at best, mediocre. … Participation banks need to seriously strengthen customer experience on offer …"

Thus, the business challenge for Participation banks is clear. They can point to prominent endorsements from Islamic scholars, which should give them a competitive edge in countries with a substantial Muslim population, and Islamic banks can surely sustain themselves with this loyal pool of potential customers. But do depositors and those receiving finance from Islamic banks end up, over time and on average, with financial products that are on nonreligious dimensions inferior to what they would have preferred? On the other side, if Islamic banks use carefully chosen relabelling of financial flows so that they offer products that are largely identical in financial flows and risk characteristics to what is available from conventional banks, then their identity as "Islamic" banks and their built-in pool of loyal customers may diminish. After all, I'm confident that a number of conventional banks can find ways to link themselves with Islamic values in the minds of many bank depositors and businesses, too.

Uncertainty and Management: Interview with Nicholas Bloom

Renee Haltom interviews Nicholas Bloom in Econ Focus, published by the Federal Reserve Bank of Richmond (Second Quarter, 2014, pp. 22-26). I've worked with Bloom twice on articles for the Journal of Economic Perspectives. In the Winter 2010 issue, he wrote with John Van Reenen about their research on "Why Do Management Practices Differ across Firms and Countries?" (24:1, 203-24). In the Spring 2014 issue, he wrote about "Fluctuations in Uncertainty" (28:2, 153-76). These two subjects are also the central topics in this interview.

Jumps in Uncertainty

The old example of an uncertainty shock that I used in my Ph.D. work in the early 2000s was 9/11. This event generated a spike in every measure of uncertainty. Then the Great Recession hit, and this made the 9/11 uncertainty spike look like a small blip. Measures of uncertainty — like the VIX index of stock market volatility [the Chicago Board Options Exchange Market Volatility Index], which measures the market’s expected volatility over the next 30 days — went up by about 500 percent. Similarly, newspaper indices of uncertainty jumped up by about 300 percent. Even the Federal Reserve’s Beige Book had a surge of discussion of uncertainty — before the Great Recession, each month had about three or four mentions of the word “uncertain,” but after the Great Recession it hit nearly 30.

Interestingly, the Great Depression of 1929-1933 was another period where there was broad concern over uncertainty. Newspaper coverage of uncertainty and stock market volatility rose sharply in this period. In fact, one of Ben Bernanke’s key papers before he became Fed chairman was, amazingly, on how uncertainty can impair investment. Christina Romer, chair of President Obama’s Council of Economic Advisers during the Great Recession, had studied uncertainty too. So some of the key policymakers in Washington at the time were acutely aware of what uncertainty could do to an economy. 

How Policy Can Generate Uncertainty

It may be that policy actions generate more uncertainty damage than help. One reason is that policymakers have an incentive to be policy hyperactive. I saw this when I worked in the U.K. Treasury. Politicians had to be seen as acting in response to bad events; otherwise, the public and media claimed they were not responding or, worse, claimed they didn’t care. So politicians would act, often based on partial information or hastily developed ideas, when often the best course would be to stay calm and inactive. So hasty or unpredictable policy response to recessions can actually make the recessions worse. A classic example is the accelerated depreciation allowance that Congress debated introducing for several months after the 9/11 attacks. Many commentators argued that this delayed the recovery as businesses waited to see what the decision would be. In fact, the Nov. 6, 2001, FOMC minutes even contained an explicit discussion of the damaging policy uncertainty this introduced.

Uncertainty and the Great Recession

The full experiment is this: If you held everything else constant and did not have the rise in uncertainty, what would have happened to the drop in economic output? I think, based on some rough calculations I lay out in my 2014 Journal of Economic Perspectives paper, that the recession would have been about one-third less. So I think uncertainty was a major factor, though not the biggest factor, which I think was a combination of the housing and financial crises.

On the Importance of Management to Productivity Differences

Economists have, in fact, long argued that management matters. Francis Walker, a founder and the first president of the American Economic Association, ran the 1870 U.S. census and then wrote an article in the first year of the Quarterly Journal of Economics, “The Source of Business Profits.” He argued that management was the biggest driver of the huge differences in business performance that he observed across literally thousands of firms. Almost 150 years later, work looking at manufacturing plants shows a massive variation in business performance; the 90th percentile plant now has twice the total factor productivity of the 10th percentile plant. Similarly, there are massive spreads across countries — for example, U.S. productivity is about five times that of India. Despite the early attention on management by Francis Walker, the topic dropped down a bit in economics, I think because “management” became a bad word in the field. Early on I used to joke that when I turned up at seminars people would see the “M-word” in the seminar title and their view of my IQ was instantly minus 20. Then they’d hear the British accent, and I’d get 15 back.

Some Drivers of Good Management 

I think the key driver of America’s management leadership has been its big, open, and competitive markets. If Sam Walton had been based in Italy or in India, he would have five stores by now, probably called “Sam Walton’s Family Market.” Each one would have been managed by one of his sons or sons-in-law. … 

The absence of rule of law is a killer for good management. If you take a case to court in India, it takes 10 to 15 years to come to fruition. In most developing countries, the legal system is weak; it is hard to successfully prosecute employees who steal from you or customers who do not pay their invoices, leading firms to use family members as managers and supply only narrow groups of trusted customers. This makes it very hard to be well managed — if most firms have the son or grandson of the founder running the firm, working with the same customers as 20 years ago, then it shouldn’t be surprising that productivity is low. These firms know that their sons are often not the best manager, but at least they will not rampantly steal from the firms.

Future Research on Management 

These have been major milestones in management technologies, and they’ve changed the way people have thought. They were clearly identified innovations, and I don’t think there’s a single patent among them. These management innovations are a big deal, and they spread right across the economy. In fact, there’s a management technology frontier that’s continuously moving forward, and the United States is pretty much at the front with firms like Walmart, GE, McDonald’s, and Starbucks. And then behind the frontier there are a bunch of laggards with inferior management practices. In America, these are typically smaller, family-run firms. … 

Anything that can be said to be “high” or “low” can be quantified, and economics is good at this; it’s one of our strengths as a social science. … There’s an old saying: What gets measured gets managed. I think in economics it’s what gets measured gets researched. …  Likewise with management — we hope if we can build a new multifirm and multicountry database, we can spur the development of the field.

Economic Gains from Shale Oil and Gas

It\’s clear that extracting oil and gas from shale formations has led to an economic boom in North Dakota. But what is a plausible estimate of how it will affect the U.S. economy as a whole? The Congressional Budget Office addresses this question in \”The Economic and Budgetary Effects of Producing Oil and Natural Gas From Shale\” (December 2014). To set the stage for their findings, it\’s useful to remember three points.

1) The amount of energy used relative to GDP has been declining in the U.S. for decades, with the ongoing shift to an economy where the role for services is relatively larger compared to manufacturing. Here's a figure showing energy consumption relative to GDP, with the ratio in 1950 set equal to 1.0. You can see that even in the last couple of decades, since about 1990, the ratio has fallen by about one-third (from .6 to .4). And the projections are that in the next few decades, the ratio will fall by almost another 50% (from .4 to .2). When energy is less important to an economy, it follows that cheaper and more available energy has less of an effect; the quick calculation after the figure makes this concrete.

[Figure: U.S. primary energy consumption relative to GDP, indexed to 1.0 in 1950.]
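A quick check of the ratios just quoted, reading the approximate values off the figure:

```python
# Energy consumption relative to GDP, indexed to 1.0 in 1950.
# Values are read off the figure, so treat them as approximate.
intensity_1990 = 0.6
intensity_now = 0.4
intensity_projected = 0.2

decline_recent = 1 - intensity_now / intensity_1990
decline_ahead = 1 - intensity_projected / intensity_now

print(f"Decline since about 1990:  {decline_recent:.0%}")  # about one-third
print(f"Projected further decline: {decline_ahead:.0%}")   # about one-half
```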

2) Oil prices are set in a global market, because oil is transported fairly easily around the world. Thus, any effect of additional U.S. oil production on prices is relatively mild, because it is spread over the global economy. However, natural gas is not transported easily all around the world, and won\’t be without some major infrastructure investments. So additional U.S. production of natural gas can have a more substantial effect on natural gas prices in the U.S. over the next few decades.

3) When thinking about how shale oil and gas benefit the economy over the long-term, you can\’t just look at revenues from these products or jobs in these areas. Over the long run, more investment in this area is offset to some extent by less investment in other areas (both in energy and in other industries), and more workers in this area are offset to some extent by fewer workers in other areas.
With the U.S. economy in or recovering from recession over the last six years, the investment and job-creation benefits of additional energy have been especially important, because they are a bright spot in an economy with a lot of slack. Over the longer term, the broader gains from shale oil and gas across the economy arise from a reallocation of labor and capital toward more productive uses. This includes not just those actually drilling for oil and gas, but all the downstream effects of, say, electrical utilities and manufacturing companies that can benefit from lower natural gas prices.

Taken as a group, these points suggest that the economic effects of shale oil and gas on the U.S. economy are real and positive, but perhaps less than some commentators may be implying.

There\’s no dispute that the unconventional drilling techniques have dramatically increased U.S. output of oil and natural gas–in fact, making the U.S. the single largest natural producer. The CBO writes (footnotes omitted):

Recent advances in combining two drilling techniques, hydraulic fracturing and horizontal drilling, have allowed access to large deposits of shale resources—that is, crude oil and natural gas trapped in shale and certain other dense rock formations. … Virtually nonexistent a decade ago, the development of shale resources has boomed in the United States, producing about 3.5 million barrels of tight oil per day and about 9.5 trillion cubic feet (Tcf) of shale gas per year. Those amounts equal about 30 percent of U.S. production of liquid fuels (which include crude oil, biofuels, and natural gas liquids) and 40 percent of U.S. production of natural gas.

Here\’s a figure showing overall U.S. production and consumption of natural gas and \”liquid\” fuels like oil. As you can see, the projections are that the U.S. will be producing more natural gas than it consumes in the near future, but will continue to consume more oil than it produces. Given the current high costs of shipping natural gas outside of North America, it makes some sense (on both economic and environmental grounds) to find ways to expand use of natural gas for electricity generation and manufacturing in place of oil or coal–and indeed, such a shift is already taking place.

The CBO report looks at economic gains from shale oil and gas in the short run and the long run. In the short run, it lists the following (a quick check of the market-value arithmetic follows the list).

Increased Output of Oil and Gas. Shale development has increased U.S. output of tight oil and shale gas, raising GDP. The market value of shale gas produced in 2013 (reflecting the contributions of both the gas industry and the other industries that supply goods and services used to produce shale gas) was about $35 billion. In the same year, the market value of tight oil, including natural gas plant liquids produced by hydraulic fracturing, was about $160 billion. Combined, sales of shale gas and tight oil therefore totaled about $195 billion, or roughly 1.2 percent of GDP.

Increased Investment in the Oil and Gas Industry and in Supporting Industries. Shale development has probably raised GDP in recent years through greater spending on the development of new wells. Between 2004 and 2012, investment in the oil and gas extraction industry increased from 0.4 percent of GDP to 0.9 percent. …

Increased Investment and Production in Other Industries. Industries that use natural gas intensively—such as the steel, petrochemical, fertilizer, and electricity industries—have expanded production to take advantage of energy prices that are lower than they would have been without shale development.  …

Increased Demand. Higher employment resulting from shale development, along with a larger capital stock resulting from increased investment in the development and use of shale resources, has led to higher household income and thus greater demand for goods and services. Some of that increased demand has been met by the additional production from the energy-intensive industries described above. However, much of the increase has been for products supplied by firms that do not directly benefit from lower natural gas and oil prices. In order to meet the increased demand, those firms have increased employment and investment, which has raised GDP still further in the short term.
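A minimal check of the market-value arithmetic in the first item, which also backs out the GDP base the CBO is implicitly using:

```python
# Market values of shale output in 2013, from the CBO report.
shale_gas = 35e9    # dollars
tight_oil = 160e9   # dollars

total = shale_gas + tight_oil
share_of_gdp = 0.012  # "roughly 1.2 percent of GDP"

print(f"Combined sales: ${total / 1e9:.0f} billion")  # $195 billion
print(f"Implied GDP:    ${total / share_of_gdp / 1e12:.1f} trillion")
# The implied base of roughly $16 trillion is consistent with
# actual U.S. GDP in 2013.
```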

What about economic benefits in the longer run? Here's a sample of the CBO analysis:

Shale development will raise GDP in the longer term in two ways: increasing the productivity of existing labor and capital, and increasing the amount of labor and capital in use. CBO estimates that, as a result, real GDP will be 0.7 percent higher in 2020 and 0.9 percent higher in 2040 than it would have been without shale development, although those estimates are subject to considerable uncertainty. The longer-term effects of shale development on GDP will probably be smaller than the near-term effects described above …

Shale development raises GDP by increasing the productivity of labor and capital. That increased productivity is projected to make GDP 0.4 percent higher in 2020 and 0.5 percent higher in 2040 than it would have been in the absence of shale development. … CBO estimates that the value of the tight oil and shale gas produced in both 2020 and 2040 will be about 1.3 percent of real GDP. But in the absence of hydraulic fracturing and horizontal drilling, CBO estimates, the labor and capital now projected to be used to produce that output would contribute only about 1.0 percent to GDP in 2020 and about 0.9 percent in 2040. The boost to GDP from reallocating labor and capital into the production of tight oil and shale gas is the difference between those estimates: about 0.3 percent of GDP in 2020 and 0.4 percent in 2040. …

The rest of the increased productivity comes from labor and capital that are used more efficiently elsewhere in the economy because of increased consumption of oil and gas. As energy-intensive products and methods of production grow cheaper, the same amount of output can be produced with less labor and capital. For example, as the cost of generating electricity from gas has fallen, some electric utilities have increased their productivity by switching from coal to gas. Through such shifts, GDP will be about 0.1 percent higher in both 2020 and 2040 than it would have been without shale development, CBO estimates. …

Shale development will also raise GDP by increasing the amounts of labor and capital used in the economy, in CBO’s assessment. That increase will happen in at least two ways. First, the increase in output generated by higher productivity that was described above will result in additional income; part of that income will be saved and then invested, increasing the capital stock. Second, the higher productivity will increase wages, improving the return to workers from each hour of work and encouraging them to work more. Because of those effects, CBO estimates, GDP will be 0.3 percent higher in 2020 and about 0.4 percent higher in 2040 than it would have been without shale development. …

Shale development confers an economic benefit that raises the standard of living in the United States but does not show up as greater GDP. Specifically, increased net exports of natural gas and oil boost the value of the dollar, making imports cheaper and allowing consumers to buy more and businesses to invest more for a given quantity of exports and a given amount of GDP. CBO has not quantified that effect, however.

Reading about a gain of 0.7% of GDP, it's easy to feel a little disappointed or dismissive. All this fuss over 0.7%? Why bother? But to choose a convenient number, say that U.S. GDP is about $20 trillion in 2020. Then, 0.7% would be equal to $140 billion of economic gains. Shale oil and gas is no panacea for the long-run issues facing the U.S. economy, and America is in no way on a path to becoming a resource-dependent economy like Russia or Venezuela. But the U.S. economy is so enormous that no single industry is capable of making a huge difference, not all by itself. Instead, the future of the U.S. economy will need a number of key industries to step up–and it looks as if oil and gas drilling can be one of them.
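For readers keeping track of the arithmetic, a short sketch reproducing both the CBO's reallocation calculation and the $140 billion figure:

```python
# CBO's reallocation gain: the value share of tight oil and shale gas
# minus what the same labor and capital would have contributed to GDP
# without shale development.
value_share = {2020: 0.013, 2040: 0.013}
counterfactual_share = {2020: 0.010, 2040: 0.009}

for year in (2020, 2040):
    gain = value_share[year] - counterfactual_share[year]
    print(f"{year}: reallocation gain = {gain:.1%} of GDP")

# The headline 0.7% estimate in dollars, using the convenient
# assumption in the text that 2020 GDP is about $20 trillion.
gdp_2020 = 20e12
print(f"0.7% of GDP = ${0.007 * gdp_2020 / 1e9:.0f} billion")
```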

P.S. In this post, I've not discussed the environmental issues involved with shale oil and gas. However, in previous posts I've argued that if certain "golden rules" of environmental protection are applied, the benefits plausibly outweigh the costs. Indeed, I support what I call the "Drill Baby Carbon Tax," which would combine moving ahead on developing U.S. fossil fuel resources with all deliberate speed, while also imposing a tax to reduce emissions of carbon and taking steps to address other air pollutants and greenhouse gas issues.

Focusing Behavioral Economics on Development Professionals

A torrent of research in economics and psychology in the last quarter-century has focused on how people actually make decisions. As many of us who have vowed to start a diet or get more exercise or save more money can attest, the ways in which real-world people make decisions can get in the way of accomplishing our goals. The 2015 World Development Report from the World Bank, with the theme of "Mind, Society, and Behavior," offers a useful overview of the ways in which these issues of "behavioral economics" affect the welfare of low-income people around the world. But at least to me, the single most striking part of the report is that it focuses the lens of behavioral economics not just on people in low-income countries, but also on development professionals themselves.

The first few chapters of the report are organized around three groups of common behavioral biases: "First, people make most judgments and most choices automatically, not deliberatively: we call this “thinking automatically.” Second, how people act and think often depends on what others around them do and think: we call this “thinking socially.” Third, individuals in a given society share a common perspective on making sense of the world around them and understanding themselves: we call this “thinking with mental models.”" The next six chapters explore how these factors play out in the context of poverty, early childhood development, household finance, productivity, health, and climate change.

This part of the report is full of interesting studies. Here, for example, is one about thinking automatically:

Fruit vendors in Chennai, India, provide a particularly vivid example. Each day, the vendors buy fruit on credit to sell during the day. They borrow about 1,000 rupees (the equivalent of $45 in purchasing parity) each morning at the rate of almost 5 percent per day and pay back the funds with interest at the end of the day. By forgoing two cups of tea each day, they could save enough after 90 days to avoid having to borrow and would thus increase their incomes by 40 rupees a day, equivalent to about half a day’s wages. But they do not do that. … Thinking as they always do (automatically) rather than deliberatively, the vendors fail to go through the exercise of adding up the small fees incurred over time to make the dollar costs salient enough to warrant consideration.
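The compounding at work in this example is easy to simulate. In the sketch below, the price of two cups of tea is a hypothetical figure (the report does not state it), chosen so that simple saving of the tea money accumulates to the loan amount in roughly the 90 days mentioned in the quote:

```python
# A Chennai fruit vendor borrows 1,000 rupees each morning at almost
# 5 percent per day. The 11.5-rupee price for two cups of tea is a
# hypothetical assumption, picked so plain saving takes about 90 days.

loan = 1000.0
daily_rate = 0.05
tea_saving = 11.5

# Simple accumulation: just stash the tea money each day.
days, savings = 0, 0.0
while savings < loan:
    savings += tea_saving
    days += 1
print(f"Stashing the tea money: debt-free after {days} days")

# Smarter: use the savings to shrink each morning's loan, so the
# avoided 5 percent daily interest compounds on top of the tea money.
days, savings = 0, 0.0
while savings < loan:
    savings = savings * (1 + daily_rate) + tea_saving
    days += 1
print(f"Reinvesting the avoided interest: debt-free after {days} days")
```

Running the sketch shows that letting the avoided interest compound cuts the time to debt-free status by more than half, which is exactly the deliberative calculation the vendors are not making.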

In this as in many other examples, it can sometimes seem in the behavioral economics literature that people are trapped by their own inconsistencies and unable to accomplish their own desires–unless they are nudged along by the beneficent and well-intended aid of an all-seeing policymaker. But the lessons of behavioral economics apply to everyone, the policymaker as much as the typical citizen. The World Bank report, greatly to its credit, applies some of the exercises that reveal behavioral biases directly to its own development professionals. It should come as no surprise, of course, that they exhibit similar biases to everyone else.

One example of "thinking automatically" is a "framing effect"–that is, how a question is framed has a strong influence on the outcome. A typical finding is that people are loss-averse, meaning that they react differently when a choice is framed in terms of losses than when the same options are offered but phrased in terms of gains. Here's the WDR:

One of the most famous demonstrations of the framing effect was done by Tversky and Kahneman (1981). They posed the threat of an epidemic to students in two different frames, each time offering them two options. In the first frame, respondents could definitely save one-third of the population or take a gamble, where there was a 33 percent chance of saving everyone and a 66 percent chance of saving no one. In the second frame, they could choose between a policy in which two-thirds of the population definitely would die or take a gamble, where there was a 33 percent chance that no one would die and a 66 percent chance that everyone would die. Although the first and second conditions frame outcomes differently—the first in terms of gains, the second in terms of losses—the policy choices are identical. However, the frames affected the choices students made. Presented with the gain frame, respondents chose certainty; presented with a loss frame, they preferred to take their chances. The WDR 2015 team replicated the study with World Bank staff and found the same effect. In the gain frame, 75 percent of World Bank staff respondents chose certainty; in the loss frame, only 34 percent did. Despite the fact that the policy choices are equivalent, how they were framed resulted in drastically different responses.
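The equivalence of the two frames is easy to verify. The sketch below uses a hypothetical population of 600 (the excerpt speaks only in fractions) and treats the quoted 33 and 66 percent as one-third and two-thirds:

```python
# Verify that the "gain" and "loss" frames describe the same policies.
# The population of 600 is a hypothetical size; the frames themselves
# are stated only in fractions.
population = 600

# Gain frame: save one-third for certain, or take the gamble.
certain_saved = population / 3
expected_saved = (1/3) * population + (2/3) * 0

# Loss frame: two-thirds die for certain, or take the gamble.
certain_dead = 2 * population / 3
expected_dead = (1/3) * 0 + (2/3) * population

print(f"Certain policy: {certain_saved:.0f} saved, "
      f"{population - certain_dead:.0f} survivors")
print(f"Gamble: {expected_saved:.0f} expected saved, "
      f"{population - expected_dead:.0f} expected survivors")
# All the numbers agree: only the wording of the frames differs.
```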

As another example, the ability to interpret straightforward numerical data diminishes when that data describes a controversial subject. In one devilishly clever experiment, people were divided into two random groups and presented with the same data pattern. However, some were told that the data was about the non-emotive subject of how well a skin cream worked, while others were told that it was about the effectiveness of gun control laws. People were quite accurate at describing the findings when the data referred to skin cream, but they became much less accurate if the same data were supposed to be describing gun control. The World Bank researchers ran the same test on their own experts. The experts who were given the data as applying to skin cream interpreted it clearly. But the experts who were given the same data as applying to whether a higher minimum wage reduces the poverty rate became much less accurate–apparently because their interpretation was clouded by preexisting prejudices.

Yet another example focuses on the issue of sunk costs. A problem in many projects, including development projects, is that once a lot of money has been spent, there is pressure not to abandon the project, even when it becomes clear that the project is doomed. Here's the scenario from the WDR:

The WDR 2015 team investigated the susceptibility of World Bank staff to sunk cost bias. Surveyed staff were randomly assigned to scenarios in which they assumed the role of task team leader managing a five-year, $500 million land management, conservation, and biodiversity program focusing on the forests of a small country. The program has been active for four years. A new provincial government comes into office and announces a plan to develop hydropower on the main river of the forest, requiring major resettlement. However, the government still wants the original project completed, despite the inconsistency of goals. The difference between the scenarios was the proportion of funds already committed to the project. For example, in one scenario, staff were told that only 30 percent ($150 million) of the funds had been spent, while in another scenario staff were told that 70 percent ($350 million) of the funds had been spent. Staff saw only one of the four scenarios. World Bank staff were asked whether they would continue the doomed project by committing additional funds. While the exercise was rather simplistic and clearly did not provide all the information necessary to make a decision, it highlighted the differences among groups randomly assigned to different levels of sunk cost. As levels of sunk cost increased, so did the propensity of the staff to continue.

Other examples look at the mental models that development experts have of the poor. What do development experts think that the poor believe, and how does it compare to what the poor actually believe? For example, development experts were asked if they thought individuals in low-income countries would agree with the statement: "What happens to me in the future mostly depends on me." The development experts thought that maybe 20% of the poorest third would agree with this statement, but about 80% actually did. In fact, the share of those agreeing with the statement in the bottom third of the income distribution was much the same as for the upper two-thirds–and higher than the answer the development experts gave for themselves!

Similarly, development experts thought that about half of those in the bottom third of the income distribution would agree with the statement: \”I feel helpless in dealing with the problems of life.\” This estimate turns out to be roughly true for Nairobi, an overstatement for Lima, and wildly out of line with beliefs in Jakarta. Presumably, these kinds of answers are important in thinking about how to present and frame development policy.

As a final example, the development experts thought that about 40% of the bottom third of the income distribution would agree with the statement: "Vaccines are risky because they can cause sterilization." It turns out that this is accurate for Lima, but a wild overstatement for Nairobi and Jakarta. Again, such differences of opinion are obviously quite important in designing a public health campaign to encourage more vaccination.

The report summarizes the implications of these kinds of studies unflinchingly:

Experts, policy makers, and development professionals are also subject to the biases, mental shortcuts (heuristics), and social and cultural influences described elsewhere in this Report. Because the decisions of development professionals often can have large effects on other people’s lives, it is especially important that mechanisms be in place to check and correct for those biases and influences. Dedicated, well-meaning professionals in the field of development—including government policy makers, agency officials, technical consultants, and frontline practitioners in the public, private, and nonprofit sectors—can fail to help, or even inadvertently harm, the very people they seek to assist if their choices are subtly and unconsciously influenced by their social environment, the mental models they have of the poor, and the limits of their cognitive bandwidth. They, too, rely on automatic thinking and fall into decision traps. Perhaps the most pressing concern is whether development professionals understand the circumstances in which the beneficiaries of their policies actually live and the beliefs and attitudes that shape their lives …"

The deeper point here is of course not just about the World Bank or development experts, but about all policymakers–and especially policymakers who are seeking to use findings from behavioral economics to nudge ordinary citizens toward alternative courses of action. Policymakers are subject to thinking automatically, social pressures, and misguided mental models as well. Some of the possible answers involve finding ways to challenge groupthink, perhaps by having a group designated to make the case for the other side, or by finding another way to force a more thorough consideration and discussion of alternatives. Another suggestion is that "development professionals should “eat their own dog food”: that is, they should try to experience firsthand the programs and projects they design."

In a U.S. context, perhaps every member of Congress should have to do the following: fill out their own taxes personally by hand, and suffer the legal penalties for any mistakes; personally order health insurance from one of the newly created exchanges; and personally fill out the applications that parents must complete for college student loans, as well as the forms that must be completed for Food Stamps and Medicaid assistance. We are too often governed by people who have gotten used to paying others to fill out the paperwork of life.

Full disclosure: I reviewed and offered suggestions on an intermediate version of this report several months ago, and was paid an honorarium for doing so.

Biosciences Innovation

When thinking about future technology and how it may affect economic growth, it's common enough, for obvious reasons, to focus on the possibilities related to information and communications technology: new possibilities for the internet, robotics, driverless cars, and so much more. It's worth remembering that the powers of computation can combine with biological research to bring breakthroughs in other areas too. William Hoffman draws attention to "The Shifting Currents of Bioscience Innovation" in an article published earlier this year in Global Policy. Hoffman and Leo Furcht also have a just-published book on the subject, The Biologist's Imagination: Innovation in the Biosciences.

Here\’s are a couple of figures from the Hoffman article to illustrate the shockingly rapid change in bioscience innovation in recent decades. This figure shows global population over time, with the timing of various important discoveries shown as well. Robert Fogel did an earlier version of this figure, while Hoffman added detail on biosciences innovations. Innovation in this area has gone from antibiotics and the polio vaccine to sequencing the human genome and synthetic bacterial cells in a little more than half-century.

For a sense of the speed of change in this area, think about the gains from "Moore's law," the relationship first pointed out back in 1965 by Gordon Moore, one of the founders of Intel Corporation, that the number of transistors on a computer chip was doubling every two years. That rate of growth has held up pretty well since then and seems likely to continue for at least a few more years (for earlier discussions of Moore's law on this blog, see here and here). The results of the information technology revolution are all around us. The key point here is that the cost of sequencing a human-sized genome has been falling faster than Moore's law since about 2007.
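For a feel of what "falling faster than Moore's law" means, here is a toy comparison of the two exponential paths. The starting cost and the faster decline rate are illustrative assumptions, not measured values:

```python
# Compare a Moore's-law cost path (halving every two years) with a
# faster hypothetical path (halving every year, standing in for the
# post-2007 drop in genome-sequencing costs). Values are illustrative.

start_cost = 100.0
for t in range(0, 11, 2):  # years from the starting point
    moore = start_cost * 0.5 ** (t / 2)  # halves every two years
    faster = start_cost * 0.5 ** t       # halves every year
    print(f"year {t:2d}: Moore-paced {moore:7.2f}   "
          f"faster-than-Moore {faster:9.4f}")
```

After a decade, the Moore-paced cost has fallen by a factor of 32, while the faster path has fallen by a factor of about 1,000; that widening gap is what the sequencing-cost charts show.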

Up until now, the best-known economic payoffs from the biosciences have been pharmaceuticals (the group of medicines and drugs often called "biologics") and genetically modified crops (which have effects both on food products and on related outputs like biofuels). But my sense is that a wide variety of industrial and even household applications may not be far behind. Here's a sample of Hoffman's argument:

Cutting-edge tools from genomics and bioinformatics, cellular technologies including stem cells, and synthetic biology, with assists from nanotechnology and automation, are poised to revolutionize bioscience productivity. These tools make it possible to sequence and synthesize DNA at an industrial scale, edit genes precisely, control the growth and differentiation of cells and seed them in three-dimensional (3D) constructs, and create microbial factories that produce medicines, chemicals, fuels and materials. They are transforming traditional models of drug discovery and development and diagnostic testing. The more DNA, RNA, and cellular components fall under the purview of bioengineers, the likelier we are to see large-scale production of renewable fuels, biodegradable materials, and safer industrial chemicals.

Genomics is opening a window on genetic alleles that enable food crops to adapt to a changing climate, and synthetic biology is being used to design novel environmental remediation systems. Using 3D printers puts science into the hands of people ‘whether in the far corners of Africa or outer space’ so that they can print drugs on demand. They can be modified to print cells including stem cells, which are key to cellular differentiation and tissue repair. Digitally enabled bioprinting means on-demand tissue and organ production for surgical modelling, medical therapy, drug testing and science education.

At the most basic level, the idea that useful materials can be grown as needed, not just manufactured from raw materials, seems to me a potentially vast breakthrough. A couple of years ago I read the article "Form and Fungus: Can mushrooms help us get rid of Styrofoam?" by Ian Frazier in the New Yorker magazine (May 20, 2013). He tells the story of Ecovative, which essentially uses fungus, like clever mushrooms, as its production process to replace Styrofoam. Frazier writes:

The packing material made by their factory takes a substrate of agricultural waste, like chopped-up cornstalks and husks; steam-pasteurizes it; adds trace nutrients and a small amount of water; injects the mixture with pellets of mycelium [this is the fungus part]; puts it in a mold shaped like a piece of packing that protects a product during shipping; and sets the mold on a rack in the dark. Four days later, the mycelium has grown throughout the substrate into the shape of the mold, producing a material almost indistinguishable from Styrofoam in form, function, and cost. An application of heat kills the mycelium and stops the growth. When broken up and thrown into a compost pile, the packing material biodegrades in about a month.

It turns out that when you start thinking about the properties of funguses grown in shaped molds, all sorts of possibilities arise, like the potential for pieces of insulation or even replacements for wood. As the rapidly developing power of the biosciences begins to interact with this new mindset–which I think of as "grow it, don't manufacture it"–I suspect that a wide array of goods and services will be affected.

Hoffman is also up-front that the future "bioeconomy" may raise some difficult ethical and safety questions. The questions can in some cases be difficult and worth deep consideration. But I'll add that there is no reason to believe that the answers to these questions will be determined by the politicians, regulators, scientists, and citizens of high-income countries like the United States or the European Union. Top-level biosciences research is happening all over the world, very much including China, India, Brazil, and other locations. A revolution in biosciences is coming, whether or not the U.S. decides to let its domestic researchers and domestic companies participate fully.

Milk Production: Economies of Scale, Agriculture, Management

I\’m always on the lookout for real-world examples of economies of scale: that is, situations where expanding the scale of output leads to lower average costs of production. I offer a range of examples and thoughts on the subject here, and an example of economies of scale in financial asset management here. But a number of vivid examples of economies of scale come from the U.S. agricultural sector.

James MacDonald and Doris Newton offer an example in "Milk Production Continues Shifting to Large-Scale Farms," which appears in the December 1, 2014 issue of Amber Waves, published by the U.S. Department of Agriculture. They point out that the number of small dairy farms is falling, while the number of large farms is rising:

In 2012, there were still nearly 50,000 dairy farms with fewer than 100 cows, but that represented a large decline from 20 years earlier, when there were almost 135,000. Over the same period, the number of dairy farms with at least 1,000 cows more than tripled to 1,807 farms in 2012. Movements in farm numbers were mirrored by movements in cow inventories. Farms with fewer than 100 cows accounted for 49 percent of the country’s 9.7 million milk cows in 1992, but just 17 percent of the 9.2 million milk cows in 2012. Meanwhile, farms with at least 1,000 cows accounted for 49 percent of all cows in 2012, up from 10 percent in 1992.

If you graph the underlying data, the mean or average size of a dairy herd has more than doubled in the last 20 years, from 61 cows to 144. But the midpoint of the herd size–that is, the herd size at which half of all cows are in a larger herd and half are in a smaller one–has gone from 101 cows back in 1992 to 900 cows in 2012.
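The gap between the mean and the "midpoint" is worth pinning down, since the midpoint is a cow-weighted median rather than a farm-weighted one. A minimal sketch with a made-up distribution of herds:

```python
# Mean herd size versus the cow-weighted "midpoint": the herd size at
# which half of all cows are in larger herds and half in smaller.
# The herd distribution below is made up for illustration.

herds = [40] * 50 + [120] * 20 + [900] * 5 + [2500] * 2  # cows per farm

mean_size = sum(herds) / len(herds)

# Walk up the herds from smallest to largest until half the cows
# are accounted for; that herd's size is the midpoint.
total_cows = sum(herds)
running = 0
for size in sorted(herds):
    running += size
    if running >= total_cows / 2:
        midpoint = size
        break

print(f"Mean herd size:     {mean_size:.0f} cows")
print(f"Midpoint herd size: {midpoint} cows")
# A handful of very large herds leaves the midpoint far above the
# mean, the same pattern as the USDA numbers (144 vs. 900 in 2012).
```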

Perhaps unsurprisingly, the main driver behind this change is that larger dairy herds can produce milk at a lower average cost. Here's the pattern from MacDonald and Newton. They write: "While some small farms earn profits and some large farms incur losses, financial performance is linked to herd size. Most of the largest dairy farms generate gross returns that exceed full costs, while most small and mid-size dairy farms do not earn enough to cover full costs. Full costs include annualized costs of capital as well as the cost of unpaid family labor (measured as what they could earn off the farm), in addition to cash operating expenses. … In 2012, dairy farms with at least 2,000 cows incurred costs that were 16 percent lower, on average, than farms with 1,000-1,999 cows, a difference that could provide a spur to further structural change to even larger farms. In 1992, there were just 31 farms with 3,000 or more milk cows; by 2012, there were 440, and many of them had 5,000 or more cows."

Economies of scale in dairy farming are just one example of growing scale in U.S. agriculture as a whole. Daniel A. Sumner explores this topic in "American Farms Keep Growing: Size, Production, and Policy," in the Winter 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I've been Managing Editor of the JEP since the first issue in 1987. All JEP articles back to the first issue are freely available online courtesy of the American Economic Association.)

Sumner offers this image. The horizontal axis shows the proportion of farms, where you can think of farms as ranked by size from smallest in sales to largest. The vertical axis shows the proportion of agricultural output. Thus, the bottom 60% of farms represent about 5% of all agricultural revenue. The line for 1987 is highest, for 1997 is lower, and for 2007 is lowest, which shows that smaller farms are producing a lower share of total output over time.
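Sumner's figure is built like a Lorenz curve: rank farms by sales, then plot cumulative shares. A sketch with made-up sales figures that mimic the bottom-60-percent-of-farms, five-percent-of-output pattern:

```python
# Construct the cumulative-share curve from farm-level sales.
# The sales figures below are made up to mimic the pattern in the text.

sales = sorted([5] * 60 + [30] * 30 + [400] * 10)  # 100 farms, small to large
total = sum(sales)

cumulative = 0.0
for rank, s in enumerate(sales, start=1):
    cumulative += s
    if rank in (60, 90, 100):  # with 100 farms, rank = percentile
        print(f"bottom {rank}% of farms -> {cumulative / total:.1%} of output")
```

A flatter curve in later years, as in Sumner's 1987-to-2007 comparison, means the small-farm end of the distribution accounts for an ever-smaller slice of output.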

Sumner argues that farm subsidies can't explain this growing concentration, nor can contractual relationships in the agricultural chain of production. The size of farm operations is growing both in agricultural sectors with and without farm subsidies, and with and without a high dependence on contractual relationships. He argues that the most likely factor driving economies of scale in agriculture is the interrelationship between technological developments and good management: that is, being the kind of strong manager who can take advantage of new technology has been a continual and even a growing advantage in U.S. agriculture. Sumner concludes along these lines (citations omitted):

The size of commercial farms is sometimes best-measured by sales, in other cases by acreage, and in still other cases by quantity produced of specific commodities, but for many commodities, size has doubled and doubled again in a generation. That does not mean that typical commercial farm operations are becoming large by any nonfarm corporate standard, or that there is any near-term prospect that these large firms will be able to exercise market power. For example, even as the typical herd size of dairy farms rises from 500 cows to 1,000 to 2,000, there will remain thousands of commercial farms operating the national milk cow herd of eight or nine million cows. The few dairy farms with 10,000 cows are located in several units in distinct locations and remain a small share of the relevant national and international market into which they deliver. …

In some industries, such as intensive animal feeding, farms are often operated as franchises in which farms are connected closely with larger processing and marketing firms through contractual relationships. Many commodity industries have traditionally used contractual relationships between farms and processors or marketers to coordinate timing of shipments and commodity characteristics. For example, the processing tomato industry links growers and processors in annually negotiated contracts, and wineries work closely with contracted grape growers, often providing long-term guarantees to encourage vineyard development. Growth in farm size in these industries has occurred at roughly the same pace as for commodity industries with fewer contractual relationships. Economists do not yet have a good understanding of the relationships between contractual relationships, farm size patterns, and productivity, and this remains an area of active research.

Changes in farm size distributions and growth of farms seems closely related to technological innovations, managerial capability, and productivity. Opportunities for competitive returns from investing financial and human capital in farming hinge on applying managerial capability to an operation large enough to provide sufficient payoff. Farms with better managers grow, and these managers take better advantage of innovations in technology, which themselves require more technical and managerial sophistication. Farms now routinely use outside consultants for technological services such as animal health and nutrition, calibration and timing of fertilizers and pesticides, and accounting. The result is higher productivity, especially in reducing labor and land per unit of output. Under this scenario, agricultural research leads to technology that pays off most to more-capable managers who operate larger farms that have lower costs and higher productivity. The result is reinforcing productivity improvements.

Subsidy programs seem to be relatively unimportant in the evolution of farming in the United States. Farm sizes are growing, numbers of commercial farms are falling, and farm operations are transforming industries with and without commodity subsidies. In specific instances and for specific commodities, farm programs have affected the patterns of farm size and growth. 

MONIAC in Action!

It is part of the lore of economics that Alban William Housego Phillips, better known as Bill Phillips, and still better known as the originator of the Phillips curve (which posits a tradeoff between unemployment and inflation), started his career by building a hydraulic model of the economy called the MONIAC.

MONIAC stood for Monetary National Income Analogue Computer, a bit of wordplay on ENIAC, the Electronic Numerical Integrator and Computer, which had been announced in 1946 as the first general-purpose electronic computer. The MONIAC is a physical model of the economy in which flows of consumption, saving and investment, taxes and government spending, imports and exports, and other economic forces are represented by liquid moving through tubes and pipes. You can tinker with different elements of the economy and see what effects they have.
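Purely as a sketch of the idea, here is the same circular-flow logic in a few lines of code rather than water. This is my own toy model, not a description of the MONIAC's actual plumbing, and every parameter value is invented: income drains away into taxes, saving, and imports, is pumped back in as government spending, investment, and exports, and the level of income settles where the two flows balance.

```python
# A toy circular-flow model in the spirit of the MONIAC. Income "leaks"
# into taxes, saving, and imports, and is "injected" back as government
# spending, investment, and exports. All numbers are made up.

def simulate(income=100.0, steps=50,
             tax_rate=0.2, saving_rate=0.1, import_rate=0.15,
             gov_spending=20.0, investment=15.0, exports=10.0):
    for _ in range(steps):
        leakages = income * (tax_rate + saving_rate + import_rate)
        injections = gov_spending + investment + exports
        # Income adjusts toward the level where leakages equal injections.
        income += 0.5 * (injections - leakages)
    return income

# "Tinkering" with one element, as a MONIAC demonstration would: raise
# government spending and watch equilibrium income settle higher.
print(round(simulate(), 1))                   # baseline equilibrium: 100.0
print(round(simulate(gov_spending=29.0), 1))  # higher G -> income: 120.0
```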

What I had not known until running across this article by Klint Finley in the most recent issue of Wired magazine is that a Cambridge engineering professor, Allan McRobie, has restored a MONIAC. Moreover, McRobie offers a lively 45-minute demonstration of the MONIAC at work on video here. As he says at the start: "It is a fabulous pleasure to demonstrate this. It is a thing of wonder and joy, and I would give this talk to an empty room. It is a brilliant machine, and a privilege for me to work with it." If you have similar feelings about economics and economic models, you are likely to have similar feelings about his talk.

For more on the MONIAC, as well as how Irving Fisher also built a hydraulic model of an economy as part of his doctoral dissertation back in 1891, you can start with my post from a couple of years ago (November 12, 2012) on "Hydraulic Models of the Economy: Phillips, Fisher, Financial Plumbing." As I wrote there, after discussing the Phillips and Fisher hydraulic models:

The idea of a hydraulic computer seems anachronistic in these days of electronic computation, but I can imagine that, as an illustrative teaching tool, watching flows of liquid rebalance might be at least as useful as watching a professor sketch a supply and demand diagram. In addition, the notion of the economy as a hydraulic set of forces still has considerable rhetorical power. We talk about "liquidity" and "bubbles." The Federal Reserve publishes "Flow of Funds" accounts for the U.S. economy. When economists talk about the financial crisis of 2008 and 2009, they sometimes talk in terms of financial "plumbing." … I find myself wondering what a hydraulic model of an economy would look like if it also included bubbles, runs on financial institutions, and credit crunches, along with tubes that could break. Sounds messy, and potentially quite interesting.

Women, Mathematical Skills, Academia

Focus on the so-called STEM departments in academia: that is, science, technology, engineering, and mathematics. There is a fairly clear pattern that women are less well represented in the academic departments that rely on higher mathematical skills. The harder question is how to explain this phenomenon. Stephen J. Ceci, Donna K. Ginther, Shulamit Kahn, and Wendy M. Williams address these issues in their article "Women in Academic Science: A Changing Landscape," which appears in the December 2014 issue of Psychological Science in the Public Interest. Here's a taste of their conclusions:

We conclude by suggesting that although in the past, gender discrimination was an important cause of women’s underrepresentation in scientific academic careers, this claim has continued to be invoked after it has ceased being a valid cause of women’s underrepresentation in math-intensive fields. Consequently, current barriers to women’s full participation in mathematically intensive academic science fields are rooted in pre-college factors and the subsequent likelihood of majoring in these fields, and future research should focus on these barriers rather than misdirecting attention toward historical barriers that no longer account for women’s underrepresentation in academic science.

Here\’s an illustrative figure that helps to illustrate the starting point for Ceci, Ginther, Kahn, and Williams. Each of the points is a STEM department. The authors divide up their analysis into what they call the LPS departments, which are the three in the upper right with more females among those who get a PhD and relatively lower math scores on the GRE exam, and what they call the GEEMP departments, which are the departments to the bottom right with a smaller share of females among those who get a PhD in that field and higher average math scores on the GRE exam. These sorts of differences in PhDs granted to females are reflected in large gaps in the number of female professors in these fields.


Ginther and Kahn are economists, while Ceci and Williams are psychologists. Thus, the paper combines economics-style analysis of career development patterns with psychology-style analysis of how people learn. For economists, a standard approach is to look at the "pipeline" for producing tenured professors. For example, you can look at how many females major in STEM subjects in college, what proportion go on to a PhD, and how many then move into various academic jobs. The idea is to see where there is "leakage" in the pipeline, and thus identify the barriers to women becoming professors. When they carry out this analysis, the authors offer a (to me) surprising conclusion: the LPS fields have relatively substantial "leakage" when comparing how females and males move from undergrad majors to grad school and professorships. But in the GEEMP areas, which include economics, women and men in recent years proceed from undergrad degrees through grad school and into professorships at similar rates. After reviewing a range of evidence, they write:

Thus, the points of leakage from the STEM pipeline depend on the broad discipline being entered—LPS or GEEMP. By graduation from college, women are overrepresented in LPS majors but far underrepresented in GEEMP fields. In GEEMP fields, by 2011, there was very little difference in women’s and men’s likelihood to advance from a baccalaureate degree to a PhD and then, in turn, to advance to a tenure-track assistant professorship. … [O]nce women are within GEEMP fields, their progress resembles that of male GEEMP majors. In contrast, whereas far more women than men major in LPS fields, in 2011, the gender difference in the probability of advancing from an LPS baccalaureate degree to a PhD was not trivial, and the gap in the probability of advancing from PhD to assistant professorship was particularly large, with fewer women than men advancing.

The message is that the most substantial barriers to women in economics and other GEEMP fields arise before college. Why might this be so? One set of explanations focuses on the high scores for boys on a wide range of math tests. The other set focuses on social expectations about interests and careers. Of course, these explanations become entangled, because acquiring skills is interrelated with social expectations.

With regard to higher math scores for boys, the paper reviews evidence on how in utero exposure to androgen hormones is greater for boys, and how certain math-related abilities (like 3D spatial processing) appear to be greater for boys at young ages. I'll skip past that evidence here because: i) as the authors note, it's far from definitive; and ii) I lack any particular competence to evaluate it anyway. Instead, let me stick to several points that seem well established.

In terms of the basic data from math scores themselves, it used to be true that math test scores for boys were higher than those for girls, but on average, high school girls have now caught up. The authors note: "However, by the beginning of the 21st century, girls had reached parity with boys—including on the hardest problems on the National Assessment of Educational Progress (NAEP) for high school students." It also seems true that at the top of the distribution of math test scores, boys substantially outnumber girls: "Thus, a number of very-large-scale analyses converged on the conclusion that there are sizable sex differences at the right tail of the math distribution." One of many studies they discuss looked at the "Programme for International Student Assessment data set for the 33 countries that provided data in all waves from 2000 to 2009. They, too, found large sex differences at the right tail: 1.7:1 to 1.9:1 favoring males at the top 5% and 2.3:1 to 2.7:1 favoring males at the top 1%."

There is an ongoing nature-vs.-nurture argument about how to interpret these higher math scores at the top. Not only have gender differences in math scores changed over time, but they also "vary by cohort, nation, within-national ethnic groups, and the form of test used. … Moreover, mathematics is heterogeneous, comprising many different cognitive skills …" At a minimum, these patterns suggest that gender gaps in test scores are quite sensitive to environmental factors. For example, in Iceland, Singapore, and Indonesia, more girls than boys scored at the top 1% of math tests at certain ages.

Some of the evidence the authors cite on the importance of social environment in affecting math scores comes from a Spring 2010 symposium in the Journal of Economic Perspectives on "Tests and Gender." (Full disclosure: I've been Managing Editor of JEP since its first issue in 1987. All JEP articles back to the first issue are freely available online at the journal's website.)

For example, in that issue of JEP, Devin G. Pope and Justin R. Sydnor look at "Geographic Variation in the Gender Differences in Test Scores" across U.S. states and regions. Here's an illustrative finding based on scores from 8th graders on the National Assessment of Educational Progress (NAEP). The vertical axis shows that in every region, the female-male ratio in the top 5% of reading scores is greater than 2, almost reaching 3 in the Mountain states. The horizontal axis shows the male-female ratio in the top 5% of math and science scores, which ranges from 1.3 in the New England states to 2.2 in the Middle Atlantic states. This finding confirms that there is a difference in math test scores at the extreme. It also strongly suggests that such differences are affected by where you live, and thus are linked to social expectations.

In another paper in the 2010 JEP symposium, Glenn Ellison and Ashley Swanson look at "The Gender Gap in Secondary School Mathematics at High Achievement Levels: Evidence from the American Mathematics Competitions." In a striking finding, they note that most U.S. high school girls who participate in international math competitions come from a very small pool of about 20 high schools. This strongly suggests that many other girls, if they were in a different academic setting, would demonstrate high-end math skills. Ellison and Swanson write:

[W]e examine extreme high-achieving students chosen to represent their countries in international competitions. Here, our most striking finding is that the highest-scoring boys and the highest-scoring girls in the United States appear to be drawn from very different pools. Whereas the boys come from a variety of backgrounds, the top-scoring girls are almost exclusively drawn from a remarkably small set of super-elite schools: as many girls come from the 20 schools that generally do best on these contests as from all other high schools in the United States combined. This suggests that almost all American girls with extreme mathematical ability are not developing their mathematical talents to the degree necessary to reach the extreme top percentiles of these contests.

Finally, there is intriguing evidence that a number of women with equivalent math skills may not perform as well in the context of competitive and high-stakes math testing. In the 2010 JEP symposium, Muriel Niederle and Lise Vesterlund look at a range of evidence on "Explaining the Gender Gap in Math Test Scores: The Role of Competition." I was especially struck by this study:

They examine the performance of women and men in an entry exam to a very selective French business school (HEC) to determine whether the observed gender differences in test scores reflect differential responses to competitive environments rather than differences in skills. The entry exam is very competitive: only about 13 percent of candidates are accepted. Comparing scores from this exam reveals that the performance distribution for males has a higher mean and fatter tails than that for females. This gender gap in performance is then compared both to the outcome of the national high school graduation exam, and for admitted students, to their performance in the first year. While both of these performances are measured in stressful environments, they are much less competitive than the entry exam. The performance of women is found to dominate that of men, both on the high school exam and during the first year at the business school. Of particular interest is that females from the same cohort of candidates performed significantly better than males on the national high school graduation exam two years prior to sitting for the admission exam. Furthermore, among those admitted to the program they find that within the first year of the M.Sc. program, females outperform males.

A possible explanation here is a well-known phenomenon called "stereotype threat": people who are reminded, just before a test, of a negative stereotype about a group to which they belong often perform worse. Here's one study that Ceci, Ginther, Kahn, and Williams cite along these lines: "For example, female test takers who marked the gender box after completing the SAT Advanced Calculus test scored higher than female peers who checked the gender box before starting the test, and this seemingly inconsequential order effect has been estimated to result in as many as 4,700 extra females being eligible to start college with advanced credit for calculus had they not been asked to think about their gender before completing the test …"

To recap the argument to this point, the basic question is why women are underrepresented in certain STEM disciplines where math scores are higher. For current students, the main underlying reasons seem to trace back to the choices that college students make about undergraduate majors. In turn, a possible explanation is that more males than females get high scores on pre-college math tests. And in turn, a substantial part of that difference seems to trace to social expectations about gender and math, and about gender and test-taking. If more women felt more positive about math before reaching college, then the number of women majoring in GEEMP areas would presumably rise.

But there is also a different set of arguments about why fewer women sign up for the GEEMP disciplines as undergraduates, which suggests that the whole issue of math test scores may be a distraction. For example, it's not clear how much the gender difference in math scores at the extreme top end should matter for academia. As the authors point out, the typical GRE math score for those in the math-oriented GEEMP fields is at about the 75th percentile, not the top 1%. Another intriguing fact is that women have now been receiving 40-45% of math PhDs for the last few decades. This alternative view focuses less on math skills and more on perceptions about self and occupation. The Ceci, Ginther, Kahn, and Williams team points out (some citations omitted):

Psychologists have charted large sex differences in occupational interests, with women preferring so-called “people-oriented” (or “organic,” or natural science) fields and men preferring “things” (people- and thing-oriented individuals are also termed “empathizers” and “systematizers,” respectively). This people-versus-things construct … is one of the salient dimensions running through vocational interests; it also represents a difference of 1 standard deviation between men and women in vocational interests. Lippa has repeatedly documented very large sex differences in occupational interests, including in transnational surveys, with men more interested in “thing”-oriented activities and occupations, such as engineering and mechanics, and women more interested in people-oriented occupations, such as nursing, counseling, and elementary school teaching. And in a very extensive meta-analysis of over half a million people, Su, Rounds, and Armstrong (2009) reported a sex difference on this dimension of a full standard deviation.

In other words, the reason that fewer women choose the GEEMP disciplines as undergraduates, and thus the reason that women are underrepresented as faculty in those areas, may be less related to math skills and more related to this distinction between people-oriented and thing-oriented interests.

In the context of economics, it seems to me true, and also deeply frustrating, that this distinction does capture something about how the field is perceived. Economics is the stuff of life: full of choices that people make about work, consumption, saving, parenthood, and crime, as well as about the structure and decisions of organizations like firms and government that affect people's daily lives in profound ways. But the perception that many students have of economics, which is sometimes unfortunately confirmed by how the subject is taught, can lose track of the people, instead viewing the economy as a thing.

How Did Germany Limit Unemployment in the Recession?

Here\’s a puzzle: During the Great Recession, the total contraction in economic output was noticeably larger in Germany than in the United States, but the rise in the unemployment rate was noticeably higher in the United States than in Germany. How did Germany manage it? Shigeru Fujita and Hermann Gartner offer \”A Closer Look at the German Labor Market ‘Miracle’\” in the most recent issue of the Business Review published by the Federal Reserve Bank of Philadelphia, (Q4, 2014, pp. 16-24). 

Let\’s start by stating the puzzle clearly. The top figure shows the change in  unemployment rates for the U.S. and Germany during the recession. The bottom figure shows the fall in real output in each economy.

The authors consider two main alternative explanations for this puzzle, and at least from a U.S. perspective, they come from different ends of the political spectrum. One possible explanation is that German unemployment stayed relatively low because of government programs, like the short-time work program, that help firms adjust to shorter hours without firing employees. The other is that German unemployment stayed relatively low because of earlier labor market reforms that reduced unemployment benefits and kept wages and benefits lower and more flexible, which in turn encouraged a growth of jobs. Fujita and Gartner argue that the second explanation is more plausible.

Germany does have several government programs that encourage firms to reduce hours when business slows down, rather than fire employees. But Fujita and Gartner argue that these programs have existed in past recessions, and they didn't seem to have any particularly large effect in the most recent recession. They write:

One is the short-time work program. When employees’ hours are reduced, the participating firm pays wages only for those reduced hours, while the government pays the workers a “short-time allowance” that offsets 60 percent to 67 percent of the forgone earnings. Moreover, the firm’s social insurance contributions on behalf of employees in the program are lowered. In general, a firm can use this program for at most six months. At the beginning of 2009, though, when the slowdown of the economy became apparent, the German government encouraged the use of the program by expanding the maximum eligibility period first to 18 months and then to 24 months and by further reducing the social security contribution rate. The usual eligibility requirements were also relaxed.

An important thing to remember here is that these special rules had also been applied in past recessions and thus were not so special after all. True, the share of workers in the program increased sharply in 2009, and thus it certainly helped reduce the impact of the Great Recession on German employment. But a more important observation is that even at its peak during the Great Recession, participation in the program was not extraordinary compared with the levels observed in past recessions. Moreover, in previous recessions, the German labor market had responded in a similar manner to the U.S. labor market. 

Another German program that some have credited with staving off high unemployment is the working-time account, which allows employers to increase working hours beyond the standard workweek without immediately paying overtime. Instead, those excess hours are recorded in the working-time account as a surplus. When employers face the need to cut employees’ hours in the future, they can do so without reducing workers’ take-home pay by tapping the surplus account. German firms overall came into the recession with surpluses in these accounts. Thus, qualitatively speaking, this program certainly reduced the need for layoffs. However, less than half of German workers had such an account, and most working-time accounts need to be paid out within a relatively short period — usually within a year or less. According to Michael Burda and Jennifer Hunt, the working-time account program reduced hours per worker by 0.5 percent in 2008-09, accounting for 17 percent of the total decline in hours per worker in that period.
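To make the arithmetic of the short-time allowance described above concrete, here is a minimal sketch of what the program implies for a worker's monthly income. The function name, the example wage, and the 50 percent hours cut are all hypothetical; the 60 percent replacement rate is the lower bound quoted in the passage.

```python
# Illustrative sketch of the short-time allowance arithmetic, using the
# 60 percent replacement rate quoted above. All other names and numbers
# are hypothetical.

def short_time_pay(full_monthly_wage, hours_fraction, replacement_rate=0.60):
    """Worker's total monthly income when hours are cut to `hours_fraction`
    of normal: the firm pays wages only for hours actually worked, and the
    government offsets part of the forgone earnings."""
    wage_for_hours_worked = full_monthly_wage * hours_fraction
    forgone_earnings = full_monthly_wage - wage_for_hours_worked
    allowance = replacement_rate * forgone_earnings
    return wage_for_hours_worked + allowance

# Example: a worker normally earning 3,000 euros/month, cut to half time.
# The firm pays 1,500 euros; the allowance adds 0.60 * 1,500 = 900 euros,
# so the worker keeps 2,400 euros, i.e. 80 percent of the normal wage for
# 50 percent of the normal hours.
print(short_time_pay(3000, 0.5))  # 2400.0
```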

To understand the allure of the alternative explanation, consider this graph showing the German employment rate in recent decades. Notice that after around 2003, German employment starts steadily rising, and that trend shows only a hiccup during the Great Recession.
What caused German employment to start rising around 2003? 

We argue that the underlying upward trend was made possible by labor market policies called the Hartz reforms, implemented in 2003-05. … The Hartz reforms are regarded as one of the most important social reforms in modern Germany. The most important change was in the unemployment benefit system. Before the reforms, when workers became jobless, they were eligible to receive benefits equal to 60 percent to 67 percent of their previous wages for 12 to 32 months, depending on their age. When these benefits ended, unemployed workers were eligible to receive 53 percent to 57 percent of their previous wages for an unlimited period. Starting in 2005, the entitlement period was reduced to 12 months (or 18 months for those over age 54), after which recipients could receive only subsistence payments that depended on their other assets or income sources. Moreover, unemployed workers who refused reasonable job offers faced greater and more frequent sanctions such as cuts in benefits. To further lower labor costs and spur job creation, the size of firms whose employees are covered by unemployment insurance was raised from five to 10 workers. Also, regulation of temporary contract workers was relaxed. Furthermore, starting in 2004, the German Federal Employment Agency and the local employment agencies were reorganized with a stronger focus on returning the unemployed to work and by, for example, outsourcing job placement services to the private sector.
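To see how sharply the reform changed the position of a long-term unemployed worker, here is a stylized sketch of the benefit schedules described in the passage above. The previous wage and the subsistence amount are hypothetical; the sketch uses the lower bounds of the quoted replacement-rate ranges and the shortest pre-reform entitlement period.

```python
# Stylized comparison of German unemployment benefits before and after the
# Hartz reforms, using the lower-bound rates quoted above: 60 percent
# initially, then 53 percent indefinitely, before the reform; 60 percent
# for 12 months, then only subsistence payments, after it. The previous
# wage and the subsistence amount are hypothetical.

def monthly_benefit(month, previous_wage, reformed, subsistence=400.0):
    if not reformed:
        # Pre-reform: 60% of the previous wage for (at least) 12 months,
        # then 53% for an unlimited period.
        return 0.60 * previous_wage if month <= 12 else 0.53 * previous_wage
    # Post-reform: 60% for 12 months, then a flat subsistence payment
    # (which in practice also depended on other assets and income).
    return 0.60 * previous_wage if month <= 12 else subsistence

wage = 2500.0
for month in (6, 24):
    before = monthly_benefit(month, wage, reformed=False)
    after = monthly_benefit(month, wage, reformed=True)
    print(f"month {month}: pre-reform {before:.0f}, post-reform {after:.0f}")
# month 6:  pre-reform 1500, post-reform 1500
# month 24: pre-reform 1325, post-reform 400
```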

An earlier post from February 14, 2014, "A German Employment Miracle Narrative," argues that the flexibility of German wages and labor market institutions starting in the mid-1990s began the rise in German employment. In that telling, the Hartz reforms take on less importance, but the emphasis is still on greater flexibility in markets, not on government programs for sharing hours. Fujita and Gartner make a similar point: "In other words, in the boom leading up to the Great Recession, wage growth was much more muted than during previous booms, and thus this wage moderation was an important factor in creating the upward trend in employment."
A final point from Fujita and Gartner is that the comparison between the U.S. and Germany isn't apples-to-apples, because the underlying causes of the recessions were different. Germany didn't have a housing bubble; instead, it had an export bust. The nature of the downturn, and the incentives for laying off workers, may be rather different in these two kinds of recessions. They write:

The recession in Germany was brought about by a different shock than that which triggered the recession in the U.S. The U.S. economy suffered a decline in domestic demand as the plunge in home values reduced households’ net wealth, whereas Germany had experienced no housing bubble. Instead, the decline in German output was driven by a short-term plunge in world trade. Whether a recession is expected to be short or long-lasting is an important factor in firms’ hiring and firing decisions. If a firm expects a downturn to last only a short period, it may well choose not to cut its work force, even though it faces lower demand, especially if laying off and hiring workers is costly, as it is in Germany. Consistent with this possibility, Burda and Hunt point out anecdotal evidence that, especially by 2009, German firms were reluctant to lay off their workers because of the difficulty in finding suitable replacements.

Of course, the argument that German unemployment didn't rise as much because of reductions in unemployment benefits, low wage growth, and flexible labor markets doesn't prove that German innovations like the short-time allowance or working-time accounts are a bad idea. They may still be moderately helpful. But it doesn't look like they are the main explanation for Germany's success in limiting the rise in unemployment during and after the recession.