Everyone knows that the Great Recession was tangled up with a housing boom that went bust. But more precisely, what was different about housing in the most recent business cycle? Burcu Eyigungor discusses "Housing's Role in the Slow Recovery" in the Spring 2016 issue of Economic Insights, published by the Federal Reserve Bank of Philadelphia.
As a starting point, here's private residential fixed investment–basically, spending on home and apartment construction and major renovations–as a share of GDP going back to 1947. Notice that this category of investment falls during every recession (shown by the shaded areas) and then usually starts bouncing back just before the end of the recession–except for the period after 2009.
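For concreteness, the shares plotted in such a figure are just the ratio of nominal residential investment to nominal GDP. Here is a minimal Python sketch; the dollar figures are hypothetical round numbers chosen only to reproduce the shares discussed in the article, not actual BEA data:

```python
# Illustrative calculation of residential fixed investment as a share of GDP.
# The dollar figures below are hypothetical, picked to roughly match the
# shares cited in the text; real data would come from the BEA national accounts.

def share_of_gdp(investment: float, gdp: float) -> float:
    """Return investment as a percentage of GDP."""
    return 100.0 * investment / gdp

# year: (residential investment, GDP), in billions of nominal dollars (illustrative)
examples = {
    1991: (215, 6174),   # near the 1991 recession
    2005: (872, 13094),  # near the 2005 peak
}

for year, (resid, gdp) in sorted(examples.items()):
    print(f"{year}: {share_of_gdp(resid, gdp):.1f}% of GDP")
```

Run as written, this prints shares of about 3.5% for 1991 and 6.7% for 2005, matching the boom described below.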
The most recent residential building cycle looks different. Eyigungor explains:
The housing boom from 1991 to 2005 was the longest uninterrupted expansion of home construction as a share of overall economic output since 1947 (Figure 1). During the 1991 recession, private home construction had constituted 3.5 percent of GDP, and it increased its share of GDP without any major interruptions to 6.7 percent in 2005. This share was the highest it had been since the 1950s. Just like the boom, the bust that followed was also different from earlier episodes. During the bust, private residential investment as a share of GDP fell to levels not seen since 1947 and has stayed low even after the end of the recession in 2009. In previous recessions, the decline in residential construction was not only much less severe, but the recovery in housing also led the recovery in GDP. As Federal Reserve Chair Janet Yellen has pointed out, in the first three years of this past recovery, homebuilding contributed almost zero to GDP growth.
There are two possible categories of reasons for the very low level of residential building since 2009. On the supply side, it may not seem profitable to build, given the stock of housing already built before 2008 and lower prices. On the demand side, one aftermath of the Great Recession could plausibly be that at least some people are feeling economically shaky and mistrustful of real estate markets, and so are not eager to buy.
Both supply and demand presumably played some role. But housing prices have now been rising again for about three years, and the "vacancy" rates for owner-occupied housing and rental housing are back to the levels from before the Great Recession. In that sense, it doesn't look as if an overhang of empty dwellings or especially low prices are the big problem for the housing market. Instead, Eyigungor argues that the demand side is what is holding the market back.
In particular, the demand for housing is tied up with the rate of "household formation"–that is, the number of people who are starting new households. The level of household formation was low for years after 2009 (and remember that these low levels are in the context of a larger population than several decades ago, so the rate of household formation would be lower still).
The rates of homeownership have now declined back to levels from the 1980s, and the share of renters has risen. "This decline has lowered overall housing expenditures, because homeowners on average spend more on housing than renters do because of the tax incentives of homeownership and holding a mortgage. Together, the declines in household formation and homeownership contributed to the decline in residential expenditures as a share of GDP."
The first post on this blog went up five years ago, on May 17, 2011: the first three posts are here, here, and here. When it comes to wedding anniversaries, the good folks at Hallmark inform me that a five-year anniversary is traditionally "wood." But I suspect that blogs age faster than marriages. How much faster? There's an old and probably unreliable saying that a human year is seven dog-years. But when it comes to blogging, mayfly-years may be a more appropriate metric. The mayfly typically lives for only one day, maybe two. I've put up over 1,300 posts in the last five years, probably averaging roughly 1,000 words in length. Dynasties of mayflies have risen and fallen during the life of this blog.
Writing a blog 4-5 times each week teaches you some things about yourself. I've always been fascinated by the old-time newspaper columnists who churned out 5-6 columns every week, and I wondered if I could do that. In the last five years, I've shown myself that I could. The discipline of writing the blog has been good for me, pushing me to track down and read reports and articles that might otherwise have just flashed across my personal radar screen before disappearing. I've used the blog as a memory aid, so that when I dimly recall having seen a cool graph or read a good report on some subject, I can find it again by searching the blog–which is a lot easier than searching my office, or my hard drive, or my brain. My job and work life bring me into contact with all sorts of material that might be of interest to others, and it feels like a useful civic or perhaps even spiritual discipline to shoulder the task of passing such things along.
It's also true that writing a regular blog embodies some less attractive traits: a compulsive need to broadcast one's views; an obsession about not letting a few days or a week go by without posting; an egoistic belief that anyone else should care; a need for attention; and a desire to shirk other work. Ah, well. Whenever I learn more about myself, the lesson includes a dose of humility.
The hardest tradeoff in writing this blog is finding windows of time in the interstices of my other work and life commitments, and the related concern that by living in mayfly years, I\’m not spending that time taking a deeper dive into thinking and writing that would turn into essays or books.
In a book published last year, Merton and Waugh: A Monk, A Crusty Old Man, and The Seven Storey Mountain, Mary Frances Coady describes the correspondence between Thomas Merton and Evelyn Waugh in the late 1940s and early 1950s. Merton was a Trappist monk who was writing his autobiographical book The Seven Storey Mountain. (Famous opening sentence: "On the last day of January 1915, under the sign of the Water Bearer, in a year of a great war, and down in the shadow of some French mountains on the borders of Spain, I came into the world.") Waugh was already well-known, having published Brideshead Revisited a few years earlier. Merton's publisher sent the manuscript to Waugh for evaluation, and Waugh both offered Merton some comments and also ended up as the editor of the English edition.
Waugh sent Merton a copy of a book called The Reader Over Your Shoulder, by Robert Graves and Alan Hodge, one of those lovely short quirky books of advice to writers that I think is now out of print. Here's a snippet from one of the early letters from Waugh to Merton:
With regard to style, it is of course much more laborious to write briefly. Americans, I am sure you will agree, tend to be very long-winded in conversation and your method is conversational. I relish the laconic. … I fiddle away rewriting any sentence six times mostly out of vanity. I don't want anything to appear with my name that is not the best I am capable of. You have clearly adopted the opposite opinion … banging away at your typewriter on whatever turns up. …
But you say that one of the motives of your work is to raise money for your house. Well simply as a matter of prudence you are not going the best way about it. In the mere economics of the thing, a better return for labour results in making a few things really well than in making a great number carelessly. You are plainly undertaking far too many trivial tasks for small returns. …
Your superiors, you say, leave you to your own judgment in your literary work. Why not seek to perfect it and leave mass-production alone? Never send off any piece of writing the moment it is finished. Put it aside. Take on something else. Go back to it a month later and re-read it. Examine each sentence and ask "Does this say precisely what I mean? Is it capable of misunderstanding? Have I used a cliche where I could have invented a new and therefore arresting and memorable form? Have I repeated myself and wobbled around the point when I could have fixed the whole thing in six rightly chosen words? Am I using words in their basic meaning or in a loose plebeian way?" … The English language is incomparably rich and can convey every thought accurately and elegantly. The better the writing the less abstruse it is. … Alas, all this is painfully didactic–but you did ask for advice–there it is.
In all seriousness, this kind of advice makes my heart hurt in my chest. Take the extra time to write briefly? Rewrite sentences six times? Put things away for a month and return to them? Bang away at the keyboard on whatever turns up? Far too many trivial tasks for small returns? Wobble around the point instead of hunting for six well-chosen words? Many of these blog posts are knocked out an hour before bedtime, and I often don't reread even once before clicking on "Publish."
Here are some snippets of Merton's response to Waugh:
I cannot tell you how truly happy I am with your letter and the book you sent. In case you think I am exaggerating I can assure you that in a contemplative monastery where people are supposed to see things clearly it sometimes becomes very difficult to see anything straight. It is so terribly easy to get yourself into some kind of a rut in which you distort every issue with your own blind bad habits–for instance rushing to finish a chapter before the bell rings and you will have to go and do something else.
It has been quite humiliating for me to find out (from Graves and Hodge) that my own bad habits are the same as those of every other second-rate writer outside the monastery. The same haste, distraction, etc. … On the whole I think my haste is just as immoral as anyone else's and comes from the same selfish desire to get quick results with a small amount of effort. In the end, the whole question is largely an ascetic one! …
Really I like The Reader Over Your Shoulder very much. In the first place it is amusing. And I like their thesis that we are heading toward a clean, clear kind of prose. Really everything in my nature–and in my vocation, too–demands something like that if I am to go on writing. … You would be shocked to know how much material and spiritual junk can accumulate in the corners of a monastery and in the minds of the monks. You ought to see the pigsty in which I am writing this letter. There are two big crates of some unidentified printed work the monastery wants to sell. About a thousand odd copies of ancient magazines that ought to have been sent to the Little Sisters of the Poor, a dozen atrocious looking armchairs and piano stools that are used in the sanctuary for Pontifical Masses and stored on the back of my neck the rest of the time. Finally I am myself embedded in a small skyscraper of mixed books and magazines in which all kinds of surreal stuff is sitting on top of theology. …
I shall try to keep out of useless small projects that do nothing but cause a distraction and dilute the quality of what I turn out. The big trouble is that in those two hours a day when I get at a typewriter I am always having to do odd jobs and errands and I am getting a lot of letters from strangers, too. These I hope to take care of with a printed slip telling them politely to lay off the poor monk, let the guy pray.
I find myself oddly comforted by the thought that a monastery may be just as cluttered, physically and metaphysically, as an academic office. But I'm not sure what ultimate lessons to take away from these five-year anniversary thoughts. I don't plan to give up the blog, but it would probably be a good idea if I can find the discipline to shift along the quality-quantity tradeoff. Maybe trend toward 3-4 posts per week, instead of 4-5. Look for opportunities to write shorter, rather than longer. Avoid the trivial. Try to free up some time and see what I might be able to accomplish on some alternative writing projects. I know, I know, it's like I'm making New Year's resolutions in May. But every now and again, it seems appropriate to share some thoughts about this blogging experience. Tomorrow the blog will return to its regularly scheduled economics programming.
Homage: I ran into part of the Waugh quotation from above in the "Notable & Quotable" feature of the Wall Street Journal on May 3, 2016, which encouraged me to track down the book.
A major technological innovation may be arriving in a very old industry: the production of meat. Instead of being produced by growing animals, meat can be grown directly. The process has been happening in laboratories, but some are looking ahead to large-scale production of meat in "carneries."
This technology has a number of implications, but here, I'll focus on some recent research on how a shift away from conventionally produced meat to cultured or in vitro meat production could help the environment. Carolyn S. Mattick, Amy E. Landis, Braden R. Allenby, and Nicholas J. Genovese tackle this question in "Anticipatory Life Cycle Analysis of In Vitro Biomass Cultivation for Cultured Meat Production in the United States," published last September in Environmental Science & Technology (2015, v. 49, pp. 11941-11949). One of the implications of their work is that factory-style meat production may produce real environmental gains for beef, but perhaps not for other meats.
Another complication is that not all production of vegetables has a lower environmental impact than, say, poultry or fresh fish. Michelle S. Tom, Paul S. Fischbeck, and Chris T. Hendrickson provide some evidence on this point in their paper, "Energy use, blue water footprint, and greenhouse gas emissions for current food consumption patterns and dietary recommendations in the US," published in Environment Systems and Decisions in March 2016 (36:1, pp. 92-103).
As background, one of the first examples of the new meat production technology happened back in 2001, when a team led by bioengineer Morris Benjaminson cut small chunks of muscle from goldfish, and then immersed the chunks in a liquid extracted from the blood of unborn calves that scientists use for growing cells in the lab. The New Scientist described the results this way in 2002:
"After a week in the vat, the fish chunks had grown by 14 per cent, Benjaminson and his team found. To get some idea whether the new muscle tissue would make acceptable food, they washed it and gave it a quick dip in olive oil flavoured with lemon, garlic and pepper. Then they fried it and showed it to colleagues from other departments. "We wanted to make sure it'd pass for something you could buy in the supermarket," he says. The results look promising, on the surface at least. "They said it looked like fish and smelled like fish, but they didn't go as far as tasting it," says Benjaminson. They weren't allowed to in any case–Benjaminson will first have to get approval from the US Food and Drug Administration."
Mattick, Landis, Allenby, and Genovese conduct an evaluation of environmental effects over the full life cycle of production: for example, this means including the environmental effects of agricultural products used to feed livestock. They compare existing studies of the environmental effects of traditional production of beef, pork, and poultry with a 2011 study of the environmental effects of in vitro meat production and with their own study. (The 2011 study of in vitro meat production is "Environmental Impacts of Cultured Meat Production," by Hanna L. Tuomisto and M. Joost Teixeira de Mattos, appearing in Environmental Science & Technology, 2011, 45, pp. 6117-6123). They summarize the results of their analysis along four dimensions: industrial energy use, global warming potential, eutrophication potential (that is, addition of chemical nutrients like nitrogen and phosphorus to the ecosystem), and land use.
Here's the summary of industrial energy use, which they view as definitely higher for in vitro meat than for pork and poultry, and likely to be higher for beef. They explain:
"These energy dynamics may be better understood through the analogy of the Industrial Revolution: Just as automobiles and tractors burning fossil fuels replaced the external work done by horses eating hay, in vitro biomass cultivation may similarly substitute industrial processes for the internal, biological work done by animal physiologies. That is, meat production in animals is made possible by internal biological functions (temperature regulation, digestion, oxygenation, nutrient distribution, disease prevention, etc.) fueled by agricultural energy inputs (feed). Producing meat in a bioreactor could mean that these same functions will be performed at the expense of industrial energy, rather than biotic energy. As such, in vitro biomass cultivation could be viewed as a renewed wave of industrialization."
With regard to global warming potential, in vitro production of meat is estimated to be lower than beef, but higher than poultry and pork.
The other two dimensions are eutrophication and land use. Eutrophication basically involves effects of fertilizer use, which for traditional meat production involves both agricultural production and disposal of animal waste products. The environmental effects of in vitro meat production are quite low here, as is the effect of in vitro meat production on land use.
Of course, these estimates are hypothetical. No factory-scale production of cultured meat exists yet. But if the "carnery" does become a new industry in the next decade or so, these kinds of tradeoffs will be part of the picture.
As I noted above, it jumps out from these figures that traditional production of beef has a much more substantial environmental footprint than production of poultry or pork. In their paper, Tom, Fischbeck, and Hendrickson take on a slightly different question: what is the environmental impact of some alternative diet scenarios: specifically, fewer calories with the same mixture of foods, or the same calories with an alternative mixture of foods recommended by the US Department of Agriculture, or both lower calories and the alternative diet. The USDA-recommended diet involves less sugar, fat, and meat, and more fruits, vegetables, and dairy. But counterintuitively (at least for me), they find that the reduced-calorie, altered-diet choice has larger environmental effects than current dietary choices. They write:
However, when considering both Caloric reduction and a dietary shift to the USDA recommended food mix, average energy use increases 38%, average blue water footprint increases 10%, and average GHG [greenhouse gas] emissions increase 6%.
Why does a shift away from meat and toward fruits and vegetables create larger environmental effects? The authors do a detailed breakdown of the environmental costs of various foods along their three dimensions of energy use, blue water footprint, and greenhouse gas emissions. Here's an overall chart. An overall message is that while meat (excluding poultry) is at the top on greenhouse gas emissions, when it comes to energy use and blue water footprint, meat is lower than fruits and vegetables.
As the authors write: "[T]his study's results demonstrate how the environmental benefits of reduced meat consumption may be offset by increased consumption of other relatively high impact foods, thereby challenging the notion that reducing meat consumption automatically reduces the environmental footprints of one's diet. As our results show food consumption behaviors are more complex, and the outcomes more nuanced." For a close-up illustration of the theme, here's a chart from Peter Whoriskey at the Washington Post Wonkblog, created based on supplementary materials from the Tom, Fischbeck, and Hendrickson paper. A striking finding is that on the dimension of greenhouse gas emissions, beef is similar to lettuce. The greenhouse gas emissions associated with production of poultry are considerably lower than for yogurt, mushrooms, or bell peppers.
Again, the environmental costs of beef in particular are high. If cultured meat could replace production of beef in a substantial way, it might bring overall environmental gains. But making defensible statements about diet and the environment seems to require some nuance. Lumping beef, pork, poultry, shellfish, and other fish all into one category called "meat" covers up some big differences, as does lumping all fruits into a single category or all vegetables into a single category.
You've seen those maps where the size of countries or states is distorted larger or smaller according to their population? Here's a US map in which the size of each county is adjusted by the value of the economic output produced in that county. It's one of those useful maps where you see the world in a different way, by Max Galka at the Metrocosm website.
It's also interesting just to look at how the 10 largest US urban areas appear at the end of the animation.
The Summer Olympics starts in Rio de Janeiro in August. From the financial point of view of the host city, it's very likely to be a money-losing proposition–just like almost all the other Olympic games in recent decades. Robert A. Baade and Victor A. Matheson describe the potential benefits and certain costs facing cities that host an Olympic Games in "Going for the Gold: The Economics of the Olympics," in the Spring 2016 issue of the Journal of Economic Perspectives. They write:
"In this paper, we explore the costs and benefits of hosting the Olympic Games. On the cost side, there are three major categories: general infrastructure such as transportation and housing to accommodate athletes and fans; specific sports infrastructure required for competition venues; and operational costs, including general administration as well as the opening and closing ceremony and security. Three major categories of benefits also exist: the short-run benefits of tourist spending during the Games; the long-run benefits or the "Olympic legacy" which might include improvements in infrastructure and increased trade, foreign investment, or tourism after the Games; and intangible benefits such as the "feel-good effect" or civic pride. Each of these costs and benefits will be addressed in turn, but the overwhelming conclusion is that in most cases the Olympics are a money-losing proposition for host cities; they result in positive net benefits only under very specific and unusual circumstances. Furthermore, the cost-benefit proposition is worse for cities in developing countries than for those in the industrialized world."
Consider the costs in a bit more detail. "The International Olympic Committee requires that the host city for the Summer Games have a minimum of 40,000 hotel rooms available for spectators and an Olympic Village capable of housing 15,000 athletes and officials." For example, Rio had 25,000 hotel rooms, and thus committed to build an additional 15,000. Not surprisingly, lots of hotels typically go broke after a Games. Next come the facilities themselves:
"Even modern cities in high-income countries may need to build or expand an existing velodrome, natatorium, ski-jumping complex, or speed skating oval. Furthermore, modern football and soccer stadiums are generally incompatible with a full-size Olympic track, because including space for such a track would cause an undesirably large separation between the fans and the playing field. For this reason, Boston's failed bid to host the 2024 Summer Games had proposed $400 million to build an entirely new stadium for the track and field events, despite the presence of four large existing outdoor sports stadiums in the area."
Then add the costs of running the Games themselves. Security costs alone for a Summer Olympics have been running at $1.6 billion. How does this compare to the revenues received by the Organizing Committee for the Games? Here are some illustrative estimates from the 2010 Winter Games in Vancouver and the 2012 Summer Olympics in London. The revenues don't cover even one-third of the costs.
Thus, the economic case for hosting an Olympics must rely on lots and lots of spinoff benefits: construction jobs, tourist spending during the Games, a legacy of lasting infrastructure, and improved recognition that could help the city after the Games. But except in a few cases, these benefits turn out to shrink in size the more you look at them.
Sure, tourists come for an Olympics. But other tourists who want to avoid the Olympics–and locals, too–leave town. The number of international visitors to London during the months of those 2012 Summer Olympics was actually lower than the previous year, and Beijing saw a fall in international tourists in 2008 compared to the same months the previous year. Those extra hotel rooms do raise profits of the tourism industry, which is a boost to their shareholders and out-of-town management. Systematic studies show the Games offer a very small boost to the local economy when they are happening, but not a lasting gain. That promise of lasting infrastructure often ends up looking like a white elephant.
Many of the venues from the Athens Games in 2004 have fallen into disrepair. Beijing’s iconic “Bird’s Nest” Stadium has rarely been used since 2008 and has been partially converted into apartments, while the swimming facility next door dubbed the “Water Cube” was repurposed as an indoor water park at a cost exceeding $50 million (Farrar 2010). The Stadium at Queen Elizabeth Olympic Park in London, the site for most of the track and field events as well as the opening and closing ceremonies in 2012, was designed to be converted into a soccer stadium for local club West Ham United in order to avoid the “white elephant” problem. Before the Games, the stadium had an original price tag of £280 million. Cost overruns led to a final construction cost of £429 million, and then the conversion cost to remove the track and prepare the facility to accommodate soccer matches topped £272 million, of which the local club is paying only £15 million …
The idea that an Olympics will raise the long-term visibility of a city works now and then: arguably it worked for the 2002 Winter Games in Salt Lake City and for the Summer Games in Barcelona in 1992. But more often, the city is either already just about as well-known as it is likely to be (London, Beijing, Rio) or it is small or remote enough that it doesn't benefit from greater long-term visibility (Lillehammer, Calgary).
The International Olympic Committee has professed to be concerned about the rising costs of the Games. I'm skeptical about the depth of this concern. But if the concern is real, then the IOC will look at future bids differently. For example, it will encourage cities with lots of existing facilities that could be used for the Games, and discourage the building of fancy, costly new structures or hosting spectacularly expensive opening and closing ceremonies. The IOC might choose a limited number of cities and rotate the Games between them, or have the Games happen in the same city twice in succession. It's a crazy idea, I know, but maybe the focus could shift back from promotionalism to the actual athletes and events.
How do you set up a committee to have the highest chance of reaching a wise conclusion? Warsh suggests that a combination of high-quality inputs, genuine deliberation, and optimal committee design all play a role. Otherwise, outcomes can fall short. Warsh writes:
The literature identifies numerous interrelated theories that link internal management inadequacies to organizational failure. These include:
• Janis's canonical Groupthink theory (1972, 1982), which highlights the tendency of small, homogenous management teams to make suboptimal decisions;
• Hambrick and Mason's Upper Echelon theory (1984), which links organizational achievements to the composition and background of an organization's senior management team;
• Staw, Sandelands, and Dutton's Threat Rigidity Effect theory (1981), which explains the tendency of management groups to stick rigidly to tried and tested techniques at times of threat and challenge, thereby increasing the risk of organizational failure among incumbents at times of secular change.
Does the Fed Open Market Committee operate in a way that seems likely to gain the benefits of committees and minimize the costs? Here's some basic background comparing the Fed to policy decision-making committees at other central banks.
Warsh writes: "The FOMC's institutional design is not inconsistent with sound practice. But there are certain institutional aspects of the FOMC which differ somewhat from best practice, at least as identified in the literature." Here are some examples of what he has in mind.
Successful committees don't have too many participants. "By statute, the FOMC includes twelve voting members. … Policy deliberations, however, occur in a much larger institutional setting. Nineteen people convene in the discussion (voters and non-voters alike) and a total of about sixty people are in attendance, including a range of subject-matter experts on key aspects of the economic and financial landscape."
The members of successful committees have independent information that they can bring to bear. "While the Reserve Bank presidents are supported by large, independent staffs of economists to help inform their forecasts and policy judgments, I would note that the economic models and forecasting tools are substantially similar across the Federal Reserve System. This explains, in part, the remarkable conformity of the so-called dot plots in the projections from FOMC participants."
A lack of disagreement suggests an insufficient breadth of views. "One simple mechanism for evaluating the breadth of views is to review trends in dissent: that is, the number of FOMC members who voted against the majority policy stance. By both FOMC tradition and practice, the bar for lodging a dissenting vote is high. Neither Chairman Greenspan nor Chairman Bernanke ever cast a vote in the minority. In contrast, the governor of the Bank of England has been outvoted on nine occasions since 1997. And governors of the Federal Reserve, unlike Reserve Bank presidents, only rarely dissented in casting of votes. In the past decade, for example, there has been only one instance of dissent by a sitting governor."
In successful committees, people are willing to express their unvarnished opinions. But since the Fed started publishing transcripts of its meetings in 1993, albeit with a lag, "Meade and Stasavage (2008) find evidence that the Fed's post-1993 transcript policy led to deterioration in the quality of FOMC deliberations. In the authors' formulation, policymakers are motivated to achieve two goals in the policymaking process: making optimal policy decisions and garnering a good reputation in public (often associated with conformity with the prevailing consensus). The existence of public transcripts, even with a lag, caused FOMC participants to voice less dissent in the meetings themselves and to be less willing to change policy positions over time. For example, the number of dissenting opinions expressed by voting members fell from forty-eight (between 1989 and 1992) to twenty-seven (between 1994 and 1997)."
In his comments on the Warsh paper, Peter Fisher, who spent a number of years at the New York Fed in the 1980s and 1990s, summarized what he viewed as Warsh's message in this way:
"I had almost ten years at the FOMC table … I thought I understood the awkwardness of group accountability when more than once I saw the FOMC gravitate toward no one's first choice and virtually no one's second choice, and we ended up with third-best outcomes. But now I'm also worried about individual accountability of a pseudo-nature, which I'm afraid is the regime we now have …"
We know several facts about US life expectancy with a high degree of confidence. Overall, life expectancy is rising: indeed, it is rising for every age group. However, there have long been gaps in life expectancy between various groups, like those with higher and lower income levels. What we don’t know with a high degree of confidence, and are still sorting out, is how those gaps in life expectancy across groups are evolving. Janet Currie and Hannes Schwandt tackle this question in “Mortality Inequality: The Good News from a County-Level Approach,” published in the Spring 2016 issue of the Journal of Economic Perspectives.
Currie and Schwandt provide this quick overview of recent studies:
However, this overall decline in mortality rates has been accompanied by prominent recent studies highlighting that the gains have not been distributed equally (for example, Cutler, Lange, Meara, Richards-Shubik, and Ruhm 2011; Chetty et al. 2015; National Academies of Science, Engineering, and Medicine (NAS) 2015; Case and Deaton 2015). Indeed, several studies argue that when measured across educational groups and/or geographic areas, mortality gaps are not only widening, but that for some US groups, overall life expectancy is even falling (Olshansky et al. 2012; Wang, Schumacher, Levitz, Mokdad, and Murray 2013; Murray et al. 2006). It seems to have become widely accepted that inequality in life expectancy is increasing.
But Currie and Schwandt marshal evidence that is arguably more representative than what was used in the previous studies to argue that, along many dimensions, the inequality of life expectancy across various groups is either declining or not changing by much.
One of the issues here is that many sources of data on mortality rates don’t include measures of income. As an example, I discussed the NAS study about growing inequality of life expectancy by income on this blog last September. However, that study (and several others) is based on the Health and Retirement Study, which looks only at people over age 50. Indeed, every six years it chooses a sample of 50-year-olds and then tracks that sample every two years. It’s wonderful data for studying certain choices of this population, and it does include mortality rates and past income. But the entire sample is 20,000 people over the age of 50, so it’s tricky to project back to life expectancies at earlier ages and to do detailed comparisons across groups. The authors write: “For example, Chetty et al. (2015) use mortality at age 40 to 63 to estimate income-specific trends in life expectancy, while NAS (2015) uses mortality at age 50 to 78, an approach that by construction does not consider developments at younger ages.”
Those who want more detail on the difficulties of comparing mortality rates by group can go through the Currie and Schwandt article. Here, I want to focus on the county-level approach that they take. Detailed data on mortality rates by age, gender, and race are available at the county level. We also have census data on poverty rates by county, median income by county, and other variables. The authors take three years: 1990, 2000, and 2010. They rank the counties in each year according to a certain factor–like poverty rates–and then look at how life expectancies varied across counties. By doing this exercise in 1990, 2000, and 2010, they can see how inequality of life expectancy has varied over time.
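To make the method concrete, here is a minimal sketch in Python of the ranking-and-binning exercise. Everything here is illustrative: the toy data, the tuple layout, and the function name are my own assumptions, not Currie and Schwandt's actual data or code.

```python
# Illustrative sketch of the county-level approach: rank counties by
# poverty rate, split them into 20 five-percentile bins, and summarize
# mortality inequality as the slope of a best-fit line across the bins.
# Each county is an assumed (poverty_rate, mortality_rate, population) tuple.

def mortality_gradient(counties, n_bins=20):
    """Return (bin_means, slope): the population-weighted mean mortality in
    each poverty-percentile bin, and the OLS slope of those means on the
    bin index. A steeper slope means more mortality inequality."""
    ranked = sorted(counties, key=lambda c: c[0])   # poorest counties last
    size = len(ranked) / n_bins
    bins = [ranked[int(i * size):int((i + 1) * size)] for i in range(n_bins)]

    means = []
    for b in bins:
        pop = sum(c[2] for c in b)
        means.append(sum(c[1] * c[2] for c in b) / pop)

    # best-fit (OLS) slope of mean mortality on bin index
    n = len(means)
    xbar = (n - 1) / 2
    ybar = sum(means) / n
    cov = sum((i - xbar) * (y - ybar) for i, y in enumerate(means))
    var = sum((i - xbar) ** 2 for i in range(n))
    return means, cov / var

# Toy data: 100 counties in which mortality rises with the poverty rate
toy = [(p / 100, 2.0 + 3.0 * p / 100, 1000) for p in range(100)]
means, slope = mortality_gradient(toy)
print(slope > 0)  # prints True: poorer bins have higher mortality here
```

Running this exercise separately on 1990 and 2010 data and comparing the two slopes is, in effect, comparing the blue and green best-fit lines in the figures described next.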
Here’s a sample of the results. Each pair of figures is divided up by age group and by gender. Focus on the upper-left figure for a moment. The blue triangles show the county-level mortality rate in 1990 for females age 0-4, in counties ranging from the lowest up to the highest percentile of poverty rates. To be specific, there are 20 blue triangles, one for each five percentiles of the poverty distribution (that is, percentiles 1-5, 6-10, and so on up to 96-100). The blue line is a best-fit line for the blue triangles, and its upward slope shows that counties with higher poverty also have higher mortality rates. The green circles show estimates across counties for the year 2010, and the green line is a best-fit line for 2010. The dashed line is a best-fit line for 2000, but to keep the figures from getting too messy, the separate points aren’t shown for 2000. Overall mortality rates fell from 1990 to 2010. Also, the slope of the green line is flatter than the slope of the blue line, which means that females in the 0-4 age group in high-poverty counties had closed the mortality gap to some extent with those in low-poverty counties.
When you look across these kinds of figures, you see variation across age groups in the size of the mortality decline: that is, in the gap between the green and blue lines. You also see differences in how the slope of the mortality lines has changed: for the youngest age groups (which, remember, were not well-represented in the data behind earlier studies), inequality of mortality has decreased, but for women in the 50+ age group, the level of mortality has dropped (green line lower than blue line) while the inequality of mortality is greater (green line steeper than blue line).
If you know anything about academic researchers, you can imagine how Currie and Schwandt work through this kind of data with care and attention. For example, you can rank counties in other ways, like median income or high school dropouts or even the starting level of life expectancy. You can break down the mortality rates by race/ethnicity as well as by age and gender. Here are a few of the conclusions that emerge.
1) “We find that inequality in mortality has fallen greatly among children. It is worth emphasizing that the reductions in mortality among African Americans, especially African-American males of all ages, are stunning and that is a major driver of the overall positive picture. This positive finding has been largely neglected in much of the discussion of overall mortality trends.” The authors point out that good health earlier in life is somewhat predictive of good health later in life, so improved mortality among children is good news in both the short run and the long run.
2) The changes in inequality of mortality aren’t being primarily driven by the rise of income inequality. There are lots of population segments where income inequality is up and mortality inequality is down. Moreover, as the authors point out, even the direction of causality from income inequality to health problems is questionable; instead, the direction of causality is often that poor health status leads to lower income levels. The drivers of mortality changes across groups probably have more to do with the rate of change in habits (smoking, obesity, perhaps opioids), as well as public policy changes over the last two decades affecting nutrition and access to health care.
3) “Although our overall message is more positive than some earlier studies, we do find an alarming stagnation in mortality among white women aged 20 to 49.”
4) Changes in smoking rates are probably part of the story here. Men had higher smoking rates than women, but then decreased those smoking rates more dramatically. This pattern probably explains why mortality gains among older women look relatively small (and even negative for some specific groups), and may also explain the greater inequality of mortality rates across counties by poverty level.
Here’s how Currie and Schwandt conclude:
“It sometimes seems as if the research literature on mortality is compelled in some way to emphasize a negative message, either about a group that is doing less well or about some aspect of inequality that is rising. … We believe that a balanced approach to the mortality evidence, which recognizes real progress as well as areas in need of improvement, is more likely to result in sensible policymaking. After all, emphasizing the negative could send the message that “nothing works,” especially in the face of seemingly relentless increases in income inequality. We have emphasized considerable heterogeneity in the evolution of mortality inequality by age, gender, and race. Going forward, identifying social policies that have helped the poor and reduced mortality inequality is an important direction for future research. Similarly, understanding the reasons that some groups and age ranges have seen stagnant mortality rates will be important for mobilizing efforts to reduce inequality in mortality and improve the health of the poor.”
Political polarization refers to the phenomenon that more people are self-identifying as being at one end or the other of the political spectrum, with fewer in between. Jacob Westfall, Leaf Van Boven, John R. Chambers, and Charles M. Judd argue that the rise in polarization is real, but also that people have an exaggerated perception of the extent of polarization, in “Perceiving Political Polarization in the United States: Party Identity Strength and Attitude Extremity Exacerbate the Perceived Partisan Divide.” It was published in Perspectives on Psychological Science (2015, 10:2, pp. 145-158). (The journal isn’t freely available online, but many readers may have access through library subscriptions.)
Here’s a figure that, with some explanation, tells their story. The bottom line shows a measure of actual polarization, which seems to have started rising in the 1990s. The top line shows the perception of polarization. The perception is consistently above the reality, and it’s also more variable–for example, showing jumps in the 1980s and 1990s.
The definition of “polarization” here is based on opinion survey data from the ongoing American National Election Study, which started back in 1968. Part of the survey asks people a set of questions about their beliefs on various issues. Here’s an example of the kind of question a respondent would read.
Some people think the government should provide fewer services, even in areas such as health and education, in order to reduce spending. Suppose these people are at one end of a scale, at point 1. Other people feel that it is important for the government to provide many more services even if it means an increase in spending. Suppose these people are at the other end, at point 7. And of course, some other people have opinions somewhere in between, at points 2, 3, 4, 5, or 6.
Other questions are about rights of the accused, defense spending, guaranteed jobs and income, urban unrest, and others. “For each issue, the option at one end of the scale represents a stereotypic liberal response, whereas the option at the other end of the scale represents a stereotypic conservative response.”
Based on this survey data, the basic measure of political polarization in the figure above is straightforward: average the scores for self-identified Republicans, average the scores for self-identified Democrats, and subtract to get the gap between the two. However, respondents to the survey were also asked about how they perceived the attitudes of the Democratic and Republican parties and their presidential candidates. From this data, you can calculate how self-identified Democrats perceive Republicans and how self-identified Republicans perceive Democrats, and compare the perception to the reality.
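As a rough sketch of that calculation (the record layout and field names here are my own illustration, not the actual ANES variables or the authors' code), the actual and perceived gaps might be computed like this:

```python
# Illustrative sketch: actual polarization is the gap between the mean
# self-placements of Republicans and Democrats on a 1-7 issue scale;
# perceived polarization is the average gap between where respondents
# place the two parties. The records below are made-up toy data.

def mean(xs):
    return sum(xs) / len(xs)

def polarization(records):
    dems = [r for r in records if r["party"] == "D"]
    reps = [r for r in records if r["party"] == "R"]
    # gap between average self-placements of the two parties
    actual = mean([r["own"] for r in reps]) - mean([r["own"] for r in dems])
    # average gap between where respondents place the two parties
    perceived = mean([r["rep_seen"] - r["dem_seen"] for r in records])
    return actual, perceived

# Toy survey: a modest actual gap, but respondents place the parties far apart
records = [
    {"party": "D", "own": 3.0, "dem_seen": 2.5, "rep_seen": 6.0},
    {"party": "D", "own": 3.5, "dem_seen": 2.0, "rep_seen": 6.5},
    {"party": "R", "own": 5.0, "dem_seen": 1.5, "rep_seen": 6.0},
    {"party": "R", "own": 4.5, "dem_seen": 2.0, "rep_seen": 5.5},
]
actual, perceived = polarization(records)
print(actual, perceived)  # prints 1.5 4.0 -- perception exceeds the actual gap
```

The same records also allow the cross-party comparisons: averaging `rep_seen` over Democrats only, for example, gives how Democrats perceive Republicans.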
(It’s worth a pause here to note that studies of this kind have some obvious weaknesses. They don’t ask identical questions every year: for example, there were questions on school busing in the 1970s and on cooperation with the USSR in the 1980s. In addition, it’s possible that the mixture of people who self-identify as Republicans or Democrats has changed in some subtle ways over time. As with all social science studies and survey data, proceed with caution.)
What factors help to explain why the perceptions of polarization are consistently higher than the actuality? Westfall, Van Boven, Chambers, and Judd suggest several interrelated reasons. One is that the very act of categorizing an “other” party leads to what is sometimes called “groupiness,” where people start to exaggerate characteristics of the other group. “Analysis of the ANES reveal that both Democrats and Republicans see the other group as more polarized than their own group … Independents … perceive the stances of both the parties and the presidential candidates as being closer together than do the partisan respondents on either side.” Indeed, an ironic but totally understandable phenomenon emerges here: those who are most polarized are also most likely to perceive a high degree of polarization. Those who identify themselves as a “strong Democrat” or a “strong Republican” are most likely to see the other party as showing strong polarization. Those who have the most polarized beliefs themselves on the opinion scale are also most likely to believe that others are extremely polarized.
These findings ring true to me. Polarization does seem to be up. But those who bemoan it most loudly, on both sides of the political spectrum, are often bemoaning only how the other side has become polarized–not their own party or themselves personally.
Earl Pomeroy and Jim McCrery are both former Congressmen who in the past headed the Social Security Subcommittee of the House Ways and Means Committee. Pomeroy is a Democrat from North Dakota and McCrery is a Republican from Louisiana. They have co-chaired something called the SSDI Solutions Initiative on behalf of the Committee for a Responsible Federal Budget, which is a nonpartisan Washington DC public policy organization that’s been around since the 1980s. The group has now produced a volume of essays called SSDI Solutions: Ideas to Strengthen the Social Security Disability Insurance Program, and chapters can be downloaded individually online.
If you want to get up to speed on the current status and issues facing Disability Insurance, along with a range of proposals for improving it, this volume should be your go-to starting point. Just to skim through the basics, which Patricia Owens lays out in detail in her essay, the share of working-age men receiving DI rose from 3.0% in the late 1970s to 4.5% in 2013; for women, the DI rate went from 1.4% in the late 1970s to 4.3% by 2013.
In the late 1970s, 3.0 percent of working-age men received SSDI, increasing to 3.8 percent before the recession and 4.5 percent by 2013, while the corresponding SSDI receipt rates for working-age women went from 1.4 percent, to 3.5 percent, and 4.3 percent, respectively. A number of economic, demographic, and policy factors went into this change. For example, back in the 1970s a lower percentage of working-age women had been in the (paid) workforce in a way that made them eligible for DI in the first place.
The revenue for DI is part of the payroll tax that also funds Social Security and portions of Medicare. There is a separate DI trust fund, which was about to run out of money this year until a last-minute short-term fix in the Bipartisan Budget Act of 2015 reallocated some of the rest of the payroll tax over to the DI trust fund, which should keep the fund solvent until 2022. But more substantial changes are needed. Here’s a figure from the Owens paper showing the long-run projections of revenues and costs for DI. Clearly, there’s a gap to be closed.
One perplexing bit of evidence is that although SSDI is a national program, and thus in theory applies to everyone in the country equally, there are some dramatic differences in how the program operates across the country. For example, the share of the population receiving DI varies across states. Here’s an illustrative figure from the Owens paper. DI rates are the lowest in Alaska and Hawaii, and tend to be highest in parts of the South and Appalachia.
Some of this difference seems to reflect not differences in the characteristics of those receiving disability, but rather differences in the process for granting disability benefits across states. As one example from the overview paper by Pomeroy and McCrery, some of the Administrative Law Judges who hear appeals about whether someone should receive DI are very likely to grant those appeals, while others are not: “There also appears to be considerable (albeit shrinking) decisional inconsistency among judges. In 2010, for example, one ALJ in Texas approved only 9 percent of applications for benefits while another in Tennessee approved 99 percent …”
I found myself especially interested in the international comparisons chapter by Robert Haveman: “Approaches to Assisting Working-Aged People with Disabilities: Insights from Around the World.” A number of other high-income countries have experienced rising levels of disability and rising program costs, and some of those countries enacted substantive reforms in the last two decades, which may have lessons for US policymakers. For a quick sense of some of the differences, check this table from Haveman’s chapter.
Of this comparison group, the share of the population receiving disability payments looks especially high in Sweden and the Netherlands (notice that more than one-fifth of all Swedes in the 50-64 age bracket are receiving disability benefits), and quite low in Germany, with the US in the middle. When it comes to annual growth in the recipiency of disability over the last few decades, the US leads the way.
What are some of the changes in other countries? Haveman offers an overview of the countries in the table and other high-income countries, and suggests this summary:
“Across the countries studied, a number of options to better manage the disability pension caseload—aside from reforms to the public sickness benefit programs—have been pursued; these include:
The introduction of more stringent vocational criteria into the eligibility determination process, e.g., in determining ability to work, moving from reference to jobs in the worker’s own occupation or jobs for which the worker has been trained to all jobs in the economy.
The centralization of disability assessment. Instead of relying on an applicant’s own doctors, responsibility for assessing capability has been assigned to government agencies. The goal is to make medical assessments more objective and consistent over applicants.
Increasing the emphasis on work capacity itself relative to medical conditions in the eligibility determination process. (For example, in the US system, a physical and mental Listing of Impairments has been established to identify conditions considered sufficiently severe to prevent an individual from performing any gainful activity.)
Changing the emphasis in the disability pension program toward a “rehabilitation before benefits” model involving the requirement that benefit applicants have undertaken rehabilitation efforts, as well as requiring employers to pursue workplace accommodation.
Substituting for the current uniform payroll tax obligations an arrangement in which employer contributions to social insurance depend upon the number of their workers that apply for disability benefits (“experience rating”).
Limiting the duration of disability pension payments to a fixed period (say, three years), with the need to reapply and reestablish eligibility after that period in order to continue benefit receipt.
Increasing work incentives for benefit recipients through wage or employment subsidies or disregarding earnings in calculating benefits for recipients who combine work and disability benefit receipt.”
Not coincidentally, the bulk of the Pomeroy/McCrery volume involves a range of proposals by various authors that could improve the ability of the US SSDI program to meet its goal of helping the disabled and also help to save money: early-intervention proposals to help people stay in the workforce rather than ending up on the DI rolls; improving program administration so that determinations of who is eligible to receive DI can be faster and more accurate; helping DI work better with other programs often related to disability, like workers’ compensation and long-term care programs; and structural reforms, like changing the rules so that disability is not all-or-nothing forever, making it simpler, where appropriate, for some of those who experience disability to return to the workforce, or allowing a combination of partial disability and part-time work. Here’s the full Table of Contents with links to PDFs of all the chapters. Physical copies of the book can be bought on Amazon.com.
Section I: Introduction
1. “Seizing the Opportunity: Ideas for Improving Disability Programs,” by Co-Chairs Jim McCrery and Earl Pomeroy Full Chapter
2. “An Overview of Social Security Disability Insurance (SSDI),” by Patricia Owens Full Chapter Appendix
Section II: Early Intervention
3. “The Employment/Eligibility Service System: A New Gateway for Employment Supports and Social Security Disability Benefits,” by David Stapleton, Yonatan Ben-Shalom, and David Mann Full Chapter Summary Slides Appendix
5. “Using Transitional Jobs to Increase Employment of SSDI Applicants and Beneficiaries,” by Julie Kerksick, David Riemer, and Conor Williams Full Chapter Summary Slides
Discussion of Early Intervention Proposals by Lisa D. Ekman Full Discussion
Section III: Program Administration
6. “Data-Driven Solutions for Improving the Continuing Disability Review Process,” by Alexandra Constantin, Julia Porcino, John Collins, and Chunxiao Zhou Full Chapter Summary Slides
7. “Social Security Disability Adjudicative Reform: Ending the Reconsideration Stage of SSDI Adjudication after Sixteen Years of Testing and Enhancing Initial Stage Record Development,” by Jon C. Dubin Full Chapter Summary Slides
Discussion of Program Administration Proposals by Margaret Malone Full Discussion
Section IV: Interaction with Other Programs
9. “Expanding Disability Insurance Coverage to Help the SSDI Program,” by David F. Babbel and Mark F. Meyer Full Chapter Summary Slides
10. “Ensuring Access to Long-Term Services and Supports for People with Disabilities and Chronic Conditions,” by Mark Perriello Full Chapter
11. “Improving the Interaction Between the SSDI and Workers’ Compensation Programs,” by John F. Burton Jr. and Xuguang (Steve) Guo Full Chapter Summary Slides
Discussion of Interaction with Other Programs Proposals by David Wittenburg Full Discussion
Section V: Structural Reforms
12. “Transitional Benefits for a Subset of the Social Security Disability Insurance Population,” by Kim Hildred, Pamela Mazerski, Harold J. Krent, and Jennifer Christian Full Chapter Summary Slides
13. “Beyond All or Nothing: Reforming Social Security Disability Insurance to Encourage Work and Wealth,” by Jason J. Fichtner and Jason S. Seligman Full Chapter Summary Slides
14. “Exploring an Alternative Social Security Definition of Disability,” by Neil Jacobson, Aya Aghabi, Barbara Butz, and Anita Aaron Full Chapter Summary Slides
For about 30 years now, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which back in 2011 made the decision–much to my delight–that the journal would be freely available online, from the current issue back to the first issue in 1987. Here, I’ll start with the Table of Contents for the just-released Spring 2016 issue. Below are abstracts and direct links for all of the papers. I will almost certainly blog about some of the individual papers in the next week or two, as well.
Symposium on Inequality Beyond Income
\”Consumption Inequality,\” by Orazio P. Attanasio and Luigi Pistaferri
In this essay, we discuss the importance of consumption inequality in the debate concerning the measurement of disparities in economic well-being. We summarize the advantages and disadvantages of using consumption as opposed to income for measuring trends in economic well-being. We critically evaluate the available evidence on these trends, and in particular discuss how the literature has evolved in its assessment of whether consumption inequality has grown as much as or less than income inequality. We provide some novel evidence on three relatively unexplored themes: inequality in different spending components, inequality in leisure time, and intergenerational consumption mobility. Full-Text Access | Supplementary Materials
\”Mortality Inequality: The Good News from a County-Level Approach,\” by Janet Currie and Hannes Schwandt In this essay, we ask whether the distributions of life expectancy and mortality have become generally more unequal, as many seem to believe, and we report some good news. Focusing on groups of counties ranked by their poverty rates, we show that gains in life expectancy at birth have actually been relatively equally distributed between rich and poor areas. Analysts who have concluded that inequality in life expectancy is increasing have generally focused on life expectancy at age 40 to 50. This observation suggests that it is important to examine trends in mortality for younger and older ages separately. Turning to an analysis of age-specific mortality rates, we show that among adults age 50 and over, mortality has declined more quickly in richer areas than in poorer ones, resulting in increased inequality in mortality. This finding is consistent with previous research on the subject. However, among children, mortality has been falling more quickly in poorer areas with the result that inequality in mortality has fallen substantially over time. We also show that there have been stunning declines in mortality rates for African Americans between 1990 and 2010, especially for black men. Finally we offer some hypotheses about causes for the results we see, including a discussion of differential smoking patterns by age and socioeconomic status. Full-Text Access | Supplementary Materials
\”Health Insurance and Income Inequality,\” by Robert Kaestner and Darren Lubotsky Health insurance and other in-kind forms of compensation and government benefits are typically not included in measures of income and analyses of inequality. This omission is important. Given the large and growing cost of health care in the United States and the presence of large government health insurance programs such as Medicaid and Medicare, it is crucial to understand how health insurance and related public policies contribute to measured economic well-being and inequality. Our paper assesses the effect on inequality of the primary government programs that affect health insurance. Full-Text Access | Supplementary Materials
“Family Inequality: Diverging Patterns in Marriage, Cohabitation, and Childbearing,” by Shelly Lundberg, Robert A. Pollak and Jenna Stearns Popular discussions of changes in American families over the past 60 years have revolved around the “retreat from marriage.” Concern has focused on increasing levels of nonmarital childbearing, as well as falling marriage rates that stem from both increases in the age at first marriage and greater marital instability. Often lost in these discussions is the fact that the decline of marriage has coincided with a rise in cohabitation. Many “single” Americans now live with a domestic partner and a substantial fraction of “single” mothers are cohabiting, often with the child’s father. The share of women who have ever cohabited has nearly doubled over the past 25 years, and the majority of nonmarital births now occur to cohabiting rather than to unpartnered mothers at all levels of education. The emergence of cohabitation as an alternative to marriage has been a key feature of the post–World War II transformation of the American family. These changes in the patterns and trajectories of family structure have a strong socioeconomic gradient. The important divide is between college graduates and others: individuals who have attended college but do not have a four-year degree have family patterns and trajectories that are very similar to those of high school graduates. Full-Text Access | Supplementary Materials
\”Crime, the Criminal Justice System, and Socioeconomic Inequality,\” by Magnus Lofstrom and Steven Raphael Crime rates in the United States have declined to historical lows since the early 1990s. Prison and jail incarceration rates as well as community correctional populations have increased greatly since the mid-1970s. Both of these developments have disproportionately impacted poor and minority communities. In this paper, we document these trends. We then assess whether the crime declines can be attributed to the massive expansion of the US criminal justice system. We argue that the crime rate is certainly lower as a result of this expansion and in the early 1990s was likely a third lower than what it would have been absent changes in sentencing practices in the 1980s. However, there is little evidence that further stiffening of sentences during the 1990s—a period when prison and other correctional populations expanded rapidly—have had an impact. Hence, the growth in criminal justice populations since 1990s has exacerbated socioeconomic inequality in the United States without generating much benefit in terms of lower crime rates. Full-Text Access | Supplementary Materials
“Net Neutrality: A Fast Lane to Understanding the Trade-Offs,” by Shane Greenstein, Martin Peitz and Tommaso Valletti The last decade has seen a strident public debate about the principle of “net neutrality.” The economic literature has focused on two definitions of net neutrality. The most basic definition of net neutrality is to prohibit payments from content providers to internet service providers; this situation we refer to as a one-sided pricing model, in contrast with a two-sided pricing model in which such payments are permitted. Net neutrality may also be defined as prohibiting prioritization of traffic, with or without compensation. The research program then is to explore how a net neutrality rule would alter the distribution of rents and the efficiency of outcomes. After describing the features of the modern internet and introducing the key players, (internet service providers, content providers, and customers), we summarize insights from some models of the treatment of internet traffic, framing issues in terms of the positive economic factors at work. Our survey provides little support for the bold and simplistic claims of the most vociferous supporters and detractors of net neutrality. The economic consequences of such policies depend crucially on the precise policy choice and how it is implemented. The consequences further depend on how long-run economic trade-offs play out; for some of them, there is relevant experience in other industries to draw upon, but for others there is no experience and no consensus forecast. Full-Text Access | Supplementary Materials
“The Billion Prices Project: Using Online Prices for Measurement and Research,” by Alberto Cavallo and Roberto Rigobon A large and growing share of retail prices all over the world are posted online on the websites of retailers. This is a massive and (until recently) untapped source of retail price information. Our objective with the Billion Prices Project, created at MIT in 2008, is to experiment with these new sources of information to improve the computation of traditional economic indicators, starting with the Consumer Price Index. We also seek to understand whether online prices have distinct dynamics, their advantages and disadvantages, and whether they can serve as reliable source of information for economic research. The word “billion” in Billion Prices Project was simply meant to express our desire to collect a massive amount of prices, though we in fact reached that number of observations in less than two years. By 2010, we were collecting 5 million prices every day from over 300 retailers in 50 countries. We describe the methodology used to compute online price indexes and show how they co-move with consumer price indexes in most countries. We also use our price data to study price stickiness, and to investigate the “law of one price” in international economics. Finally we describe how the Billion Prices Project data are publicly shared and discuss why data collection is an important endeavor that macro- and international economists should pursue more often. Full-Text Access | Supplementary Materials
"The Masking of the Decline in Manufacturing Employment by the Housing Bubble," by Kerwin Kofi Charles, Erik Hurst and Matthew J. Notowidigdo The employment-to-population ratio among prime-aged adults aged 25–54 has fallen substantially since 2000. The explanations proposed for the decline in the employment-to-population ratio have been of two broad types. One set of explanations emphasizes cyclical factors associated with the recession; the second set of explanations focuses on the role of longer-run structural factors. In this paper, we argue that while the decline in manufacturing and the consequent reduction in demand for less-educated workers put downward pressure on their employment rates in the pre-recession 2000–2006 period, the increased demand for less-educated workers because of the housing boom was simultaneously pushing their employment rates upwards. For a few years, the housing boom served to "mask" the labor market effects of manufacturing decline for less-educated workers. When the housing market collapsed in 2007, there was a large, immediate decline in employment among these workers, who faced not only the sudden disappearance of jobs related to the housing boom, but also the fact that manufacturing's steady decline during the early 2000s left them with many fewer opportunities in that sector than had existed at the start of the decade. Full-Text Access | Supplementary Materials
"Going for the Gold: The Economics of the Olympics," by Robert A. Baade and Victor A. Matheson In this paper, we explore the costs and benefits of hosting the Olympic Games. On the cost side, there are three major categories: general infrastructure such as transportation and housing to accommodate athletes and fans; specific sports infrastructure required for competition venues; and operational costs, including general administration as well as the opening and closing ceremony and security. Three major categories of benefits also exist: the short-run benefits of tourist spending during the Games; the long-run benefits or the "Olympic legacy," which might include improvements in infrastructure and increased trade, foreign investment, or tourism after the Games; and intangible benefits such as the "feel-good effect" or civic pride. Each of these costs and benefits will be addressed in turn, but the overwhelming conclusion is that in most cases the Olympics are a money-losing proposition for host cities; they result in positive net benefits only under very specific and unusual circumstances. Furthermore, the cost–benefit proposition is worse for cities in developing countries than for those in the industrialized world. In closing, we discuss why what looks like an increasingly poor investment decision on the part of cities still receives significant bidding interest and whether changes in the bidding process of the International Olympic Committee (IOC) will improve outcomes for potential hosts. Full-Text Access | Supplementary Materials
"Retrospectives: How Economists Came to Accept Expected Utility Theory: The Case of Samuelson and Savage," by Ivan Moscati Expected utility theory dominated the economic analysis of individual decision-making under risk from the early 1950s to the 1990s. Among the early supporters of the expected utility hypothesis in the von Neumann–Morgenstern version were Milton Friedman and Leonard Jimmie Savage, both based at the University of Chicago, and Jacob Marschak, a leading member of the Cowles Commission for Research in Economics. Paul Samuelson of MIT was initially a severe critic of expected utility theory. Between mid-April and early May 1950, Samuelson composed three papers in which he attacked von Neumann and Morgenstern's axiomatic system. By 1952, however, Samuelson had somewhat unexpectedly become a resolute supporter of the expected utility hypothesis. Why did Samuelson change his mind? Based on the correspondence between Samuelson, Savage, Marschak, and Friedman, this article reconstructs the joint intellectual journey that led Samuelson to accept expected utility theory and Savage to revise his motivations for supporting it. Full-Text Access | Supplementary Materials