Pregnancy-Related Mortality in the US

When it comes to pregnancy-related maternal mortality, the United States not only lags behind other high-income countries, but has been getting worse. The National Academies of Sciences, Engineering, and Medicine tells the story in its report Birth Settings in America: Outcomes, Quality, Access, and Choice. Here's the trendline of US maternal mortality over time:

As part of putting this in (grim) perspective, the report notes (citations omitted):

In contrast, the rate of maternal mortality has consistently dropped in most high-resource countries over the past 25 years. Severe maternal morbidity has been increasing in the United States as well. It is estimated that for every woman who dies in childbirth, 70 more come close to dying. All told, more than 50,000 U.S. women each year suffer severe maternal morbidity or “near miss” mortality, and roughly 700 die, leaving partners and families to raise children while coping with a devastating loss. Like the rates of maternal mortality, U.S. rates of severe maternal morbidity are high relative to those in other high-resource countries. In this context, it is notable that some local efforts in the United States have shown progress in reducing rates of maternal mortality and morbidity. In California, for example, the California Maternal Quality Care Collaborative led an initiative that reduced rates of maternal mortality by 55 percent (from 2006 to 2013) …

The fundamental problem here doesn't seem to be too little overall spending, but rather misallocated spending. US overall healthcare spending is high, and the costs of childbirth are a big part of that. The NAS report notes (citations omitted):

Childbirth is the most common reason U.S. women are hospitalized, and one of every four persons discharged from U.S. hospitals is either a childbearing woman or a newborn. As a result, childbirth is the single largest category of hospital-based expenditures for public payers in the country, and among the highest investments by large employers in the well-being of their employees. Cumulatively, this spending accounts for 0.6 percent of the nation’s entire gross domestic product, roughly one-half of which is paid for by state Medicaid programs.

The discussion in the report suggests that too little is spent on prenatal care, that (costly) hospitals may be overused as a venue for births compared with other options, and that some costly procedures, like C-sections, are overused.
A natural concern is how this might relate to infant mortality. However, US infant mortality has been falling over time. The two main concerns in this area seem to be a rise in the share of children born at low birthweights and large disparities across groups. Here's a figure on low birthweights from a National Center for Health Statistics Data Brief in 2018.
Figure 1 is a stacked bar chart showing singleton low, moderately low, and very low birthweight rates from 2006 through 2016.
Here's the NAS report on infant mortality differences across groups (citations and references to figures omitted):

In contrast to maternal mortality, infant mortality in the United States has been declining over the past 20 years, and there are expanded opportunities for survival at increasing levels of prematurity and illness complexity. However, large disparities persist among racial/ethnic groups and between rural and urban populations. In 2017, infant mortality rates per 1,000 live births by race and ethnicity were as follows: non-Hispanic Black, 10.97 per 1,000; American Indian/Alaska Native, 9.21 per 1,000; Native Hawaiian or Other Pacific Islander, 7.64 per 1,000; Hispanic, 5.1 per 1,000; non-Hispanic White, 4.67 per 1,000; and Asian, 3.78 per 1,000. … Rates of preterm birth and low birthweight have increased since 2014, and as with other outcomes, show large disparities by race and ethnicity. Low-birthweight (less than 5.5 pounds at birth) and preterm babies are more at risk for many short and long-term health problems, such as infections, delayed motor and social development, and learning disabilities. About one-third of infant deaths in the United States are related to preterm birth … 

The NAS report does not offer a lot of clear recommendations for what should be done. The discussion of steps that could be taken for quality improvement (QI) is full of statements like: "While many QI initiatives have shown promising results, many current QI initiatives are underfunded." Do tell. But the report does offer some possible models to follow, like the California Maternal Quality Care Collaborative mentioned earlier.

Some Thoughts on Police Reform

Disputes over policing are of course not new. Andrea M. Headley and James E. Wright, II, look at "National Police Reform Commissions: Evidence-Based Practices or Unfulfilled Promises?" (Review of Black Political Economy, 2019, 46:4, pp. 277–305). As they point out, the 1931 National Commission on Law Observance and Enforcement (the "Wickersham Commission") focused on problems like excessive police use of force, the need for police to focus more on crime prevention, and the improvement of personnel standards for police hiring.

Headley and Wright look at later police commissions, including the Kerner Commission report in the aftermath of the 1968 riots, the 2015 President’s Task Force on 21st Century Policing, and the detailed 2018 report from the U.S. Commission on Civil Rights entitled "Police Use of Force: An Examination of Modern Policing Practices." As they note, later commissions often revisit the topics from the 1931 commission, while adding some additional concerns like improving police-community relations, accountability, transparency, and diversity.

In short, the current controversies surrounding police departments are not new, which of course is part of what makes them frustrating. Headley and Wright call it a "wicked problem," and write: "Wicked problems are characterized by their complex nature, changing circumstances, lasting impact as well as the incomplete information regarding the issue—all of which pose difficulties for solving such problems."
Of course, commissions are often willing to write up a wish list of what the members think should be done. But does the existing research offer useful guidance about what might work? Lack of data has been a severe problem. For example, as the 2018 report of the US Civil Rights Commission notes, starting on the first page of the "Executive Summary":

While allegations that some police force is excessive, unjustified, and discriminatory continue and proliferate, current data regarding police use of force is insufficient to determine if instances are occurring more frequently. The public continues to hear competing narratives by law enforcement and community members, and the hard reality is that available national and local data is flawed and inadequate. A central contributing factor is the absence of mandatory federal reporting and standardized reporting guidelines. … [M]oreover, the data that are available is most frequently compiled by grassroots organizations, nonprofits, or media sources. Data are not only lacking regarding fatal police shootings, but data regarding all use of force are scant and incomplete.

The report then quotes Roland Fryer: "Data on lower level uses of force, which happen more frequently than officer-involved shootings, are virtually non-existent. This is due, in part, to the fact that most police precincts don’t explicitly collect data on use of force, and in part, to the fact that even when the data is hidden in plain view within police narrative accounts of interactions with civilians, it is exceedingly difficult to extract."

In a similar vein, Headley and Wright note:

Despite the breadth of studies surrounding police use of force, there are still questions that merit more attention. First, scholarship would be enhanced if we could identify why disparities in use of force exist and for which type of officers? For instance, are disparities present due to biases and discrimination on part of the officer? Are they present due to institutionalized racism or biases that are implicitly embedded in department practices, policies, or protocols? Or, are they present due to differences in the quantity and quality of civilian interactions with, and treatment toward, officers? Second, the literature on police use of force tells us very little about when force is not used. Thus, to be able to understand police use of force decision making more broadly, we need to also assess the interactions where force could have been used but was not. Getting at these nuances is key to enhancing the knowledge base on police use of force. To aid in addressing these questions, a national and comprehensive database of police–civilian interactions is warranted.

Headley and Wright also cite an array of evidence about "community-oriented policing" (COP), a broad term that usually includes "the provision of victim services, counseling, community organizing, and education; and the establishment of foot patrols, neighborhood teams/offices, and precinct stations." They note: "The research has generally shown COP positively affects community perceptions and attitudes and thus builds relations, whereas such strategies have very limited, if any, effects on reducing crime." Given that improving police-community relations is a goal in itself, COP policies may be worth pursuing, but perhaps with limited expectations about how much they will reduce crime.

They describe a knot of potential conflicts that can arise between goals of increasing diversity and the hiring standards that have in some cases limited hiring a more diverse workforce. Headley and Wright comment: 

Police departments across the country are realizing the need to expand their hiring pool while also acknowledging some of the harms that have been done to keep people of color and women out of policing (whether intentionally or not), which provides a fruitful area for future research to assess. For instance, Madison Police Department in Wisconsin has restructured its physical agility test and has increased the number of women police officers hired, whereas St. Paul Police Department (Minnesota) changed its written test requirements (which had disproportionately adverse impacts on applicants of color) to focus more on personal history and community engagement rather than situational testing. Going a step further, Colorado’s Peace Officer Standards and Training Board allows officers who have been arrested for criminal convictions to still be considered for law enforcement positions under certain criteria, whereas other police departments, such as the Burlington Police Department in Vermont, only require legal permanent residency or work authorizations instead of U.S. citizenship. These advances occurring in the police profession open a new door for researchers to conduct pre- and post-evaluations of recruitment and hiring initiatives particularly as it relates to long-term organizational culture and employee performance.

One of the go-to suggestions for any police reform is "better training." They pour a little chilled (if not quite cold) water on this suggestion:

Training has been one of the most commonly used ways to respond to crises in the policing profession in hopes to affect police behavior. Unfortunately, with the lack of consistency in training across police departments, scholarship has not rigorously or systematically been able to examine the impacts of various types of trainings. This is a huge gap in the existing scholarship that needs to be filled to move the practice of policing forward and improve outcomes.

Headley and Wright are not trying to provide a full overview of the literature. As they point out, many of the problems of policing fall under the heading of "culture"–whatever explicit rules exist, they are filtered through the culture of police departments. They do not discuss the issue of police unions, which may in some cases be a substantial barrier to accountability, transparency, and shifts in culture. Katherine J. Bies makes this case in a 2017 essay in the Stanford Law and Policy Review, "Let the Sunshine In: Illuminating the Powerful Role Police Unions Play in Shielding Officer Misconduct." She writes:

[D]uring the rise of police unions to political power in the 1970s, police unions lobbied for legislation that shrouded personnel files in secrecy and blocked public access to employee records of excessive force or other officer misconduct. Today, these officer misconduct confidentiality statutes continue to prohibit public disclosure of disciplinary records related to police shootings and other instances of excessive force. Moreover, as the failure of recent sunshine legislation demonstrates, police unions continue to challenge and deter today’s progressive reform efforts that would replace secrecy with accountability and transparency. This Note also argues that police unions are unparalleled in their ability to successfully advocate for policy proposals that conflict with traditional democratic values of accountability and transparency.

The problem of police reform of course involves a desire to minimize grievous abuses or excessive violence, but it's much more than that. The terrible cases in the headlines are the 10% of the iceberg that shows above the waterline. The police need to be able to operate within their communities with some basic level of community support, but in many cities, police in their day-to-day interactions have already lost the trust of a substantial share of the public, and are in danger of losing the trust of many more.

Exploding US Unemployment Rates: A Peek Inside

US unemployment rates have reached higher levels, and risen more dramatically, than at any time since regular employment statistics began in the late 1940s. Here's the basic picture. The unemployment rate was 14.7% in April and then dropped unexpectedly (to me, at least!) to 13.3% in May. Even so, looking back over the last 75 years, the monthly unemployment rate has never risen this fast or reached a level this high.
The explosive rise in the unemployment rate has been accompanied by a sharper decline in jobs than the US economy has experienced at any point in the last 75 years. The figure shows total US employees. As you see, the number rises gradually over the decades, keeping pace with the US population. The total number of jobs drops during or just after recessions, shown by the shaded gray bars. But whether it's the Great Recession of 2007-9 or the severe double-dip recession of the early 1980s, the US economy has not seen a drop in total jobs this fast and severe. The total number of jobs was 151 million in March and 130 million in April–a drop of about 14% in a single month–before a gain of about 2.5 million jobs in May.
The key question about unemployment is whether there could be a quick bounceback. Are many of these employers poised to resume hiring? Are many of these workers poised to go back to work? One interesting tidbit of evidence here is the share of the unemployed who lost their jobs because of layoffs–which carries some implication that they could be readily rehired. Here's another striking figure: the share of "job losers on layoff" was about 8-15% of the total unemployed from the mid-1980s up until the start of 2020.
One of the shifting labor market patterns in the last 30 years or so has been the disappearance of the "layoff." If you look back at recessions in the 1970s and 1980s, you see that the share of "job losers on layoff" rises during recessions, and then falls. It was a much more common pattern for factories and other employers to lay off workers and then to rehire those same workers. But when you look at the recessions of 1990-91, 2001, and 2007-9, you don't see much of a rise in layoffs. Instead, the chance that an unemployed worker was laid off with a plausible prospect of being rehired, rather than just let go, got lower and lower. For example, look how low the percentage falls in the years after the Great Recession.
But the share of "job losers" on layoff just spiked to 78% in April and 73% in May, which implies that large numbers of the unemployed could conceivably be rehired quickly. But of course, a "layoff" could become an empty promise, where most of these workers are not rehired, and instead need to find new jobs in a socially distanced economy.
I've also been struck by the difference between US and European unemployment data. When US unemployment was spiking to 14.7% in April, unemployment in the 27 countries of the European Union barely nudged up to 6.6% in April; for the subset of 19 countries in the euro zone, unemployment was 7.3% in April. Why did US unemployment spike to double European levels? The likely answer involves interactions between public policy and what is counted as "unemployment."
One key policy choice is whether assistance to workers has been sent to them directly–say, via unemployment insurance–or funneled through employers, so that workers who were not necessarily going to work still kept receiving a (government-funded) paycheck from their employer. Jonathan Rothwell describes the difference in "The effects of COVID-19 on international labor markets: An update" (May 27, 2020, Brookings Institution).
Here's a figure from Rothwell showing the change in workers getting unemployment benefits. Notice that it's way up in Canada, Israel, Ireland, and the US. But in France, Germany, Japan, and the Netherlands, there's essentially no rise in unemployment benefits.
The reason is that in many countries, a number of workers are getting government assistance via their employers. In the unemployment stats for those countries, they are still counted as employed. Here's the figure from Rothwell:
Another policy choice in the US has been to increase unemployment assistance substantially, so that it is closer to the actual pay that workers receive. Manuel Alcalá Kovalski and Louise Sheiner provide a quick background primer on "How does unemployment insurance work? And how is it changing during the coronavirus pandemic?" (Brookings Institution, April 7, 2020). As they write:

Most state UI [Unemployment Insurance] systems replace about half of prior weekly earnings, up to some maximum. Before the expansion of UI during the coronavirus crisis, average weekly UI payments were $387 nationwide, ranging from an average of $215 per week in Mississippi to $550 per week in Massachusetts. … The CARES Act—a $2 trillion relief package aimed at alleviating the economic fallout from the COVID-19 pandemic—extends the duration of UI benefits by 13 weeks and increases payments by $600 per week through July 31st. This implies that maximum UI benefits will exceed 90 percent of average weekly wages in all states.

In other words, rather than trying to keep laid-off or furloughed workers receiving much the same income via their employer, the US approach has been to do so via the unemployment insurance system. This has caused problems. For lower-wage US workers, the higher unemployment insurance payments cover a substantial part of their typical working income–in some cases, more than 100% of their previous pay. They have a financial incentive not to return to work, even if their employer would like to re-open, until these benefits run out. Of course, other unemployed workers receiving these higher benefits may not have an option to return. In the meantime, other low-wage workers who have kept working in grocery stores, warehouses, delivery services, and from home, are not receiving such payments at all. 
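To make the replacement-rate arithmetic concrete, here is a minimal sketch of the calculation described in the Brookings excerpt above. The wage figures, and the use of the $550 Massachusetts cap as the state maximum, are illustrative assumptions rather than a model of any particular state's actual rules.

```python
# Illustrative sketch: UI replacement rates with the CARES Act top-up.
# Assumes a simple "half of prior weekly earnings, up to a state cap" rule
# plus the flat $600/week federal supplement; real state formulas differ.

FEDERAL_SUPPLEMENT = 600  # CARES Act add-on, dollars per week

def weekly_ui_benefit(prior_weekly_wage, state_cap=550):
    base = min(0.5 * prior_weekly_wage, state_cap)
    return base + FEDERAL_SUPPLEMENT

for wage in (400, 800, 1200):  # hypothetical prior weekly wages
    benefit = weekly_ui_benefit(wage)
    print(f"wage ${wage}/wk -> UI ${benefit:.0f}/wk "
          f"({benefit / wage:.0%} of prior pay)")
```

Under these assumptions, a $400-a-week worker would receive $800 a week in benefits, twice their prior pay, which illustrates why the incentive issue just described arises mainly for lower-wage workers.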
Given that the US policy choice was to funnel assistance to workers through the unemployment system, it's not a big shock that the unemployment rate rose so high, so fast. A near-term policy question is whether to extend the higher unemployment payments, perhaps by another six months. The Congressional Budget Office (June 4, 2020) has just released some estimates of the effects of that choice. CBO writes:

Roughly five of every six recipients would receive benefits that exceeded the weekly amounts they could expect to earn from work during those six months. The amount, on average, that recipients spent on food, housing, and other goods and services would be closer to what they spent when employed than it would be if the increase in unemployment benefits was not extended. … In CBO’s assessment, the extension of the additional $600 per week would probably reduce employment in the second half of 2020, and it would reduce employment in calendar year 2021. The effects from reduced incentives to work would be larger than the boost to employment from increased overall demand for goods and services.

My own sense is that a blanket extension of the additional unemployment benefits is probably the politically easy choice. But the pragmatic choice would be to start thinking more carefully about how to structure these payments in a way that strikes a better balance between helping those who need it and preserving incentives to return to work.

There is a sense in which the very high US unemployment rates both understate and overstate the condition of US labor markets. Unemployment rates, by definition, leave out those who are "out of the labor force," perhaps because added family responsibilities have made it too difficult to work, or because the bleak employment picture has made it too difficult to seek a job. On the other side, some of the unemployed are hovering in place, ready and able to return to their previous employer, but receiving enhanced unemployment insurance payments in the meantime.
Estimating these kinds of factors of course involves a bunch of judgement calls. But for an example of such analysis, Jason Furman and Wilson Powell III have written "The US unemployment rate is higher than it looks—and is still high if all furloughed workers returned" (Peterson Institute for International Economics, June 5, 2020). Furman and Powell look at the rise in the number of people “not at work for other reasons” and the rise in the number of people who are out of the labor force. They write: "Adjusting for these factors our “realistic unemployment rate” was 17.1 percent in May, down from the April value but still higher than any other unemployment rate in over 70 years."
They also look at what the unemployment rate would be if those who say they are on layoff all returned to their jobs: "In total, an additional 14.5 million of the unemployed reported being on temporary layoff. If all of these people were immediately recalled back to work and the labor force adjusted accordingly—a very optimistic scenario—the “full recall unemployment rate” would still be a very elevated 7.1 percent."
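As a rough illustration of the kind of adjustment Furman and Powell are making, here is a hedged sketch of the mechanics. The formula is a simplified version of the general idea (add misclassified workers and recent labor-force exits back into the count), and every number below is a made-up placeholder, not their actual figures.

```python
# Sketch of an "adjusted" unemployment rate: treat workers misclassified as
# "employed, not at work for other reasons" and recent labor-force exits as
# unemployed. All numbers are hypothetical placeholders, in millions.

officially_unemployed = 21.0   # hypothetical
labor_force = 158.0            # hypothetical
misclassified = 5.0            # "not at work for other reasons" above normal
recent_exits = 6.0             # left the labor force since the downturn began

official_rate = officially_unemployed / labor_force
adjusted_rate = (officially_unemployed + misclassified + recent_exits) / (
    labor_force + recent_exits
)

print(f"official: {official_rate:.1%}, adjusted: {adjusted_rate:.1%}")
```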
Either way, the US economy is clearly in the midst of a recession. The question is whether it turns out to be a deep-at-the-start-but-short recession, or a deep-at-the-start-and-prolonged recession. The eventual outcome is only partly about economic policy: the coronavirus and public health policy will also play a big role.

Tales of Frank Ramsey: Economics, Wittgenstein, and More

Economists know Frank Ramsey (1903-1930) mostly through two classic papers written for the Economic Journal in 1927 and 1928, and also as a story of a genius who died at age 26. Cheryl Misak has written the first full biography of Ramsey, Frank Ramsey: A Sheer Excess of Powers, which I have not yet read. But I ran across the review/overview of the book by Anthony Gottlieb in the New Yorker (May 4, 2020), titled and subtitled "The Man Who Thought Too Fast: Frank Ramsey—a philosopher, economist, and mathematician—was one of the greatest minds of the last century. Have we caught up with him yet?"
Here, I'll rely on Gottlieb's account to give the barest taste of what was so extraordinary about Ramsey, and to remind economists of the contributions of his two great papers in our field. As Gottlieb notes:

Dons at Cambridge had known for a while that there was a sort of marvel in their midst: Ramsey made his mark soon after his arrival as an undergraduate at Newton’s old college, Trinity, in 1920. He was picked at the age of eighteen to produce the English translation of Ludwig Wittgenstein’s “Tractatus Logico-Philosophicus,” the most talked-about philosophy book of the time; two years later, he published a critique of it in the leading philosophy journal in English, Mind. G. E. Moore, the journal’s editor, who had been lecturing at Cambridge for a decade before Ramsey turned up, confessed that he was “distinctly nervous” when this first-year student was in the audience, because he was “very much cleverer than I was.” …

His contribution to pure mathematics was tucked away inside a paper on something else. It consisted of two theorems that he used to investigate the procedures for determining the validity of logical formulas. More than forty years after they were published, these two tools became the basis of a branch of mathematics known as Ramsey theory, which analyzes order and disorder. (As an Oxford mathematician, Martin Gould, has explained, Ramsey theory tells us, for instance, that among any six users of Facebook there will always be either a trio of mutual friends or a trio in which none are friends.) …

In 1926, Ramsey composed a long paper about truth and probability which looked at the effects of what he called “partial beliefs”—that is, of people’s judgments of probability. This may have been his most influential work. It ingeniously used the bets one would make in hypothetical situations to measure how firmly one believes a proposition and how much one wants something, and thus laid the foundations of what are now known as decision theory and the subjective theory of probability. …

Economists now study Ramsey pricing; mathematicians ponder Ramsey numbers. Philosophers talk about Ramsey sentences, Ramseyfication, and the Ramsey test. 
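As a side note on the Ramsey theory example in the passage above: the "six Facebook users" fact is the statement that the Ramsey number R(3,3) equals 6, and it is small enough to verify by brute force. Here is a minimal check in Python, written for this post rather than taken from any of the sources discussed, which tests every possible friendship pattern among six people:

```python
from itertools import combinations, product

vertices = range(6)
edges = list(combinations(vertices, 2))      # the 15 edges of K6
triangles = list(combinations(vertices, 3))  # the 20 possible trios

def has_monochromatic_triangle(coloring):
    # coloring maps each edge to 0 ("friends") or 1 ("strangers")
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in triangles
    )

# Check all 2^15 = 32,768 ways of labeling the pairs among six people.
print(all(
    has_monochromatic_triangle(dict(zip(edges, colors)))
    for colors in product((0, 1), repeat=len(edges))
))  # True: every pattern contains a trio of mutual friends or mutual strangers
```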

For economists, two particular papers stand out: "A Contribution to the Theory of Taxation" (Economic Journal, 1927, 37:145, pp. 47–61) and “A Mathematical Theory of Saving” (Economic Journal, 1928, 38:152, pp. 543–559). John Maynard Keynes was editor of the EJ, and as Gottlieb notes: "John Maynard Keynes was one of several Cambridge economists who deferred to the undergraduate Ramsey’s judgment and intellectual prowess."
Perhaps fortunately, dear reader, you need not rely on my personal efforts to summarize the influence and insights of these articles. Back in 2015, the Economic Journal on its 125th anniversary published a series of essays looking back at the most prominent papers that had appeared in the history of the journal, and two of the 13 papers deemed worthy of remembrance were by Ramsey.
Joseph E. Stiglitz contributed "In Praise of Frank Ramsey's Contribution to the Theory of Taxation." He writes:

Frank Ramsey's brilliant 1927 paper, modestly entitled, ‘A contribution to the theory of taxation’, is a landmark in the economics of public finance. Nearly a half century later, through the work of Diamond and Mirrlees (1971) and Mirrlees (1971), his paper can be thought of as launching the field of optimal taxation and revolutionising public finance. … Here, he addresses a question which he says was posed to him by A. C. Pigou: given that commodity taxes are distortionary, what is the best way of raising revenues, i.e. what is the set of taxes to raise a given revenue which maximises utility. The answer is now commonly referred to as Ramsey taxes. …

Ramsey showed that efficient taxation required imposing a complete array of taxes – not just a single tax. A large number of small distortions, carefully constructed, is better than a single large distortion. And he showed precisely what these market interventions would look like. (He even explains that the optimal intervention might require subsidies – what he calls bounties – for some commodities.) …

In particular, when there are a set of commodities with fixed taxes (including commodities that cannot be taxed at all), he shows that there should be an equi‐proportionate reduction in the goods for which taxes can be freely set. In the case of linear and separable demand and supply curves (quadratic utility functions) and small taxes, he shows that optimal taxes are inversely related to the compensated elasticity of demand and supply. …  Ramsey, however, went beyond this into an exploration of third best economics. He asked, what happens if there are some commodities that cannot be taxed, or whose tax rates are fixed. He argues that the same result (on the equi‐proportionate reduction in consumption) holds for the set of goods that can be freely taxed. …

To boil this down a bit, there is a common intuition that the "best" commodity tax will be a tax at the same rate across most or all goods. Ramsey instead emphasizes that if the goal of a tax is to collect money while distorting behavior as little as possible, then you need to think about demand and supply for each commodity and how they will be affected by a tax. This way of thinking about "optimal taxation" has turned out to have very broad applicability.
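In the special case Stiglitz mentions (independent, roughly linear demands and small taxes), the result is often summarized by the textbook "inverse elasticity rule," stated here in modern notation rather than Ramsey's own:

$$\frac{t_i}{p_i} \;\propto\; \frac{1}{\varepsilon_i^{D}} + \frac{1}{\varepsilon_i^{S}},$$

where $t_i$ is the tax on good $i$, $p_i$ its price, and $\varepsilon_i^{D}$ and $\varepsilon_i^{S}$ are the compensated elasticities of demand and supply. Goods whose quantities respond little to price should carry relatively higher tax rates, because taxing them distorts behavior less.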

Ramsey's basic model was not looking at issues of inequality, but his basic framework can readily be adapted to do so. Stiglitz describes how "at the centre of modern optimal tax theory and the work growing out of Ramsey lies a balancing of distributional and efficiency concerns." Nor was Ramsey's model looking at market problems like pollution externalities, which his adviser A.C. Pigou was already discussing at that time, but the idea of thinking about how taxes on goods can be adapted to address externalities flows naturally from Ramsey's framework. If there is a concern that taxes on labor might encourage some people to shift away from taxed labor to untaxed leisure, one can build on Ramsey's approach to advocate taxing goods that are associated with leisure. It turns out that when a government is thinking about how to regulate the prices charged by a public utility, Ramsey taxes become an important part of thinking about how to balance costs and benefits.

Orazio P. Attanasio described the influence of Ramsey's 1928 paper in "Frank Ramsey's A Mathematical Theory of Saving." He writes:

In 1928, Frank Ramsey, a British mathematician and philosopher, at the time aged only 25, published an article (Ramsey, 1928) whose content was utterly innovative and sowed the seeds of many subsequent developments.  … The article sets out to answer an interesting and important question: ‘how much of its income should a nation save?’.

The basic tradeoff here is that more consumption in the present leads to less saving and investment in future growth. Over long periods of time, or successive generations, one wants to think about a rate of saving that makes sense from the standpoint of each generation. Moreover, Ramsey brings into the picture issues like technical progress, population growth, capital wearing out or being destroyed, and so on. He discusses what we now call an "overlapping generations" model, where even if individuals only care about their own lifetimes, the overlap of successive generations keeps propelling us forward with concerns about the future. As Attanasio points out, a number of later prominent ideas in economics are a reworking and extension of ideas from Ramsey's 1928 article. Here are some examples he mentions:
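For readers who want the skeleton of the model, here is the standard modern restatement of the planner's problem that descends from Ramsey's article, in later textbook notation (and with discounting at rate $\rho$, which Ramsey himself famously resisted):

$$\max_{c(t)} \int_0^\infty e^{-\rho t}\, u\big(c(t)\big)\, dt
\quad \text{subject to} \quad \dot{k}(t) = f\big(k(t)\big) - c(t) - \delta k(t),$$

whose solution satisfies the Keynes-Ramsey condition

$$\frac{\dot{c}}{c} = \frac{1}{\sigma(c)}\,\big(f'(k) - \delta - \rho\big),
\qquad \sigma(c) = -\frac{u''(c)\,c}{u'(c)},$$

so consumption grows along the optimal path only when the net return on capital exceeds the rate of time preference.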

The most obvious anticipation in the article is its central theme and result: the optimal growth model, as formulated by Ramsey, is very similar to what has become a basic workhorse of modern macroeconomics. In a 1998 interview Cass (1998) recounts that he read Ramsey's paper after writing the first chapter of his PhD dissertation in 1963, which eventually became the Review of Economic Studies article (Cass, 1965). Talking about his celebrated 1965 article Cass (1998) says: ‘In fact I always have been kind of embarrassed because that paper is always cited although now I think of it as an exercise, almost re‐creating and going a little beyond the Ramsey model’ (p. 534). …

When considering the optimal saving problem, Ramsey uses as a first building block an intertemporal consumption problem which essentially defines the permanent income model. … These intuitions and this way of modelling were written 30 years before the publication of Friedman's (1957) book and Modigliani and Brumberg's (1954) seminal paper on the life cycle model of consumption.

Analogously, the brief description of an economy populated by individuals with ‘different birthdays’ and how their individual savings aggregate into the supply of capital is essentially a description of the overlapping generations model which Samuelson (1958) developed 30 years later and which was subsequently enriched and studied by Diamond (1965).

Ramsey developed an abdominal infection, underwent surgery, but died in the hospital. He was an avid swimmer, and one possibility is that he picked up a liver infection from swimming in the river. His early death is one of the biggest intellectual what-if stories of the twentieth century. 

Are Firms Too Risk-Averse?

There's a plausible argument that, from the point of view of investors, firms are too risk-averse. After all, an investor can diversify across lots of firms. If some firms do well and some go broke, the overall return on the investment portfolio can be just fine. But from the standpoint of the managers running a company, the picture looks rather different. They want to protect their own jobs, and the jobs of people working for them. For them, a risky strategy that might have a big upside, but also a substantial possibility of failure, will not be to their personal taste. Indeed, top managers may fear that negative news from even a single investment project that did poorly could be used as a reason for new management to take over. From the point of view of top managers, focusing on low-risk activities like minor tweaks to existing products, me-too versions of products from competitors, and cost-cutting may look personally more attractive than trying to develop and launch a new product. If managers of many companies follow this logic, the level of risk-taking and innovation in the economy as a whole will be reduced.

Dan Lovallo, Tim Koller, Robert Uhlaner, and Daniel Kahneman present some evidence and arguments on this point in "Your Company Is Too Risk-Averse" (Harvard Business Review, March-April 2020). Some of the evidence is from surveys of managers. They write:

In a 2012 McKinsey global survey, for example, two of us (Koller and Lovallo) presented the following scenario to 1,500 managers: You are considering a $100 million investment that has some chance of returning, in present value, $400 million over three years. It also has some chance of losing the entire investment in the first year. What is the highest chance of loss you would tolerate and still proceed with the investment? A risk-neutral manager would be willing to accept a 75% chance of loss and a 25% chance of gain; one-quarter of $400 million is $100 million, which is the initial investment, so a 25% chance of gain creates a risk-neutral value of zero. Most of the surveyed managers, however, demonstrated extreme loss aversion. They were willing to accept only an 18% chance of loss, much lower than the risk-neutral answer of 75%. In fact, only 9% of them were willing to accept a 40% or greater chance of loss. What’s more, the size of the investment made little difference to the degree of loss aversion. When the initial investment amount was lowered to $10 million, with a possible gain of $40 million, the managers were just as cautious … 
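The risk-neutral breakeven in that survey question is just an expected-value calculation. Writing $p$ for the probability the project pays off, with amounts in millions of dollars:

$$\mathbb{E}[\text{net payoff}] = 400\,p - 100 = 0 \quad\Longrightarrow\quad p = 0.25,$$

so a risk-neutral manager is indifferent at a 25 percent chance of gain, which is the same as a 75 percent chance of losing the entire $100 million.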

Their argument focuses in particular on middle- and lower-level managers. After all, the CEO may be evaluated based on overall corporate performance, with a mixture of successes and failures. But managers farther down the food chain may have oversight of only one or two main projects, and if their next promotion or bonus is going to be based on the success or failure of this one project, they have a strong incentive to play it (reasonably) safe. The authors offer anecdotal evidence that the "risk aversion tax" from this behavior may be as high as one-third. They write:

So how much money is left on the table owing to risk aversion in managers? Let’s assume that the right level of risk for a company is the CEO’s risk preference. The difference in value between the choices the CEO would favor and those that managers actually make is a hidden tax on the company; we call it the risk aversion tax, or RAT. Companies can easily estimate their RAT by conducting a survey, like Thaler’s, of the risk tolerance of the CEO and of managers at various levels and units.

For one high-performing company we worked with, we assessed all investments made in a given year and calculated that its RAT was 32%. Let that sink in for a moment. This company could have improved its performance by nearly a third simply by eliminating its own, self-imposed RAT. It did not need to develop exciting new opportunities, sell a division, or shake up management; it needed only to make investment decisions in accordance with the CEO’s risk tolerance rather than that of junior managers.

Their solutions involve making risk explicit. In many companies, it can be hard to get a new project approved if you start talking about the full range of risks involved. It's a lot easier to set an expectation for what "success" might be, and to try to get that bar set relatively low, so that "success" is more likely to happen. But companies that are more up-front about the range of likely outcomes–from the probabilities of a negative return to the probabilities of large gains–are more likely to see benefits from taking additional risks. And it helps if middle- and lower-level managers are evaluated only on what they can personally control: for example, a lower-level manager should get credit within the organization if a new project is carried through on time and on plan (the factors they can control), even if it ends up not making any money (the factors they can't control).

The overall goal, as the authors write, is that "companies need to switch from processes predicated on managing outcomes to those that encourage a rational calculation of the probabilities."

Some Economics of the 1968 US Riots

"The Kerner report was the final report of a commission appointed by the U.S. President Lyndon B. Johnson on July 28, 1967, as a response to preceding and ongoing racial riots across many urban cities, including Los Angeles, Chicago, Detroit, and Newark. These riots largely took place in African American neighborhoods, then commonly called ghettos. On February 29, 1968, seven months after the commission was formed, it issued its final report. The report was an instant success, selling more than two million copies. … The Kerner report documents 164 civil disorders that occurred in 128 cities across the forty-eight continental states and the District of Columbia in 1967 (1968, 65). Other reports indicate a total of 957 riots in 133 cities from 1963 until 1968, a particular explosion of violence following the assassination of King in April 1968 (Olzak 2015)."

The September 2018 issue of the Russell Sage Foundation Journal of the Social Sciences includes a 10-paper symposium from a range of social scientists concerning "The Fiftieth Anniversary of the Kerner Commission Report." The introductory essay by Susan T. Gooden and Samuel L. Myers Jr., "The Kerner Commission Report Fifty Years Later: Revisiting the American Dream" (pp. 1–17), does an excellent job of setting the historical context and describing contemporary reactions to the report, along with offering some comparisons, which I at least had not seen before, of differences between rioting and non-rioting cities over time.

[This post is republished from my earlier post of September 6, 2018, when this issue came out, with weblinks refreshed and a touch of editing.]

The opening paragraph above is quoted from the Gooden/Myers paper. As they point out, perhaps the most commonly repeated comment from the report was that it baldly named white racism as an underlying cause of the problems. As one example, to quote from the Kerner report: “What white Americans have never fully understood—but what the Negro can never forget—is that white society is deeply implicated in the ghetto. White institutions created it, white institutions maintain it, and white society condones it.”

Although the report was widely disseminated, it was not popular. As Gooden and Myers report:

"President Johnson was enormously displeased with the report, which in his view grossly ignored his Great Society efforts. The report also received considerable backlash from many whites and conservatives for its identification of attitudes and racism of whites as a cause of the riots. 'So Johnson ignored the report. He refused to formally receive the publication in front of reporters. He didn’t talk about the Kerner Commission report when asked by the media,' and he refused to sign thank-you letters for the commissioners (Zelizer 2016, xxxii–xxxiii)."

Other contemporary critics of the report complained that by emphasizing white racism, the report seemed to imply that changes in the beliefs of whites should be the main topic, while not paying attention to institutions and behaviors. Gooden and Myers cite a pungent comment from the American political scientist Michael Parenti, who wrote back in 1970:

"The Kerner Report demands no changes in the way power and wealth are distributed among the classes; it never gets beyond its indictment of “white racism” to specify the forces in the political economy which brought the black man to riot; it treats the obviously abominable ghetto living conditions as “cause” of disturbance but never really inquires into the causes of the “causes,” viz., the ruthless enclosure of Southern sharecroppers by big corporate farming interests, the subsequent mistreatment of the black migrant by Northern rent-gouging landlords, price-gouging merchants, urban “redevelopers,” discriminating employers, insufficient schools, hospitals and welfare, brutal police, hostile political machines and state legislators, and finally the whole system of values, material interests and public power distributions from the state to the federal Capitols which gives greater priority to “haves” than to “have-nots,” servicing and subsidizing the bloated interests of private corporations while neglecting the often desperate needs of the municipalities. . . . To treat the symptoms of social dislocation (e.g., slum conditions) as the causes of social ills is an inversion not peculiar to the Kerner Report. Unable or unwilling to pursue the implications of our own data, we tend to see the effects of a problem as the problem itself. The victims, rather than the victimizers, are defined as “the poverty problem.” It is a little like blaming the corpse for the murder."

Gooden and Myers point to another issue with the report, one that social scientists immediately pointed out. The members of the Kerner Commission made personal visits to cities that had experienced rioting, and made an effort to talk with people in the affected communities. But they made essentially no effort to visit cities that had not experienced riots. It's hard to draw inferences about the causes of riots without making some effort to look at what differs across rioting and non-rioting cities.

They offer a preliminary look at some of the economic differences across rioting and non-rioting cities. For example, this figure shows the black-white ratio of family incomes in rioting (blue) and nonrioting (orange) cities. The ratio hasn't moved much in the cities that had 1960s riots, while it increased substantially in the cities without riots. Indeed, the cities that did not riot have had slightly more equal black-white income ratios for most of the last few decades.
These sorts of patterns are open to a range of interpretations. Perhaps cities were less likely to riot in the late 1960s if more immediate progress in black-white incomes was happening. Perhaps something about having a higher black-white income ratio at the start made rioting more likely. Perhaps rioting led to an outmigration of middle- and upper-class families of both races, which could contribute to a stagnation of the black-white ratio. The cities that rioted were mainly in the Northeast, Midwest, and West, and so political, social, and economic differences across the geography of the US surely also played a role.
In other measures like the black-white ratios of unemployment rates, high school graduation rates, and poverty rates, the rioting and non-rioting cities look very similar. As Gooden and Myers write: 

"This evidence points to a possible flaw in the Kerner Commission’s report. Although the evidence clearly points to a divided America—a divide that continues today—the trajectories of the riot cities and the nonriot cities are remarkably similar. Thus, it is a bit more difficult to embrace the conclusion that this racial divide was the cause of the riots given that the racial divide was evident in both riot cities and nonriot cities and perhaps was even more pronounced in the nonriot cities than in the riot cities before the riots."

For an earlier take on the Kerner Commission report, see "Black/White Disparities: 50 Years After the Kerner Commission" (February 27, 2018).

Sabotaging the Competition: A Home Construction Example

Why are monopolies bad? In a standard intro-econ textbook, the problem of monopolies is that because of the lack of competition, they can reduce output from what it would otherwise be, jack up prices, and thus earn higher profits. Some books also mention that monopolies may have less incentive for quality or innovation–again, because of a lack of competition.
James A. Schmitz, Jr. at the Federal Reserve Bank of Minneapolis refers to this standard intro-econ model as a "toothless" monopoly, because in that model, all the monopoly firm can do is raise prices. He argues that it doesn't capture what bothers most people about monopoly. There is also a concern that monopolies take actions to sabotage and even destroy their rivals–especially the rivals who might provide low-cost competition. Moreover, monopolies may form concentrations of power with other monopolies or with political allies to accomplish this goal, and in this way corrupt institutions of law and politics as well.
Schmitz is in the middle of a substantial research project that encompasses both the intellectual history of these two views of monopoly and a set of concrete examples. As a work-in-progress, he has posted "Monopolies Inflict Great Harm on Low- and Middle-Income Americans" (Federal Reserve Bank of Minneapolis, Staff Report No. 601, May 2020), which is nearly 400 pages long but described as the "first essay" in a collection of essays to be produced in the next year or two. It can usefully be read as a preliminary overview of an ongoing research project.
However, it's worth noting that Schmitz doesn't focus on the conventional everyday meaning of "monopoly"–that is, a super-big company that dominates sales within its market. Instead, he is referring to "monopoly power" in a broader sense, covering cases where a group (not just a single large firm) acts to restrict competition. Thus, his main examples are ones where existing producers have exerted political power to sabotage lower-cost competitors, including residential construction, credit cards, legal services, repair services, dentistry, hearing aids, eye care, and others.
Here, I'll just focus on sketching his discussion of residential construction. Schmitz writes: "The most extensively used technology, by far, is often called the stick-built technology because sticks (two by fours) visually dominate the construction sites. This technology has been used for centuries. Homes are built outside, with a highly labor-intensive technology. It also requires highly-skilled-labor. The other technology is factory-production of homes. This technology substitutes capital for labor and also semi-skilled workers for highly skilled workers."
There has been a battle going back about a century between stick-built technology and factory technology for residential construction. Schmitz traces the early legal conflicts back to the late 1910s. Here's a summary comment from Thurman Arnold, who was Assistant Attorney General for Antitrust in the late 1930s and early 1940s, in a 1947 article.

When Arnold left the DOJ, he did not stop challenging monopolies in traditional construction. He did not stop trying to protect producers of factory-built homes. In “Why We Have a Housing Mess,” Arnold (1947) began with a picture of a homeless Pacific War veteran, with his wife and five children, sitting on the street with their belongings (see Figure 2). The caption said: “This Pacific War veteran and his family are homeless because we have let rackets, chiseling and labor feather-bedding block the production of low-cost houses.” Arnold began his text this way: “Why can’t we have houses like Fords [i.e., automobiles]? For a long time, we have been hearing about mass production of marvelously efficient postwar dream houses, all manufactured in one place and distributed like Fords. Yet nothing is happening. The low-cost mass production house has bogged down. Why? The answer is this: When Henry Ford went into the automobile business, he had only one organization to fight [an organization with a patent] . . . But when a Henry Ford of housing tries to get into the market with a dream house for the future, he doesn’t find just one organization blocking him. Lined up against him are a staggering series of restraints and private protective tariffs.”

Essentially, Arnold and others (including a substantial multi-author research project at the University of Chicago in the late 1940s) claimed that while no one explicitly passed rules to make factory-built housing illegal, building codes were carefully written in a way which had that effect.
One standard issue was that local building codes differed from place to place, which was fine for local stick-built construction firms, but posed a problem for a factory producer hoping to ship everywhere. Building codes also often drew a distinction between living in "trailers" and living in permanent structures, under which a "double-wide" home brought to the site in two parts was treated as a "trailer," even when it was installed permanently on-site and looked much the same as a stick-built home of similar size.
In the 1960s, economic pressure had gathered behind factory-built homes, which are typically much cheaper on a per-square-foot basis. But in the 1970s, regulators pushed back hard, with the newly-created US Department of Housing and Urban Development playing a big role. Here are some snippets from how Schmitz tells the story.
Many housing industry observers noted that stick-builders were facing such threats from factory-built home producers in the 1960s. Though they did not have direct measures of productivity, they compared the costs and prices of new, site-built homes to the costs and prices of other consumer durables. Alexander Pike (1967), an architect, compared the prices of new homes and the prices of new cars from the 1920s. Though he did not have productivity statistics, his point was clear: the productivity of construction badly lagged that of the car industry. At roughly the same time, the research department of Morgan Guaranty Trust Company (1969) wrote about this productivity divergence when discussing the potential for industrialized housing … in “Factory-Built Houses: Solution for the Shelter Shortage?” They noted the serious problems facing the stick-built industry as its productivity lagged. They showed that, over the period 1948-68, the prices of consumer durables rose roughly 22 percent, while residential construction costs rose roughly 100 percent.
Modular construction for single-family homes took off in the 1960s. Schmitz cites statistics that they "increased from roughly 100,000 units to 600,000 units" annually. "The share of factory production of single-family residential homes began growing in the mid 1950s, rising from about 10 percent of home production to nearly 60 percent of home production by the beginning of the 1970s (where total home production equals stick-built production plus factory production)."
But the stick-built industry, assisted by local and federal regulators, pushed back:

While the sabotage of factory housing has been going on for 100 years, there was a dramatic surge in the ferocity of this sabotage in the middle 1970s. During this period, laws were passed, and regulations implemented, that sent the factory-built housing industry into a tailspin. These regulations, and additional harmful ones introduced since the 1970s, remain on the books and mean the industry is a shell of its former self. When this new sabotage was unleashed in the middle 1970s, the producers of factory homes were well aware of it, of course. They fought the HUD and NAHB monopolies to reverse the sabotage but lost the fight. Today the members of the factory-built housing industry are unaware of this history.

As Schmitz documents, the pushback came in many forms, including regulations and subsidies. As one example: "Who knows how high the factory share would have risen if new sabotage of factory production would not have commenced in 1968. At that time, a national subsidy program was started for households buying stick-built homes (see below). Under these programs, households purchased 430,000 stick-built homes (per year) in the early 1970s." There have been court battles, and the "is it a trailer, is it a house" battle has been refought many times. For example, there is often a rule that a manufactured home must be built on a permanent and unremovable chassis–like a trailer–even though that's not what many customers would want.

For those with a taste for irony, there were also complaints from stick-built construction firms that manufactured housing was "unfair competition" because it could be built so much less expensively. Schmitz cites estimates from the US Census Bureau in 2007 that manufactured homes are one-third the price per square foot. One suspects that if manufactured housing were encouraged and allowed to flourish, the cost advantage from economies of scale would only increase.
The US economy is widely acknowledged to have a shortage of affordable housing. It has also, for a century, had monopolizing, competition-reducing forces that have favored more expensive stick-built housing and sabotaged the economic prospects of manufactured housing. As Schmitz points out, whatever defense one wishes to offer for these kinds of competition-restricting rules, the unavoidable fact is that the costs of these rules are carried by those at low and middle income levels, who would benefit most from lower prices.

For readers who are interested in antitrust discussions as they apply to the FAGA companies (Facebook, Amazon, Google, and Apple), here are a couple of earlier posts that offer a starting point.