In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”
If you don’t have a bank account, then you pay extra for many day-to-day financial transactions. Need a check cashed? Lots of non-bank places will do that–for a fee. Need a money order to pay a bill? Lots of non-bank places do that–for a fee. Need a loan? Payday lenders and rent-to-own stores and pawnshops are available–for a fee. Of course, banks have fees, too, but the unbanked typically pay a lot more for basic financial transactions. In addition, those who live in the cash economy often find it harder to save for an emergency, and are highly vulnerable to losing a substantial part of their assets if their cash is stolen or lost.
The FDIC does an annual survey in partnership with the U.S. Census Bureau to find out more about the “unbanked,” who lack any deposit account at a banking institution, and the “underbanked,” who have a bank account but also rely on providers of “alternative financial services” like payday loans, pawnshops, non-bank check cashing and money orders, and the like. The results of the 2011 survey have now been released in “2011 FDIC National Survey of Unbanked and Underbanked Households.” From the start of the report, here are some bullet points (footnotes omitted):
“• 8.2 percent of US households are unbanked. This represents 1 in 12 households in the nation, or nearly 10 million in total. Approximately 17 million adults live in unbanked households. …
• 20.1 percent of US households are underbanked. This represents one in five households, or 24 million households with 51 million adults. …
• 29.3 percent of households do not have a savings account, while about 10 percent do not have a checking account. About two-thirds of households have both checking and savings accounts.
• One-quarter of households have used at least one AFS product in the last year, and almost one in ten households have used two or more types of AFS products. In all, 12 percent of households used AFS products in the last 30 days, including four in ten unbanked and underbanked households.”
The survey provides considerable detail about the unbanked and the underbanked. For example, about 30% of the unbanked don’t use any of the “alternative financial services”–and thus are living in something close to a pure cash economy. Nearly half of unbanked households had a bank account at some point in the past, and nearly half report that they are likely to have a bank account in the future.
Some people will prefer to live in a non-bank world. I suspect that a substantial number of them are in the underground economy, staying under the government’s radar and avoiding taxes. About 5.5% of those in the survey report that they can’t open a bank account because of identification, credit, or banking history problems. But there are also a substantial number of the unbanked who have notions about bank accounts that are misleading or false: like a belief that they don’t have enough money to open a bank account or in some way wouldn’t “qualify” to open an account. Many of the unbanked also like the convenience and speed of dealing with nonbank firms that cash checks or give instant loans, and they are familiar with these firms in their neighborhoods.
But I fear that many of the unbanked dramatically underestimate the size of the fees that they pay for dealing with these alternative financial service providers, and have little notion of the programs at many banks that are designed to provide services to those who will tend to have low balances.
The Congressional Budget Office calculates “potential GDP,” which is the amount that the economy would produce at full employment. During a recession, actual economic output is below potential GDP; during an extreme economic boom, like the dot-com boom of the late 1990s, the economy can for a time have output greater than potential GDP. Here’s a graph showing potential GDP in blue and actual GDP in red, both in real dollars from 1960 through mid-2012, generated by the ever-useful FRED website of the Federal Reserve Bank of St. Louis.
The graph does usefully show the depth of the current recession and of earlier recessions, as well as how actual GDP climbed above potential GDP during the guns-and-butter period of the late 1960s, the dot-com boom of the late 1990s, and the housing boom of the mid-2000s. But you do have to squint a bit to make it all out! And your eye can be fooled in thinking about the depth of recessions, because the graph shows the gaps in absolute levels, not in percentage terms. Thus, when GDP was much lower back in the 1960s, the absolute gap may appear small even when the percentage gap was larger.
So here’s a graph based on the same data that shows the percentage amount by which actual GDP was above or below potential GDP in the years from 1960 through mid-2012.
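The conversion behind the second graph is simple arithmetic. Here is a minimal sketch, using made-up illustrative dollar figures rather than the actual FRED series, of why the same percentage gap looks smaller in absolute terms when the economy is smaller:

```python
def output_gap_percent(actual_gdp: float, potential_gdp: float) -> float:
    """Percent by which actual GDP is above (+) or below (-) potential GDP."""
    return 100.0 * (actual_gdp - potential_gdp) / potential_gdp

# Hypothetical numbers: a $100 billion shortfall on a small economy can be a
# smaller percentage gap than an $800 billion shortfall on a large one --
# the absolute gap alone tells you little about the depth of a recession.
gap_small_economy = output_gap_percent(actual_gdp=3_100, potential_gdp=3_200)
gap_large_economy = output_gap_percent(actual_gdp=14_400, potential_gdp=15_200)

print(round(gap_small_economy, 2))  # -3.12
print(round(gap_large_economy, 2))  # -5.26
```

Plotting this percentage gap over time, rather than the two levels, is what makes the relative depth of each recession directly comparable across decades.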
A few themes jump out from looking at the data in this way:
1) If the Great Recession is measured according to how far the economy had fallen below potential GDP, it is actually quite similar to the effects of the double-dip recession in the early 1980s.
2) If the Great Recession is measured by the size of the drop, relative to potential GDP, it is about 9 percentage points of GDP (from an actual GDP 1 percent above potential GDP to an actual GDP that is 8 percent below potential GDP). The total size of this drop isn’t all that different–although the timing is different–from the years around the double-dip recession of the 1980s, the years around the recession of 1973-75, and the recession of 1969-1970.
3) The recovery from the early 1980s recessions was V-shaped, while the recovery from the Great Recession has been more gradual. But this change isn’t new: the recoveries from all the recessions before the early 1980s were reasonably V-shaped, while the recoveries after the 1990-91 and 2001 recessions were more U-shaped as well.
4) The most red-hot time for the U.S. economy in this data, in the sense that the economy was running unsustainably ahead of potential GDP for a time, was what is sometimes called the “guns-and-butter” period of the late 1960s and early 1970s, when the federal government spent on both social programs and the military at the same time. In the dot-com boom, the economy was also well above potential GDP. The U.S. economy was also unsustainably above potential GDP during the housing boom around 2005-6, but it wasn’t as white-hot a period of economic boom as these others.
Why has the economic recovery been so sluggish? One set of possible explanations is rooted in the idea of economic uncertainty: in other words, the financial meltdowns of 2008 and 2009, major legislation affecting health care and the financial sector, ongoing disputes over budget and tax policy, and the shakiness of the euro area have all come together to create a situation where businesses are reluctant to invest and hire, and consumers are reluctant to spend.
Of course, the difficulty with explanations that evoke “uncertainty” is that, from an empirical point of view, measuring uncertainty can be like trying to nail jello to a wall: it’s messy, and when you’re finished, you can’t be confident that you’ve accomplished much. Sylvain Leduc and Zheng Liu describe their efforts to measure uncertainty and connect it with macroeconomic outcomes in “Uncertainty, Unemployment, and Inflation,” an “Economic Letter” written for the Federal Reserve Bank of San Francisco.
For starters, they use a two-part measure of uncertainty. One part is the “consumer confidence” survey that has been carried out since 1978 by the University of Michigan, now in partnership with Thomson/Reuters. They write: “Since 1978, the Michigan survey has polled respondents each month on whether they expect an “uncertain future” to affect their spending on durable goods, such as motor vehicles, over the coming year. Figure 1 plots the percentage of consumers who say they expect uncertainty to affect their spending.” The other ingredient is the VIX index, which measures the volatility of the Standard & Poor’s 500 stock market index: that is, it’s not just measuring whether the stock market is rising or falling, but rather measuring whether the jumps in either direction are relatively large or small. As they point out: “The VIX index is a standard gauge of uncertainty in the economics literature.” Here’s a graph showing these two measures of uncertainty, with time periods of recession shaded.
These two measures of uncertainty don\’t always move together. For example, in the late 1990s during the dot-com boom, consumer uncertainty looked low, but volatility in the stock market made the VIX index high. Conversely, in the aftermath of the 1990-91 recession, consumer uncertainty looked high, but uncertainty in the stock market as measured by the VIX index was toward the bottom of its range. However, during the Great Recession, both kinds of uncertainty spiked.
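The intuition behind a volatility measure like the VIX can be illustrated with a toy calculation. The sketch below is not the actual VIX methodology (which is derived from option prices), just the standard deviation of period-to-period returns: large swings in either direction raise the measure, even if the price ends up roughly where it started.

```python
import statistics

def return_volatility(prices: list[float]) -> float:
    """Standard deviation of period-to-period returns for a price series."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return statistics.pstdev(returns)

# Made-up price paths: both start at 100 and end at about the same level,
# but one wiggles gently while the other jumps around.
calm = [100, 101, 100, 101, 100, 101]
choppy = [100, 110, 95, 108, 94, 100]

print(return_volatility(calm) < return_volatility(choppy))  # True
```

The direction of the market drops out of the measure; only the size of the jumps matters, which is why a volatility index can spike in booms as well as busts.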
The authors work with data on these movements in uncertainty, comparing them with the actual macroeconomic data announced before and after the surveys, in an effort to figure out how much uncertainty by itself made the recession worse. They write:
“We calculate what would have happened to the unemployment rate if the economy had been buffeted by higher uncertainty alone, with no other disturbances. Our model estimates that uncertainty has pushed up the U.S. unemployment rate by between one and two percentage points since the start of the financial crisis in 2008. To put this in perspective, had there been no increase in uncertainty in the past four years, the unemployment rate would have been closer to 6% or 7% than to the 8% to 9% actually registered. While uncertainty tends to rise in recessions, it’s not the case that it always plays a major role in economic downturns. For instance, our statistical model suggests that uncertainty played essentially no role during the deep U.S. recession of 1981–82 and its following recovery. This is consistent with the view that monetary policy tightening played a more important role in that recession. By contrast, uncertainty may have deepened the recent recession and slowed the recovery because monetary policy has been constrained by the Fed’s inability to lower nominal interest rates below zero…”
Back in April, I described another attempt to measure economic uncertainty in “Is Policy Uncertainty Delaying the Recovery?” In that study, R. Baker, Nick Bloom, and Steven J. Davis created an index of economic uncertainty based on three different factors: newspaper articles that refer to economic uncertainty and the role of policy; the number of federal tax code provisions that are set to expire; and the extent of disagreement among economic forecasters. Their central finding: “The results for the United States suggest that restoring 2006 (pre-crisis) levels of policy uncertainty could increase industrial production by 4% and employment by 2.3 million jobs over about 18 months.” This is roughly the same magnitude of effect as Leduc and Liu find, although the two sets of researchers are using different data for measuring uncertainty and different approaches for connecting uncertainty to macroeconomic outcomes.
It’s hard to believe that economic uncertainty will drop a lot before Election Day 2012. But one way or another, it seems likely to decline after that.
Aaron Steelman conducted an “Interview” with John List that appears in the most recent issue of Region Focus from the Federal Reserve Bank of Richmond (Second/Third Quarter 2012, pp. 32-38). Full disclosure: John List is one of the co-editors of my own Journal of Economic Perspectives. But in his research, John is probably best-known for being a leader in the area of taking randomized controlled experiments out of the laboratory and moving them into the field.
Here’s an early example from List’s work:
“So let’s go through an example whereby I think I can convince you that I am in a natural environment and that I’m learning something of importance for economics. I first got interested in charitable fundraising in 1998 when a dean at the University of Central Florida asked me to raise money for a center at UCF. … Many charities have programs where they will match a donor’s gift. So your $100 gift means that the charity will get $200 after the match. Interestingly, however, when you go and ask those charities if matching works they say, “Of course it does, and a 2-to-1 match is much better than a 1-to-1 match, and a 3-to-1 match is better than either of them.” So I asked, “What is your empirical evidence for that?” They had none. Turns out that it was a gut feeling they had.
“I said, well, why don’t you do field experiments to learn about what works for charity? … So what we are going to do is partner with them in one of their mail solicitations. Say they send out 50,000 letters a month. We will then randomize those 50,000 letters that go directly to households into different treatments. One household might receive a letter that says, “Please give to our charity. Every dollar you give will be matched with $3 from us.” Another household might receive the exact same letter, but the only thing that changes is that we tell them that every dollar you give will be matched by $2. Another household receives a $1 match offer. And, finally, another household will receive a letter that doesn’t mention matching. So you fill these treatment cells with thousands of households that don’t know they’re part of an experiment. We’re using randomization to learn about whether the match works. That’s an example of a natural field experiment — completed in a natural environment and the task is commonplace.
“I didn’t learn that 3-to-1 works better than 2-to-1 or 1-to-1. Empirically, what happens is, the match in and of itself works really well. We raise about 20 percent more money when there is a match available. But, the 3-to-1, 2-to-1, and 1-to-1 matches work about the same.”
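The randomization step List describes is mechanically simple, and that simplicity is the point. Here is a minimal sketch, with hypothetical treatment names standing in for the actual mailing variants, of assigning 50,000 households to treatment cells:

```python
import random
from collections import Counter

# Hypothetical labels for the four letter variants in List's description.
TREATMENTS = ["no_match", "match_1to1", "match_2to1", "match_3to1"]

def assign_treatments(households: list[str], seed: int = 42) -> dict[str, str]:
    """Randomly assign each household to one treatment cell.

    Because assignment is random, unobserved household characteristics are
    balanced across cells in expectation, so later differences in giving
    can be attributed to the match offer itself.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return {h: rng.choice(TREATMENTS) for h in households}

households = [f"household_{i}" for i in range(50_000)]
cells = assign_treatments(households)

# Each cell ends up with roughly 12,500 households.
print(Counter(cells.values()))
```

Comparing average donations across the cells afterward is the entire inference step: no statistical model of who gives and why is needed, because randomization has already balanced everything else.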
How does List respond to the concern that we are unlikely to learn much of interest from these kinds of experiments, because the real world is just too messy for cause and effect to be discerned?
“So I come along, and I say we really need to use the tool of randomization, but we need to use it in the field. Here’s where the skepticism arose using that approach: People would say, “You can’t do that, because the world is really, really messy, and there are a lot of things that you don’t observe or control. When you go to the marketplace, there are a lot of reasons why people are behaving in the manner in which they behave. So there’s no way — you don’t have the control — to run an experiment in that environment and learn something useful. The best you can do is to just observe and take from that observation something of potential interest.
“That reasoning stems from the natural sciences. Consider the example with the chemist: If she has dirty test tubes her data are flawed. The rub is that chemists do not use randomization to measure treatment effects. When you do, you can balance the unobservables — the “dirt” — and make clean inference. As such, I think that economists’ reasoning on field experiments has been flawed for decades, and I believe it is an important reason why people have not used field experiments until the last 10 or 15 years. They have believed that because the world is really messy, you can’t have control in the same way that a chemist has control or a biologist might have control. …
“When I look at the real world, I want it to be messy. I want there to be many, many variables that we don’t observe and I want those variables to frustrate inference. The reason why the field experiments are so valuable is because you randomize people into treatment and control, and those unobservable variables are then balanced. I’m not getting rid of the unobservables — you can never get rid of unobservables — but I can balance them across treatment and control cells. Experimentation should be used in environments that are messy; and I think the profession has had it exactly backwards for decades. They have always thought if the test tube is not clean, then you can’t experiment. That’s exactly wrong. When the test tube is dirty, it means that it’s harder to make proper causal inference by using our typical empirical approaches that model mounds and mounds of data.”
And here’s List arguing that many institutions, including education, should be continually involved in new natural field experiments, so that we can do a better job of figuring out what actually works.
“I think in many ways, it’s harder to overturn entrenched thinking in parts of the nonprofit, corporate, and public sectors, where many things are not subject to empirical testing. For instance, why don’t we know what works in education? It’s because we have not used field experiments across school districts. Each school district should be engaged in several experiments a year, and then in the end the federal government can say, “Here’s what works. Here’s a new law.” It’s unfair to future generations to pass along zero information on what policies can curb criminal activities, what policies can curb teen pregnancy, what are the best ways to overcome the racial achievement gap, why there aren’t more women in the top echelon of corporations. We don’t know because we don’t understand, we haven’t engaged in feedback-maximization. There needs to be a transformation, and I don’t know what it’s going to take. I mean, are we going to be sitting here in 50 years and thinking, “If we only knew what worked to help close the achievement gap, if we only knew how to do that”?
“I hope my work in education induces a sea change in the way we think about how to construct curricula. Right now, we are doing a lot of work on a prekindergarten program in Chicago Heights and in a year or two I think that we will be able to tell policymakers what will help kids — and how much it will help them. But unless people adopt the field experimental approach more broadly, it will be a career that’s not fulfilled in my eyes.”
Last week the Bureau of the Census released its annual report that estimates the poverty rate in the previous year, and I posted about “What the Official Poverty Rate is Missing,” with my discussion focusing on various government anti-poverty programs that don’t show up in the measure of income and the problems involved in measuring poverty by income rather than by using consumption. The same day, the Brookings Institution held a conference for various well-informed folks to react to the Census report, like Ron Haskins, Richard Burkhauser, Gary Burtless, Isabel Sawhill, Kay Hymowitz, and Wendell Primus. Many of them took an approach broadly similar to my own–that is, slicing and dicing the numbers to figure out the trends and patterns and strengths and weaknesses of the data.
But I was taken by the comments of Ralph Smith, senior vice-president of the Annie E. Casey Foundation, who focused on what the poverty numbers mean in a more human sense for the prospects of children. The quotation is taken from the uncorrected transcript of the event, which is posted here. Smith said:
“There’s an antiseptic quality about the charts and graphs and the PowerPoint that feels to me as if it misses the issue and misses the reality of the lives of the people and the families about whom we speak. … I just can’t get to the point where I’m so captured by the data that I miss what these numbers mean for the lives and futures of the families about whom we speak, about the material conditions in which they live, about the aspirations they could hold onto for their kids and for the next generation. “And I will confess a discomfort as I think about the one million children who despite these not-quite-so-bad numbers will be born into poverty next year. One million new entrants into poverty, and what we can predict now. And what we can certify on the day they are born is that more than 50 percent of them will spend half their childhoods in poverty. Twenty-nine percent of them will live in high poverty communities. Ten percent of them will be born low birth weight, a key indicator of cognitive delays and problems in school. Only 60 percent of them will have access to health care that meets the criteria for having a medical home. By age three, fewer than 75 percent of them will be in good or excellent health, and they’ll be three times more likely than their more affluent peers to have
elevated blood lead levels. “More than 50 percent of them will not be enrolled in pre-school programs and by the time they enter kindergarten, most of them will test 12 to 14 months below the national norms in language and pre-reading skills. Nearly 50 percent of them will start first grade already two years behind their peers. During the early grades, these children are more likely to miss more than 20 days of school every year starting with kindergarten, and that record of chronic absence will be three times that of their peers. When tested in fourth grade, 80 percent of these children will score below proficient in reading and math. We know now that 22 percent of them will not graduate from high school, and that number rises to 32 percent for those who spend more than half of their childhood in poverty. And to no one’s surprise, these sad statistics and deplorable data get even worse for children of color and children who live in communities of concentrated poverty. …
“[T]his report brings bad news about a predictably bleak future in this the land of opportunity. … We don’t spend as much as we need to, but until we do better with what we have, we’re not going to make the case for what we need. And we don’t care as much as we say we do because some kids matter more than others and some kids matter not at all. And I think these million kids are the kids who might matter not at all. And so when I see the numbers, I must admit that I flinch and I think they ought to as well because for these children, the numbers that matter most to their futures and to ours are one, the income of their parents, and two, the zip code of their homes. …
“The view that I agree with most is the one that recognizes that persistent poverty is the challenge of our time. Like the world wars, the Great Depression, civil rights, persistent poverty is worthy of an engaged national as well as federal government. … Imagine that in 2015 candidates as they stumped in Iowa and New Hampshire and North Carolina had to confront the issue of persistent poverty and had to talk about it. And imagine that in 2016 there was a debate where a reporter would even ask a question about it, but where candidates would feel compelled to articulate their position. Most of us in this room, as good and as smart as we are, we cannot imagine that happening.”
I’ll just add that it has been remarkable to me during the last few years of sustained high unemployment and families under stress, how much our national political discussion has focused on the merits of different tax levels for those with high incomes, and how little our national political discussion has focused in any concrete way on how to assist the poor, and in particular on how to alter the trajectory of life for children living in poverty.
First, just as a matter of getting the facts straight, CEO pay relative to household income did spike back in the go-go days of the dot-com boom in the late 1990s, but it has come down since then. Kaplan argues that there are two valid ways to measure executive pay. One measure looks at actual pay received, which he argues is useful for seeing whether top executives are paid for performance. The other measure looks at “estimated” pay, which is the amount that financial pay packages would have been expected to be worth at the time they were granted. This calculation requires putting a value on stock options, restricted stock grants, and the like, and estimating what these were worth at the time the pay package was given. Kaplan argues that this measure is the appropriate one for looking at what corporate boards meant to do when they agreed on a compensation package.
Here’s one figure showing actual average and median pay totals for S&P 500 CEOs from 1993 to 2010. Average pay is above median pay, which tells you that there are some very high-paid execs at the top pulling up the average. Also, average CEO pay spikes when the stock market is high, as in 2000 and around 2006 and 2007. Median realized pay seems to have crept up over time. Here’s a figure showing estimated pay–that is, the value of the pay packages when they were granted. But this time, instead of showing dollar amounts, this graph shows average and median CEO pay as a multiple of median household income. Average pay again spikes at the time of the dot-com boom. Kaplan emphasizes that estimated CEO pay is on average lower than in 2000 and that the median hasn’t risen much. My eye is drawn to the fact that median pay for CEOs goes from something like 60 times median household income back in 1993 to about 170 times median household income by 2010.
An obvious question is whether these pay levels are distinctive for CEOs, or whether they are just one manifestation of widening income inequality across a range of highly-paid occupations. Kaplan makes a solid case that it is the latter. For example, here’s a graph showing the average pay of the top 0.1% of the income distribution compared with the average pay of a large company CEO. Again, the story is that CEO pay really did spike in the 1990s, but by this measure, CEO pay relative to the top 0.1% is now back to the levels common in the 1950s.
Kaplan also points out that the pay of those at the top of other highly-paid occupations has grown dramatically as well, like lawyers, athletes, and hedge fund managers. Here’s a figure showing the pay of top hedge fund managers relative to that of CEOs in the last decade. Kaplan writes: “The top 25 hedge fund managers as a group regularly earn more than all 500 CEOs in the S&P 500. In other words, while public company CEOs are highly paid, other groups with similar backgrounds and talents have done at least equally well over the last fifteen years to twenty years. If one uses evidence of higher CEO pay as evidence of managerial power or capture, one must also explain why the other professional groups have had a similar or even higher growth in pay. A more natural interpretation is that the market for talent has driven a meaningful portion of the increase in pay at the top.”
Kaplan also compiles evidence that CEOs of companies with better stock market performance tend to be paid more than those with poor stock market performance, and that CEOs have shorter job tenures. He writes: “Turnover levels since 1998 have been higher than in work that has studied previous periods. In any given year, one out of 6 Fortune 500 CEOs lose their jobs. This compares to one out of 10 in the 1970s. CEOs can expect to be CEOs for less time than in the past. If these declines in expected CEO tenures since 1998 are factored in, the effective decline in CEO pay since then is larger than reported above. … And the CEO turnover is related to poor firm stock performance …”
To me, Kaplan makes a couple of especially persuasive points: the run-up in CEO salaries was especially extreme during the 1990s, and less so since then (depending on how you measure it); and the run-up in CEO salaries reflects the rise in inequality across a wider swath of professions. While I believe the arguments that job tenure can be shorter for the modern CEO, especially if a company isn’t performing well, it seems to me that most former CEOs don’t plummet too many percentiles down the income distribution in their next job, so my sympathy for them is rather limited on that point.
In this paper, Kaplan doesn’t seek to address the deeper question of why the pay for those at the very top, CEOs included, has risen so dramatically. While the demand for skills at the very top of the income distribution is surely part of the answer, I find it hard to believe that these rewards for skill increased so sharply in the 1990s–just coincidentally during a stock market boom. It seems likely to me that cozy institutional arrangements for many of those at the very top of the income distribution–CEOs, hedge fund managers, lawyers, and athletes and entertainers–also play an important role.
Yesterday the U.S. Bureau of the Census released its annual report on “Income, Poverty, and Health Insurance Coverage in the United States: 2011,” this year written by Carmen DeNavas-Walt, Bernadette D. Proctor, and Jessica C. Smith. One finding is that the official U.S. poverty rate barely budged from 2010 to 2011, which if not positive news, is at least non-negative news. Here’s a figure showing the number of people in poverty and the poverty rate since 1959:
One set of problems is clear: some of the largest government programs to help those in poverty have zero effect on the officially measured poverty rate. For example, food stamps are technically a noncash benefit (even if in many ways they are similar to receiving cash), so they are not counted in the definition of income used for calculating the poverty rate. Because the Earned Income Tax Credit operates through the tax system, it is not covered in the definition of “money income before taxes” used to measure poverty. The same is true of the child credit given through the tax code. Medicaid assistance for those with low incomes is not cash assistance, so it doesn’t reduce the measured poverty rate, either.
The fact that many anti-poverty programs have no effect on officially measured poverty is no secret. The Census report itself carefully notes: “The poverty estimates in this report compare the official poverty thresholds to money income before taxes, not including the value of noncash benefits. The money income measure does not completely capture the economic well-being of individuals and families, and there are many questions about the adequacy of the official poverty thresholds. Families and individuals also derive economic well-being from noncash benefits, such as food and housing subsidies, and their disposable income is determined by both taxes paid and tax credits received.” As an example, the report points out that if EITC benefits were included as income, the number of children in poverty would fall by 3 million.
But there is a deeper issue with the official poverty rate, which is that it is measured on the basis of income, not on the basis of consumption. In a given year, a household’s level of income and its level of consumption don’t always match up. It’s easy to imagine, especially in the last few years of sustained high unemployment, that some households had low income in a given year, but were able to draw on past savings, or perhaps to borrow based on credit cards or home equity. It’s possible that measured by income, such households appear to be in poverty, but if measured by consumption (and especially if they own their own homes), they would not appear quite as badly off.
Meyer and Sullivan look at how those classified as “poor” would be different if using a poverty rate based on consumption, vs. a poverty rate based on income. They emphasize that the poverty rate can be the same whether it is based on consumption or on income: it’s just a matter of where the official poverty line of consumption or income is set. Thus, the poverty rate is not automatically higher or lower because it is based on income or consumption. (For those who care about these details, the official poverty measure looks at income as measured by the Current Population Survey, while Meyer and Sullivan look at consumption as measured by the Consumer Expenditure Survey.)
Meyer and Sullivan offer a fascinating comparison: they look at 25 characteristics that seem intuitively related to household well-being: total consumption; total assets; whether the household has health insurance; whether it owns a home or a car; how many rooms, bedrooms, and bathrooms are in the living space; whether the living space has a dishwasher, air conditioner, microwave, washer, dryer, television, or computer; whether the head of household is a college graduate; and others. It turns out that if one looks at poverty by income and by consumption, with the poverty rates set to be equal in both categories, 84% of the people are included under either definition. But those who are "poor" by the consumption definition of poverty are worse off in 21 of the 25 categories of household well-being.
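The mechanics of equalizing the two poverty rates can be sketched with a toy simulation. This is a minimal illustration with synthetic data, not Meyer and Sullivan's actual CPS/CE figures; the income and consumption distributions, the 15% target rate, and the noise term are all hypothetical:

```python
import random

random.seed(0)

# Synthetic households: consumption tracks income but is smoothed by
# savings, borrowing, and other resources (hypothetical parameters).
n = 10_000
households = []
for _ in range(n):
    income = random.lognormvariate(10.5, 0.8)
    consumption = 0.7 * income + random.lognormvariate(9.3, 0.6)
    households.append((income, consumption))

# Set each poverty line so both definitions classify the SAME share
# of households as poor -- here, the bottom 15% by each measure.
rate = 0.15
incomes = sorted(h[0] for h in households)
consumptions = sorted(h[1] for h in households)
income_line = incomes[int(rate * n)]
consumption_line = consumptions[int(rate * n)]

income_poor = {i for i, h in enumerate(households) if h[0] < income_line}
cons_poor = {i for i, h in enumerate(households) if h[1] < consumption_line}

# Equal-sized groups by construction, but they only partially overlap --
# which is why the two definitions can pick out different people.
overlap = len(income_poor & cons_poor) / len(income_poor)
print(f"Overlap between income-poor and consumption-poor: {overlap:.0%}")
```

The point of the construction is that the interesting question is not the level of the poverty rate (which is pinned down by where the line is drawn) but which households end up below it.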
Why is this? In part, it's because the total value of consumption includes the funds received from Food Stamps, the Earned Income Tax Credit, and so on. Some of those who fall below the poverty line when these are not counted, as in the official income-before-taxes poverty measure, rise above the poverty line when they are included. In addition, consumption poverty better captures those who lack other resources to fall back on: those whose income is temporarily low enough to fall below the poverty line, but who have other ways to keep their consumption from falling as much, don't show up as falling below a consumption-based poverty line.
Setting a poverty line is a political decision, not a law of nature. Some decisions will always be second-guessed, and those who care about the details of what happens with different poverty lines can go to the Census Bureau website and construct alternative measures of the poverty rate based on different measures of income or different ways of defining poverty. But that said, it seems downright peculiar to have an official income-based measure of poverty that isn't affected at all by several of the largest anti-poverty programs. And it seems peculiar to base our official measure of poverty on income, when the fundamental concept of poverty is really about having a shortfall of consumption.
I know that after the last few years of arguments over the Patient Protection and Affordable Care Act of 2010, many of us are burned out with arguments over the U.S. health care system. We all know that the U.S. system has some of the finest and most innovative care in the world, along with extremely high costs and tens of millions of people without health insurance. But still, most people don't realize just how screwed up the U.S. health care system has become.
For chapter and verse, I recommend the Institute of Medicine report, Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. The Institute of Medicine is an independent, nonprofit organization, "the health arm of the National Academy of Sciences." The report is chock-full of interesting insights and comments; all quotations here are from the prepublication, uncorrected proofs, which are available with free registration here. As is my custom, I'll delete citations and footnotes for readability. Here are a couple of palate cleansers, as the restaurateurs would say, before plunging into the main course:
"The tragic life of Dr. Ignaz Semmelweis offers an example of the challenges faced in building a truly learning health care system. The Hungarian physician observed that simply washing hands could drastically reduce high rates of maternal death during childbirth. But since he could not prove a connection between hand washing and the spread of infection, he was ridiculed and ignored. Hounded out of his profession, he died in a mental hospital. More than 165 years later, half of clinicians still do not regularly wash their hands before seeing patients."
"If banking were like health care, automated teller machine (ATM) transactions would take not seconds but perhaps days or longer as a result of unavailable or misplaced records.
If home building were like health care, carpenters, electricians, and plumbers each would work with different blueprints, with very little coordination.
If shopping were like health care, product prices would not be posted, and the price charged would vary widely within the same store, depending on the source of payment.
If automobile manufacturing were like health care, warranties for cars that require manufacturers to pay for defects would not exist. As a result, few factories would seek to monitor and improve production line performance and product quality.
If airline travel were like health care, each pilot would be free to design his or her own preflight safety check, or not to perform one at all."
And what are some consequences of this system, which has focused so heavily on procedures for the direct delivery of care, but so little on developing a genuinely integrated system for figuring out how and when to use those procedures? One problem is some 75,000 deaths each year that should have been prevented.
"One way to measure this impact is through mortality amenable to health care, defined as the number of deaths that should not occur in the presence of timely and effective health care. Examples of amenable mortality include childhood infections, surgical complications, and diabetes. The level of amenable mortality varies almost threefold among states, ranging from 64 to 158 deaths per 100,000 population. If all states had provided care of the quality delivered by the highest-performing state, 75,000 fewer deaths would have occurred across the country in 2005."
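The arithmetic behind an estimate like this is straightforward to sketch. Here is a toy version with entirely hypothetical state populations and amenable-mortality rates (not the actual state-level data behind the report's 75,000 figure):

```python
# Deaths averted if every state matched the best-performing state's
# amenable-mortality rate. All state figures below are hypothetical.
best_rate = 64  # deaths per 100,000, best-performing state (from the report)

states = {  # state: (population, amenable deaths per 100,000) -- made up
    "A": (38_000_000, 90),
    "B": (25_000_000, 130),
    "C": (19_000_000, 110),
}

averted = sum(
    pop * (rate - best_rate) / 100_000
    for pop, rate in states.values()
)
print(f"Deaths averted if every state matched the best rate: {averted:,.0f}")
```

Summing that gap across all fifty states, with real populations and rates, is what produces the report's national estimate.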
Another problem is that perhaps a quarter or so of the $2.8 trillion that the U.S. will spend this year on health care is essentially excess costs. Here's the breakdown from the IOM report into unnecessary services, inefficiently delivered services, excess administrative costs, prices that are too high, missed prevention opportunities, and fraud. As the report points out, several studies with independent methodologies have found that the U.S. health care system could save something like $750 billion per year with no negative effect on health. Even saving half that amount, of course, would have extraordinary consequences for helping people see bigger raises in their paychecks and putting a huge dent in federal budget deficits.
How can the U.S. health care system simultaneously waste one-fourth or so of the king's ransom that it spends each year, while still allowing 75,000 preventable deaths each year? The long answer is in the IOM report. The shorter answer, I would say, is that health care is increasingly complex in a way that requires systematic attention to collecting information and coordinating care--and the fragmented and helter-skelter U.S. health care system has not been good at these tasks.
Here are some examples of these problems of growing complexity and lack of coordination, many more of which are scattered through the IOM report:
"The prevalence of chronic conditions, for example, has increased over time. In 2000, 125 million people suffered from such conditions; by 2020, that number is projected to grow to an estimated 157 million … Almost half of those over 65 receive treatment for at least one chronic disease, and more than 20 percent receive treatment for multiple chronic diseases; fully 75 million people in the United States have multiple chronic conditions. Managing these multiple conditions requires a holistic approach, as the use of various clinical practice guidelines developed for single diseases may have adverse effects. For example, existing clinical practice guidelines would suggest that a hypothetical 79-year-old woman with osteoporosis, osteoarthritis, type 2 diabetes, hypertension, and chronic obstructive pulmonary disease should take as many as 19 doses of medication per day. Such guidelines might also make conflicting recommendations for the woman's care."
"Care delivery also has become increasingly demanding. It would take an estimated 21 hours a day for individual primary care physicians to provide all of the care recommended to meet their patients' acute, preventive, and chronic disease management needs. Clinicians in intensive care units, who care for the sickest patients in a hospital, must manage in the range of 180 activities per patient per day—from replacing intravenous fluids, to administering drugs, to monitoring patients' vital signs. In addition, rising administrative burdens and inefficient workflows mean that hospital nurses spend only about 30 percent of their time in direct patient care."
"As illustrated in Figure 2-8, a medication order at one academic medical center can be filled in 786 different ways, involving a number of different health care professionals and technological channels. Another study found that inefficient medication administration practices at one hospital caused nurses to waste 50 minutes per shift looking for the keys to the narcotics cabinet …
"Another study found that in a single year in fee-for-service Medicare, the typical primary care physician had to coordinate with 229 other physicians in 117 different practices. Further, the rate at which physicians refer patients has doubled over the past decade, and the number of primary care visits resulting in a referral has increased by nearly 160 percent …"
"Projections are for 90 percent of office-based physicians to have access to fully operational electronic health records by 2019, up from 34 percent in 2011 …"
In short, the sharply rising complexity of what modern medicine can do, interacting with systems of health care provision and health care finance that were already growing outdated by the tail end of the 20th century, is costing enormous amounts and failing to deliver. I know full well the difficulties of collecting, organizing, and feeding back health information, and of reorganizing existing health care systems. But the financial and health costs of failing to push for such change are enormous. And frankly, both the 2010 health care legislation and the Republican alternatives offer little more than tinkering around the edges and hoping for the best, while the U.S. health care system runs amok.
The U.S. Census Bureau has published "Small Area Health Insurance Estimates (SAHIE): 2010 Highlights." The document itself is only eight pages long with a few tables and figures. But it links to a nice interactive tool that lets you look at those with or without health insurance broken down by income, sex, race, age, and state or county. You can either generate data tables or maps.
Here's a basic table showing the share of those below 138% of the poverty line who lack health insurance on a state-by-state basis. The table is striking because, after all, Medicaid provides public health insurance for many of those below the poverty line. But it's worth remembering, as I discuss in an earlier post on "Medicaid in Transition," that Medicaid is aimed at the "deserving" poor, which covers low-income families with children, along with the poor who are also disabled or elderly, but it doesn't automatically cover everyone below the poverty line. Even more, the vast majority of Medicaid's spending is not on low-income families of able-bodied adults with children, but instead on those with low incomes who are also blind, disabled, or elderly. Thus, this table shows how many of the poor and near-poor don't have health insurance.
The table is also striking to me because it demonstrates the large size of cross-state differences in health insurance coverage. Some states have made a commitment to providing or encouraging health insurance in one way or another: for example, Hawaii, Massachusetts, Maine, and Vermont all have less than 20% of their under-65 population below 138% of the poverty line uninsured. Other states have not pushed hard in this direction: Florida, Nevada, and Texas each have more than 40% of this specific population without health insurance.
Here's a map generated with the SAHIE interactive data tool, showing the percentage of those lacking health insurance for all income levels on a county-by-county basis. The map shows that areas with relatively low proportions of those without health insurance are largely concentrated in the central and northeastern portions of the United States.
As I watch the election season unfold, it's interesting to me that many of the states which seem likely to support Obama are states that already have a relatively low population of those without health insurance, while many of those states that seem likely to support Romney have a relatively high population of those without health insurance. To some extent, the argument over providing health insurance for the uninsured is actually an argument that states which have not already taken steps to reduce the number of uninsured should be prodded and subsidized to do so.
One of the most infamous statements of technological pessimism is often attributed to Charles Duell, head of the U.S. Patent Office back in 1898, who allegedly proposed closing up the patent office because "everything that can be invented has been invented."
But the quotation is apocryphal, as has been known for decades. Indeed, Duell gave a speech to Congress in 1899 about how America's economic future depended on innovation. However, there was apparently also an 1899 article in the comedy magazine Punch imagining a conversation where, at some point in the future, a genius asks about the patent office and a boy responds: "Everything that can be invented has been invented."
The story came to mind because Robert J. Gordon has a working paper out called "Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds." (It's National Bureau of Economic Research working paper #18315. NBER working papers aren't freely available on-line, but many academics will have access through their library systems.) Like all of Gordon's work, the main theses are genuinely thoughtful and provocative. But my eye was drawn to a paragraph about classic predictions of technological pessimism. Gordon writes:
"There are four classic examples in the past of innovation pessimism that turned out to be wildly wrong. In 1876, an internal memo at Western Union, the telegraph monopolists, said, 'The telephone has too many shortcomings to be considered as a serious means of communication.' In 1927, a year before The Jazz Singer, the head of Warner Brothers said, 'Who the hell wants to hear actors talk?' In 1943, Thomas Watson, then president of IBM, said, 'I think there is a world market for maybe five computers.' And in 1981, in the most famous of these ill-fated quotes, Bill Gates himself said in defense of the capacity of the first floppy disks, '640 kilobytes ought to be enough for anyone.'"
The paragraph has no footnotes or citations, and in fact, technology writer Kevin Maney wrote an article in USA Today back in 2005 that debunked all four of these quotations: some didn't happen, some are taken at least partially out of context.
For example, that memo from Western Union about how the telephone would never work? The quotation was real enough, but it occurred when Bell was trying to sell his telephone patents to Western Union. Instead of buying the patents, Western Union tried to start its own telephone company, and its handset actually worked fairly well. But then Bell sued for patent infringement, and drove Western Union out of the market. In short, Western Union believed in the telephone and tried to become a phone company, and statements to the contrary were positioning for a patent battle.
And that statement about "who the hell wants to hear actors talk?" Turns out that Warner Bros. was investing heavily in music, thinking that musical scores would be more important than dialog. And indeed, "The Jazz Singer" was about to be a huge hit. Even today, one can argue that for a lot of movie hits, the background of sound is more important than any specific dialog spoken by the actors.
Watson's statement about how the world needs only five computers? There is no primary source or secondary source from that time frame which reports any such comment. And if you think about it, how plausible is it that the head of a computer company would announce that his world market was a total of five sales?
Bill Gates on how 640 kilobytes ought to be enough for anyone? Gates has always denied saying this, and there's no independent source which says he did. And again, how plausible is it that the visionary head of a company selling operating systems for computers would state that people have near-term limits on how much computer power or memory they would need?
In the journalism business, an ultra-convenient quotation is sometimes referred to as "too good to check." I remember a Peggy Noonan column from a few years back in the Wall Street Journal that explained the concept by telling a classic Margaret Thatcher story. Noonan wrote:
"The story as I was told it is that in the early years of her prime ministership, Margaret Thatcher held a meeting with her aides and staff, all of whom were dominated by her, even awed. When it was over she invited her cabinet chiefs to join her at dinner in a nearby restaurant. They went, arrayed themselves around the table, jockeyed for her attention. A young waiter came and asked if they'd like to hear the specials. Mrs. Thatcher said, 'I will have beef.'
Yes, said the waiter. 'And the vegetables?'
'They will have beef too.'
Too good to check, as they say. It is certainly apocryphal, but I don't want it to be."
Of course, not all statements of technological pessimism are wrong. The great scientist Lord Kelvin did give a speech in 1895 declaring that "heavier-than-air flying machines are impossible."
I've always been struck that John Stuart Mill, one of the great economists of his time, wrote in his Principles of Political Economy in 1848 that the richer countries of the world already have plenty of material wealth, and there is little reason to desire or expect much more. Instead, the focus should be on a better distribution of income and on reducing the amount of work that is necessary. Here's Mill from Book IV, Chapter VI; note in particular his claim that increased production remains an important object only in the backward countries of the world:
“[T]he best state for human nature is that in which, while no one is poor, no one desires to be richer, nor has any reason to fear being thrust back by the efforts of others to push themselves forward. … ”
“I know not why it should be a matter of congratulation that persons who are already richer than any one needs to be, should have doubled their means of consuming things which give little or no pleasure except as representative of wealth; or that numbers of individuals should pass over, every year, from the middle classes into a richer class, or from the class of occupied rich to that of the unoccupied. It is only in the backward countries of the world that increased production is still an important object; in those most advanced, what is economically needed is a better distribution … “
“It is scarcely necessary to remark that a stationary condition of capital and population implies no stationary state of human improvement. There would be as much scope as ever for all kinds of mental culture, and moral and social progress; as much room for improving the Art of Living, and much more likelihood of its being improved, when minds ceased to be engrossed by the art of getting on. Even the industrial arts might be as earnestly and as successfully cultivated, with this sole difference, that instead of serving no purpose but the increase of wealth, industrial improvements would produce their legitimate effect, that of abridging labor.”
If Mill, one of the truly towering minds of his time (or any time), could fall into the error of thinking there was no particular need to increase production from the levels in 1848–when per capita GDP in the U.S. was something like 1/20 of its current level–then anyone can fall into such an error. I am no expert in predicting future growth rates, and surely there are plenty of reasons to be pessimistic about the long-term growth trajectory for the U.S. economy. But I suspect that those who are alive 150 years from now will look back at the current standard of living for an average person and view it as extraordinarily and unbelievably low–much the same way that today we look back at the standard of living in 1848.
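The back-of-the-envelope growth arithmetic behind that 1/20 figure is worth making explicit. Taking the post's rough estimate at face value (it is an approximation, not a precise historical series), and dating the comparison to 2012:

```python
# If per capita GDP in 1848 was roughly 1/20 of its 2012 level, what
# annual growth rate does that imply, and what would the same rate
# deliver over the next 150 years? (The 1/20 ratio and the 2012
# endpoint are the post's rough assumptions.)
years = 2012 - 1848
ratio = 20

annual_growth = ratio ** (1 / years) - 1          # compound annual rate
future_multiple = (1 + annual_growth) ** 150      # projected 150 years ahead

print(f"Implied annual per capita growth: {annual_growth:.2%}")
print(f"Multiple over the next 150 years at that rate: {future_multiple:.1f}x")
```

Even a seemingly modest annual rate, compounded over a century and a half, multiplies living standards many times over, which is exactly the error of extrapolation Mill fell into.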