In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”
The Congressional Budget Office has put out another of its useful reports on the distribution of the tax burden across income levels: "The Distribution of Household Income and Federal Taxes, 2008 and 2009." I suspect that many readers will see in these numbers an affirmation of their own beliefs about whether taxes on the rich should be either higher or lower. Back in 1938, Henry Simons of the University of Chicago wrote a book called Personal Income Taxation, where he commented: "The case for drastic progression in taxation must be rested on the case against inequality — on the ethical or aesthetic judgement that the prevailing distribution of wealth and income reveals a degree (and/or kind) of inequality which is distinctly evil or unlovely."
I don't expect to persuade people about the extent to which they find inequalities of income to be "evil or unlovely." But at least it would be useful if everyone was arguing from the same fact base. I'll focus my comments mostly on describing the situation of the top 1%, and let readers consider the rest of the data on their own.
For starters, here's a table showing average tax rates for different kinds of federal taxes, across income levels. Thus, the second column shows that the top 1% paid on average 21% of their income in federal income taxes. Conversely, the lowest quintile and the second quintile had negative income tax rates: that is, after taking into account refundable tax credits, their after-tax income was higher than their before-tax income. The top 1% paid 2.5% of income in social insurance taxes, a much lower rate, because Social Security taxes are only collected up to a certain income limit, which was $106,800 in 2009. The CBO estimates what income groups actually end up paying corporate taxes, and since those with high incomes own more stock, they pay a higher share of corporate taxes. The top 1% has income that is 5.2% lower because of corporate taxes. Federal excise taxes on gasoline, cigarettes, and alcohol weigh more heavily on those with lower income levels, and the top 1% pays 0.2% of its income in such taxes, compared with 1.5% for the lowest quintile.
Next, here's a table showing how the total level of federal taxes shifts the before-tax and after-tax distribution of income. The top 1%, for example, had average before-tax income of $1,219,700 in 2009. They paid 28.9% of that amount in federal taxes, so that this group had after-tax income of $866,700. The taxes paid by the top 1% were 22.3% of all taxes received by the federal government in 2009. The top 1% had 13.4% of total income on a before-tax basis, but 11.5% of total income on an after-tax basis.
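As a quick consistency check, the component rates in the first table add up to the combined 28.9% rate used here. A minimal sketch with the CBO figures:

```python
# Components of the top 1%'s average federal tax rate in 2009,
# in percent of before-tax income (figures from the CBO report)
individual_income_tax = 21.0
social_insurance_tax = 2.5
corporate_tax = 5.2
excise_tax = 0.2

total_rate = individual_income_tax + social_insurance_tax + corporate_tax + excise_tax
print(round(total_rate, 1))  # 28.9, the combined federal rate for the top 1%

# Applying that combined rate to the group's average before-tax income
before_tax_income = 1_219_700
after_tax_income = before_tax_income * (1 - total_rate / 100)
# yields roughly the $866,700 after-tax figure reported by the CBO;
# the small difference reflects rounding in the published rates.
```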
Finally, here's a graph showing how the average federal tax rate has changed over time for those at different places in the income distribution. Again, it's worth noting that those with higher income levels do, on average, pay more than those with lower income levels. The top 1% saw the share of income that they pay in federal taxes fall sharply in the early 1980s, at the time when the Reagan tax cuts reduced top marginal tax rates. After taxes on this group were raised in the first couple of years of the Clinton administration, and as the stock market took off, their average rates increased again in the mid-1990s. After the Bush tax cuts in the early 2000s, average tax rates fell for all groups. In 2009, with large numbers of unemployed not earning income, the average tax rate for the lowest quintile dropped off dramatically.
The U.S. economy has been running very large trade deficits for years now. China and Germany, among others, have been running very large trade surpluses. Should this be a matter for concern? If so, why? As a starting point, here are figures showing the current account balances in the U.S., China, and Germany, from the ever-useful FRED website at the Federal Reserve Bank of St. Louis.
Of course, no economist would want to endorse the claim that nations should always have balanced trade. There are valid reasons for a nation to run trade deficits for a time (which means importing capital from abroad) or to run trade surpluses for a time (which means investing capital abroad). Long ago, the standard textbook example was that high income countries, where capital was relatively plentiful, should run trade surpluses and invest the funds in low-income countries, where capital was relatively scarce. With the U.S. economy running huge trade deficits and China's economy running huge surpluses, that scenario is now standing on its head.
In the June 2012 issue of Finance and Development, Mohamed A. El-Erian offers a nice essay expressing the conventional wisdom about large and persistent trade imbalances.
"There will eventually come a point when deficit nations will find it difficult to continue to spend massively more than they take in. Meanwhile, surplus countries will find that their persistent surpluses undermine future growth. For both sides, the imbalances will become unsustainable, with potentially serious disruption to the global economy. … The global imbalances are best characterized as being in a “stable disequilibrium.” They can persist for a while. But if they do, the global economy will continue to travel farther afield from the equilibrium associated with high global growth, sustainable job creation, and financial soundness …
Indeed, where most academics do not differ is in their concern that persistent imbalances expose the global economy to sudden stops in investment flows, as happened in the fourth quarter of 2008. At that time funds ceased flowing to emerging markets and sought safe havens like U.S. government securities, which is what happened more recently in Europe. The extreme worries relate to currency fragmentation in Europe and worsening funding conditions for the United States. Both of these low-probability “tail events” entail catastrophic disruptions, with virtually no country in the world immune to negative spillover effects. Economists also point to mounting risks of currency wars and protectionism …"
But as one digs down into the conventional wisdom on trade imbalances, the exact reason to worry about them is not as clear as one might like. Are the patterns of very large surpluses and deficits bad for their countries in and of themselves? Or is the danger that the trade imbalances may make global financial crises more likely? Or is the problem just that globalization has created larger and tighter linkages across countries, and the trade imbalances are not a central part of the story? Maurice Obstfeld takes on these kinds of questions in his Richard Ely lecture that was recently published in the May 2012 issue of the American Economic Review. (The AER is not freely available on-line, although many readers will have access through library subscriptions.)
Obstfeld makes clear that current account imbalances are not always a cause for concern. He writes: "Before proceeding, I have to emphasize that just as the “consenting adults” framework claims, some current-account imbalances, even very large ones, can be justified in terms of economic fundamentals and do not pose threats to either the national or international economy. Such imbalances need not be a symptom of economic distortions elsewhere in the economy, but instead reflect reasonably forward-looking decisions of households and firms, based on realistic expectations of the future. Not all fall into this category, however, and the facts of the case are typically amenable to different interpretations–witness the debate over the global imbalances of the mid-2000s, notably the US deficit …" (The Summer 2008 issue of my own Journal of Economic Perspectives had a pro and con on the U.S. trade imbalance, with Richard Cooper arguing that the U.S. trade deficits were a reasonable outcome of underlying economic forces, and Martin Feldstein arguing that they were an unsustainable situation and cause for concern.)
That said, Obstfeld offers three reasons for concern over current account deficits. First, current account imbalances may in some cases be a sign of an underlying economic problem; in particular, they may represent a surge of borrowing and credit that is fueling an unsustainable asset-market bubble. However, current account imbalances don't always signal an unsustainable credit boom, and credit booms can happen without a current account deficit. Obstfeld writes:
"Numerous crises have been preceded by large current-account deficits—Chile in 1981, Finland in 1991, Mexico in 1994, Thailand in 1997, the United States in 2007, Iceland in 2008, and Greece in 2010, to name just a few. But temporal priority does not establish causality, and the empirical literature of the last two decades has not established a robust predictive ability of the current account for subsequent financial crises (especially where the richer economies are concerned). There are cases in which even large current-account deficits have not led to crises, as noted above, and furthermore, several notable financial crises were not preceded by big deficits, including some of the banking crises in industrial countries during 2007–2009 (for example, Germany and Switzerland)."
Obstfeld's second concern is that large trade deficits, when an economy is receiving an inflow of foreign capital, make an economy vulnerable to a "sudden stop" of that capital. I've made this point before in a post about ways of illustrating the financial crisis of 2008. The graph shows the inflow of foreign capital to the U.S. economy. Notice first how this inflow of foreign capital rises dramatically through the 2000s, as the U.S. trade deficit widens. But then focus on what happened during the financial crisis in late 2008 and early 2009: those inflows of financial capital not only went away, but actually turned into an outflow for a time. The inflows of foreign financial capital have since returned–but the vulnerability of the U.S. economy to a "sudden stop" of capital inflows is clear.
Obstfeld's third point is that current account imbalances may be a signal of larger financial imbalances. He writes:
"Global imbalances are financed by complex multilateral patterns of gross financial flows, flows that are typically much larger than the current-account gaps themselves. These financing patterns raise the question of whether the generally much smaller net current-account balance matters much any more, and, if so, when and how. … In the mid-1970s, gross financial flows were much smaller than trade flows, but the former have grown over time and on average now are of comparable magnitude to trade flows. …
I will also argue that while policymakers must continue to monitor global current accounts, such attention is far from sufficient to ensure global financial stability. … [L]arge gross financial flows entail potential stability risks that may be only distantly related, if related at all, to the global configuration of saving-investment discrepancies. Adequate surveillance requires not only enhanced information on the nature, size, and direction of gross global financial trades, but better understanding of how those flows fit together with economic developments (including current-account balances) in the world’s economies, both rich and poor."
In the conclusion, Obstfeld writes: "To my mind, a lesson of recent crises is that globalized financial markets present potential stability risks that we ignore at our peril. Contrary to a complete markets or “consenting adults” view of the world, current-account imbalances, while very possibly warranted by fundamentals and welcome, can also signal elevated macroeconomic and financial stresses, as was arguably the case in the mid-2000s. Historically large and persistent global imbalances deserve careful attention from policymakers, with no presumption of innocence."
Aaron Steelman has an "Interview with John B. Taylor" in the First Quarter 2012 issue of Region Focus, published by the Richmond Federal Reserve. The interview touches on a number of topics, but here, I'll focus on the "Taylor rule" and monetary policy. The questions that follow are my own phrasing: the answers are from the interview. (For the record, I've known John Taylor on a professional level for many years and have considerable respect for his work, but the fact that we share a last name is pure coincidence!)
What is the "Taylor rule" for monetary policy?
"The rule is quite simple. It says that the federal funds rate should be 1.5 times the inflation rate plus .5 times the GDP gap plus one. The reason that the response of the fed funds rate to inflation is greater than one is that you want to get the real interest rate to go up to take some of the inflation pressure out of the system. To some extent, it just has to be greater than one — we really don’t know the number precisely. One and a half is what I originally chose because I thought it was a reasonably good benchmark."
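Translated into code, the rule in the quotation is a one-line formula. This sketch simply transcribes Taylor's coefficients, with all rates measured in percent:

```python
def taylor_rule_rate(inflation, gdp_gap):
    """Federal funds rate implied by the Taylor rule (all in percent).

    inflation: current inflation rate
    gdp_gap: percent by which real GDP exceeds (or falls short of) potential
    """
    return 1.5 * inflation + 0.5 * gdp_gap + 1.0

# The classic benchmark: 2% inflation with GDP at potential implies
# a 4% federal funds rate (a 2% real rate plus 2% inflation).
print(taylor_rule_rate(2.0, 0.0))  # 4.0
```

Because the coefficient on inflation is 1.5, a one-point rise in inflation raises the recommended nominal rate by 1.5 points, so the real interest rate rises by half a point: exactly the stabilizing response Taylor describes in the interview.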
How closely has the Fed followed the "Taylor rule"?
"The biggest period where the deviations are apparent is the 1970s. … I also think there were significant deviations from the rule from 2003 to 2005, when basically there were rate cuts greater than I think any reasonable interpretation of the rule would suggest. So I think the period when the rule was followed fairly closely was roughly from the 1980s through 2003. The way I think about it is that the Fed’s actions have been largely consistent with the rule without using it explicitly. … In the late 1990s Chairman Greenspan told me that it explained about 80 percent of what they were doing during his tenure, but that doesn’t mean that he was looking at it explicitly."
What are the dangers of the nonconventional monetary policies that the Fed has used since 2008?
"During the worst of the 2008 panic, the Fed also provided funds that increased the balance sheet and if it had stuck to the exit policies that it pursued following 9/11 [when the Fed first increased and then reduced reserves], those reserves would have been reduced pretty quickly. But instead the Fed moved after the panic into interventions in the mortgage market and the medium-term Treasury market. … [I]n the early part of 2009, Don Kohn [then vice-chairman of the Federal Reserve] was on a panel with me at a conference; I argued that while the Fed can talk about these temporary interventions during the panic, I would worry that if the recovery is slow, it will continue to do these sorts of things — not because there is a liquidity problem, but just because the economy is still sluggish. Kohn said, no we won’t do that. But that, in fact, was what the Fed did.
"So now we have a situation where there are massive interventions that are not conventional monetary policy and we need to get away from that. However, I’m not sure the Fed will get away from such policies, because now people are writing papers, including academic papers, which say the Fed can and should do these things: It can have its role in terms of setting the interest rate and it also can use its balance sheet to supposedly stimulate growth. The reason it can do that, people argue, is that the Fed now pays interest on reserves and thus it can ignore the supply and demand for money or reserves when setting the interest rate. I think that is not a good approach. It is very unpredictable and it will inherently raise questions about the independence of the Fed. So I would like the Fed to go back to a world where the interest rate is determined by the supply and demand of reserves. That would prevent this extra instrument from playing such a big role."
"The other thing that happened during this episode was that the interest rate got to the zero lower bound. That generated this idea that something else had to be done, that the balance sheet had to increase a lot. That is not the implication. The implication is that when the interest rate is at the zero lower bound, you should make sure money growth doesn’t fall. Whatever aggregate you look at, you need to make sure it doesn’t decline. That is much different than massive quantitative easing."
A few years back, my family took a vacation trip to the Canadian Rockies and Banff National Park. We took the train from Minneapolis out to northern Montana: turns out that they have a "family car" that sleeps five. Then we rented a car and started exploring. After about an hour on the road, I noticed a bumper sticker on a car with Montana license plates that said "GNP," inside a circle. I kept looking, and soon spotted others.
I began wondering why GNP was on bumper stickers in Montana. Of course, I knew that the U.S. government had shifted over from emphasizing Gross National Product to emphasizing Gross Domestic Product a couple of decades ago. But was there something about the economy of the state of Montana that would make GNP a more attractive choice? I don't know much about Montana's state economic issues. It has a lot of mining, right? Is there some reason why the presence of mining companies based outside the state might mean that there is a divergence between GNP and GDP in a way that would matter to the state of Montana?
I couldn't figure out any obvious answer, and so I worried about these bumper stickers on and off for a couple of days, as we hiked around Glacier National Park. And then when picking up some maps of hiking trails in a gift shop, I realized that in Montana, GNP is Glacier National Park. So I had to buy the hat:
For previous episodes of when I or others have been unable to leave economics behind on vacation, see this post about tasting high-end olive oil and this post about hiking in Yosemite.
Some years ago I found myself giving a talk at a university in South Africa, where I discovered that I had apparently been typecast in the role of Defender of Capitalist and Colonialist Oppression. A commonly heard claim in the room was that the U.S. had a high standard of living mainly because it oppressed countries like South Africa. I found myself trying to explain the long-run roots of economic growth: growth of human capital, physical capital, and technology, operating in a market-oriented environment. I pointed out that in modern South Africa, the average adult at present had about 5-6 years of schooling. In the United States, there had been widespread primary schooling back in the mid-19th century, a "high school" movement that spread education further in the early 20th century, and then a burst of college enrollments after World War II. I pointed out that other countries during the last couple of centuries, including Japan and the East Asian "tigers" and China, all built their economic growth on a base of expanded mass education. My point was not to deny that buying commodities cheaply has benefited the U.S. and other high-income economies, but to point out that economic growth and the resulting standard of living have roots much deeper and broader than cheap commodities.
But the notion of a healthy and growing U.S. economy built on steadily rising levels of education is getting to be a few decades out of date. In the July 2012 issue of the Journal of Economic Literature, Daron Acemoglu and David Autor have a lengthy book review called, "What Does Human Capital Do? A Review of Goldin and Katz’s The Race between Education and Technology." (The JEL is not freely available on-line, but many in academia will have access through their library.) The review makes a number of useful and sometimes subtle points about the interactions between education, skill, wages, inequality, and growth. Here, I'll just focus on their basic point about educational attainment in the United States.
High school graduation rates in the U.S. rose sharply in the first part of the 20th century, but levelled out several decades ago. They write: "Figure 7, which plots high school completion rates at age 35 by birth cohort for U.S. residents born between 1930 and 1975, shows that the secular trend increase in overall high school graduation rates prevailing since at least 1890 … sharply decelerated starting with the 1948 birth cohort and then plateaued with the 1952 birth cohort. It showed no trend improvement over the subsequent three decades."
College graduation rates have risen a bit in recent decades, but the increase is completely due to gains in college graduation rates by women. Acemoglu and Autor write: "Figure 8, which similarly plots college completion rates at age 35 by birth cohort, reveals an equally discouraging inter-cohort trend in college completions. The aggregate college graduate rate peaked with the 1951 birth cohort and did not begin to rise again until the 1966 birth cohort completed college. Despite the surge in the college [wage] premium … there has not been a robust supply response among recent cohorts."
These two figures also break down the overall rate (blue line) into a rate for males (red line) and females (green line). High school graduation rates are lower for men born in the 1970s than they were for men born in the 1950s. College graduation rates for men have rebounded a bit, but still haven't returned to the level for men born around 1950.
These graphs measure the quantity of people graduating, but there is little reason to believe that the quality of graduates is improving, either. Acemoglu and Autor: "[T]he United States is also lagging behind in terms of school quality, particularly in K–12. Goldin and Katz are careful to note that AP calculus students in the United States compare favorably with the advanced mathematics students in almost any country, while the average U.S. student lags behind the average student in most OECD countries in math and science. This quality deficiency is almost as worrying as the lack of progress in the high school and college graduation margin."
When I was arguing about comparative human capital trends in front of a university audience in South Africa, little was really at stake but debating points. But for the U.S. economy as a whole, the fact that educational achievement levels have flattened out in terms of quantity and quality, so that the U.S. is now falling behind in international comparisons, poses an enormous risk both for the distribution of gains across the U.S. economy and for long-term U.S. growth prospects.
For a couple of earlier posts on how other countries are outstripping the U.S. in college attendance, see my May 23, 2012, post here and my July 19, 2011 post here. For a discussion of the causes of rising wage inequality that draws heavily on the Goldin-Katz argument, see my July 18, 2011, post here. (Full disclosure: One of the authors of the JEL article, David Autor, is the editor of the Journal of Economic Perspectives, and thus he is my boss.)
The Pew Center on the States has published "The Widening Gap Update" about the shortfalls between what state pension funds have promised and the actual funds on hand. The report also includes some useful information on the extent to which states are prefunding their health care plans for retirees. The report makes for grim reading–and even so, it's probably over-optimistic.
The headline number is $1.38 trillion: "In fiscal year 2010, the gap between states’ assets and their obligations for public sector retirement benefits was $1.38 trillion, up nearly 9 percent from fiscal year 2009. Of that figure, $757 billion was for pension promises, and $627 billion was for retiree health care."
Some states are doing better than others on funding pension benefits: Wisconsin is 100% funded; Illinois and Rhode Island are less than 50% funded. Overall, 11 states are more than 90% funded, but 32 states are less than 80% funded.
In general, states are doing much less on pre-funding retiree health benefits. Only Alaska and Arizona have funded more than half of their retiree health benefits. Overall, only 7 states have funded more than 25% of the health care benefits they have promised to retirees.
Many states have negotiated unrealistic promises, and those promises are unlikely to be kept in full. Indeed, Pew calculates that from 2009-2011, 43 states changed their pension plans to hold down future costs. "The most common actions included asking employees to contribute a larger amount toward their pension benefits; increasing the age and years of service required before retiring; limiting the annual cost-of-living (COLA) increase; and changing the formula used to calculate benefits to provide a smaller pension check."
These changes mostly affect current employees who have not yet retired, but not always. "Since 2010, 10 states—Arizona, Colorado, Florida, Maine, Minnesota, New Jersey, Oklahoma, Rhode Island, South Dakota, and Washington—have frozen, eliminated, or trimmed their annual COLA increase for current retirees."
Pew also counts 11 states that altered their policies on retiree health benefits from 2009-2011. Some examples include extending the time that a worker must be a state employee before qualifying for such benefits (Delaware), increasing employee contributions for retiree health benefits (New Jersey), and capping the subsidy for retiree health benefits (New Hampshire).
A shortfall of $1.38 trillion is obviously enormous, but it also substantially understates the size of the gap. To figure out whether states have set enough money aside for future payments, it's necessary to make some assumption about what return will be earned by the money that has been set aside. Most states assume an average return of 8% per year–which seems pretty optimistic, and lets the states get away with putting much less money aside in the present. A more realistic rate of return would make state pension plans look perhaps one-third less well-funded. Here's Pew:
"The pension ratings are based on a state’s projected investment rate of return, which for most states is 8 percent. States factor in their expected investment gains when they estimate how much they need to set aside. The Governmental Accounting Standards Board (GASB) is considering new rules that would prompt many states to use a lower rate of return to estimate their bill coming due, which would increase the liabilities states acknowledge. If these rules are adopted, as expected, retirement plan funding ratios would drop, increasing reported pension plan shortfalls. The Center for Retirement Research at Boston College analyzed a database of state and local plans and found that if the new rules had been in effect in 2010, those plans’ funding levels would have dropped from 77 percent funded to 53 percent."
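The mechanics behind that drop in funding ratios can be illustrated with a stylized plan holding assets against a single future payment. The 8 percent rate is the one most states use; the 5 percent alternative, the asset level, and the 15-year horizon below are illustrative assumptions of mine, not Pew's figures:

```python
def funded_ratio(assets, promised_payment, years, assumed_return):
    """Funded ratio: assets on hand divided by the present value
    of a single promised payment, discounted at the assumed return."""
    liability_pv = promised_payment / (1 + assumed_return) ** years
    return assets / liability_pv

# The same $40 of assets backing a $100 payment due in 15 years:
high_assumption = funded_ratio(40, 100, 15, 0.08)  # about 1.27: looks overfunded
low_assumption = funded_ratio(40, 100, 15, 0.05)   # about 0.83: a shortfall
```

Nothing about the plan changes between the two lines; only the assumed return does, which is why a rule prompting states to discount at a lower rate would mechanically lower their reported funding ratios.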
The June 2012 issue of the Region, from the Federal Reserve Bank of Minneapolis, has an insightful interview of Darrell Duffie by Douglas Clement. It's worth reading the whole thing, but here are a few of the highlights that caught my eye. Except where noted, all quotations are from Duffie.
_____________________________
Developing a theory of inattentive investors and market volatility
"In the ideal world, we’d all be sitting at our terminals watching for every possible price distortion caused by demands for immediacy. We’d all jump in like piranhas to grab that, we’d drive out those price distortions and we’d have very efficient markets. But in the real world, you know, we all have other things to do, whether it’s teaching or interviewing economists or whatever, and we’re not paying attention.
"So we do rely on providers of immediacy, and we should expect that prices are going to be inefficient in the short run and more volatile than they would be in a perfectly efficient market, but in a natural way. I have been studying markets displaying that kind of price behavior to determine in part how much inattention there is or how much search is necessary to find a suitable counterparty for your trade."
As a vivid example of how even professional investors aren't always attentive, a footnote to the interview relates one of Duffie's anecdotes on this subject: "In his American Finance Association presidential address, Duffie refers to a Wall Street Journal article (Feb. 19, 2010) that reported, “Investors took time out from trading to watch [Tiger] Woods apologize for his marital infidelity. …New York Stock Exchange volume fell to about 1 million shares, the lowest level of the day at the time in the minute Woods began a televised speech. …Trading shot to about 6 million when the speech ended.”"
_______________________________________
Reducing the need for future government guarantees on investments at money market mutual funds
In September 2008, one of the events that created panic in financial markets was when a money market fund, the Reserve Primary Fund, seemed likely to announce that it had lost money. Investors pulled $300-$400 billion out of money market funds in two weeks, until the government guaranteed the value of principal invested in these funds. What might be enacted so that money market mutual funds don't face runs in the future? Duffie explains:
"One of those proposals is to put some backing behind the money market funds so that a claim to a one-dollar share isn’t backed only by one dollar’s worth of assets; it’s backed by a dollar and a few pennies per share, or something like that. So, if those assets were to decline in value, there would still be a cushion, and there wouldn’t be such a rush to redeem shares because it would be unlikely that cushion would be depleted. That’s one way to treat this problem.
"A second way to reduce this problem is to stop using a book accounting valuation of the fund assets that allows these shares to trade at one dollar apiece even if the market value of the assets is less than that. … That’s called a variable net asset value approach, which has gotten additional support recently. Some participants in the industry who had previously said that a variable net asset value is a complete nonstarter have now said we could deal with that. …
"A third proposal, which has since come to the fore, is a redemption gate: If you have $100 million invested in a money market fund, you may take out only, say, $95 million at one go. There will be a holdback. If you have redeemed shares during a period of days before there are losses to the fund’s assets, the losses could be taken out of your holdback. That would give you some pause before trying to be the first out of the gate. In any case, it would make it harder for the money market fund to crash and fail from a liquidity run. …
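The holdback arithmetic in that third proposal is straightforward. In this sketch the 5 percent gate matches the interview's $100 million/$95 million example, but the function itself is my own illustration, not an actual SEC rule:

```python
def redemption_with_holdback(balance, holdback_rate=0.05):
    """Split a redemption request into an immediate payout and a
    holdback that can absorb fund losses realized soon afterward."""
    payout_now = balance * (1 - holdback_rate)
    holdback = balance - payout_now
    return payout_now, holdback

# Redeeming a $100 million position at one go:
now, held = redemption_with_holdback(100_000_000)
# roughly $95 million is paid out immediately, while about $5 million
# stays at risk, blunting the incentive to run at the first sign of trouble
```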
"The SEC has a serious issue about which of these, if any, to adopt. And it’s getting some push-back not only from the industry, but even from some commissioners of the SEC. They are concerned—and I agree with them—that these measures might make money market funds sufficiently unattractive to investors that those investors would stop using them and use something else. That alternative might be better or might be worse; we don’t know. It’s an experiment that some are concerned we should not run. … I feel sympathy for the SEC. It has a tough decision to make."
___________________________________
Addressing instability in the repo market with a public utility for tri-party clearing
The market for repurchase agreements ("the repo market") was one of the financial markets that seized up during the financial crisis in late 2008 and early 2009. Repo agreements are very short-term borrowing, often overnight. But as a result, those using such agreements often want to borrow during the day, as well. The financing for intraday trading is done by "tri-party clearing banks," and essentially all of the tri-party deals in the U.S. are handled by two banks: JPMorgan Chase and Bank of New York Mellon.
"JPMorgan Chase and Bank of New York Mellon handle essentially all U.S. tri-party deals. As part of this, they provide the credit to the dealer banks during the day. Toward the end of the day, a game of musical chairs would take place over which securities would be allocated as collateral to new repurchase agreements for the next day. All of those collateral allocations would get set up and then, at the end of the day, the switch would be hit and we’d have a new set of overnight repurchase agreements. The next day, the process would repeat.
This was not satisfactory, as revealed during the financial crisis when two of the large dealer banks, Bear Stearns and Lehman, were having difficulty convincing cash investors to line up and lend more money each successive day. The clearing banks became more risk averse about offering intraday credit. … [T]he amounts of these intraday loans from the clearing banks at that time exceeded $200 billion apiece for some of these dealers. Now they’re still over $100 billion apiece. That’s a lot of money. …\”
\”The tri-party clearing banks are highly connected, and we simply could not survive the failure of probably either of those two large clearing banks without an extreme dislocation in financial markets, with consequential macroeconomic losses. So if you take, for example, the Bank of New York Mellon, it really is too interconnected to fail, at the moment. And that’s not a good situation. We should try to arrange for these tri-party clearing services to be provided by a dedicated utility, a regulated monopoly, with a regulated rate of return that’s high enough to allow them to invest in the automation that I described earlier.\”
________________________________________
What financial plumbing should we be working on now, so that the chance of a future financial crisis is reduced?
“And there has been a lot of progress made, but I do feel that we’re looking at years of work to improve the plumbing, the infrastructure. And what I mean by that are institutional features of how our financial markets work that can’t be adjusted in the short run by discretionary behavior. They’re just there or they’re not. It’s a pipe that exists or it’s a pipe that’s not there. And if those pipes are too small or too fragile and therefore break, the ability of the financial system to serve its function in the macroeconomy—to provide ultimate borrowers with cash from ultimate lenders, to transfer risk through the financial system from those least equipped to bear it to those most equipped to bear it, to get capital to corporations—those basic functions which allow and promote economic growth could be harmed if that plumbing is broken.
“If not well designed, the plumbing can get broken in any kind of financial crisis if the shocks are big enough. It doesn’t matter if it’s a subprime mortgage crisis or a eurozone sovereign debt crisis. If you get a big pulse of risk that has to go through the financial system and it can’t make it through one of these pipes or valves without breaking it, then the financial system will no longer function as it’s supposed to and we’ll have recession or possibly worse.”
Some of the preventive financial plumbing that Duffie emphasizes would include (in the words of the interviewer): “broadening access to liquidity in emergencies to lender-of-last-resort facilities,” “engaging in a deep forensic analysis of prime brokerage weakness during the Lehman collapse,” “tri-party repo markets,” “wholesale lenders that might gain prominence if money market funds are reformed and therefore shrink,” “cross-jurisdictional supervision of CCPs [central clearing parties],” and “including foreign exchange derivatives in swap requirements.”
Bringing the plumbing up to code in an older house is no fun at all, and bringing the economy’s financial plumbing up to code is not much fun, either. But having the plumbing break when stressful but predictable events occur is even less fun.
It sometimes seems to me that the arguments over carbon emissions and the risk of climate change have crowded out attention to other environmental issues–including other types of air pollution. Thus, I was intrigued to see the article by Drew Shindell called “Beyond CO2: The Other Agents of Influence,” in the most recent issue of Resources magazine from Resources for the Future. Shindell focuses on the benefits of reducing soot (more formally known as “black carbon”) and methane emissions (which are a precursor to more ozone in the atmosphere), and identifies which emissions to go after. He reports the results of a study by a larger group of authors that appeared here in the January 13, 2012, issue of Science magazine. I’ll quote from both articles here.
The starting point for this group was to note: “Tropospheric ozone and black carbon (BC) are the only two agents known to cause both warming and degraded air quality.” Thus, “an international team of researchers, including experts from the Stockholm Environment Institute, the Joint Research Centre of the European Commission, the U.S. Environmental Protection Agency, and others” looked at 400 different policies for potentially reducing these emissions.
Here’s a capsule overview of the effects of soot and methane from the Resources article:
“When the dark particles of black carbon absorb sunlight, either in the air or when they accumulate on snow and ice and reduce their reflectivity, they increase radiative forcing (a pollutant’s effect on the balance of incoming and outgoing energy in the atmosphere, and the concept behind global warming), and thus cause warming. They can also be inhaled deeply into human lungs, where they cause cardiovascular disease and lung cancer.
“Methane has a more limited effect than black carbon on human health, but it can lead to premature death from the ozone it helps form. That ozone is also bad for plants, so methane also reduces crop yields. It is a potent greenhouse gas as well, with much greater potential to cause global warming per ton emitted than CO2. But its short atmospheric lifetime—less than 10 years, versus centuries or longer for CO2—means that the climate responds quickly and dramatically to reductions. CO2 emissions, in contrast, affect the climate for centuries, but plausible reductions will hardly affect global temperatures before 2040.”
The group looked at costs and benefits to whittle down the 400 measures and eventually selected 14 of them. “Of the 14 measures selected, 7 target methane emissions (from coal mining, oil and gas production, long-distance gas transmission, municipal waste and landfills, wastewater, livestock manure, and rice paddies). The other 7 controls target black carbon emissions from incomplete combustion and include both technical measures (for diesel vehicles, biomass stoves, brick kilns, and coke ovens) and regulatory measures (for agricultural waste burning, high-emitting vehicles, and domestic cooking and heating).”
The potential gains from the policies that they advocate are shown in this table from Science. The first column of the table shows gains from reducing methane, the second shows gains from the technical fixes for soot emissions, and the third shows gains from regulatory measures for reducing soot emissions. The first few rows are physical effects: in particular, you can see that the methane measures have a bigger effect on crops, but the soot measures have a much larger effect on lives saved. The remaining rows then put monetary values on the reduction in emissions.
The Science article sums up: “We identified 14 measures targeting methane and BC [black carbon] emissions that reduce projected global mean warming ~0.5°C by 2050. This strategy avoids 0.7 to 4.7 million annual premature deaths from outdoor air pollution and increases annual crop yields by 30 to 135 million metric tons due to ozone reductions in 2030 and beyond.” Indeed, a “combination of measures to control black carbon, methane, and CO2 could keep global mean warming at less than 2ºC (relative to the preindustrial era) during the next 60 years—something that reducing the emissions of any one agent cannot achieve by itself.”
The authors also find that the benefits of such policies far outweigh the costs. “Benefits of methane emissions reductions are valued at $700 to $5000 per metric ton, which is well above typical marginal abatement costs (less than $250).” For soot, “improved efficiencies lead to a net cost savings for the brick kiln and clean-burning stove BC measures. These account for ~50% of the BC measures’ impact. The regulatory measures on high-emitting vehicles and banning of agricultural waste burning, which require primarily political rather than economic investment, account for another 25%. Hence, the bulk of the BC measures could probably be implemented with costs substantially less than the benefits given the large valuation of the health impacts.”
The policy agenda for soot and methane is daunting in practical and political terms. For example, it requires measures that affect rice paddies, fossil fuel production and transmission, animal manure, brick kilns, diesel stoves, indoor cooking, and other areas. The agenda is worldwide, and those who receive the benefits will often not align well with those who are likely to end up footing the costs. The Resources article points out that an international coalition involving Canada, Sweden, Mexico, Ghana, Bangladesh, the United States, and the United Nations Environment Programme has embarked on a program to reduce black carbon and methane emissions. The Climate and Clean Air Coalition to Reduce Short-Lived Climate Pollutants is just getting underway.
Faithful reader M.R. was looking through archives and found my first post of May 17, 2011, on “Two ways of illustrating the financial crisis” and the follow-up post of August 3, 2011, on “Four More Ways of Illustrating the Financial Crisis.” He writes: “[M]y reason for this email is to differ over your 2 + 4 ways of illustrating the financial crisis. To me the illustration is a graph I have never seen — hint to one who knows the literature much better than I — a graph of the market value of residential real estate.” Here are some illustrative graphs made with the ever-useful FRED website of the Federal Reserve Bank of St. Louis, all using data from the Federal Reserve’s Flow of Funds accounts.
What is the drop in the total market value of residential real estate? Here’s the data series on “Real Estate – Assets – Balance Sheet of Households and Nonprofit Organizations.” The drop is from a shade over $25 trillion in October 2006 to $18.2 trillion in October 2011. And yes, you do see a little upward wiggle at the right-hand end of the line, a rise to $18.6 trillion in the most recent data for January 2012.
For many households, of course, what matters is not the total value of your property, but how much equity you have in the property. For example, if the value of your house declined from $320,000 to $240,000, but your outstanding mortgage is $250,000, it’s not all that comforting to notice that your home still has a positive market value overall. Here’s the graph showing “Owner’s Equity in Household Real Estate.” Equity peaked at $13.5 trillion in January 2006, and dropped to $6.2 trillion in October 2011–a drop of more than half.
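The equity arithmetic in that example is worth making explicit. A minimal sketch, using the numbers from the paragraph above:

```python
def owners_equity(market_value, mortgage_balance):
    """Owner's equity is the market value of the home minus the
    outstanding mortgage; it can be negative (an underwater mortgage)."""
    return market_value - mortgage_balance

# The example from the text: the house falls from $320,000 to $240,000
# against a $250,000 mortgage.
equity_before = owners_equity(320_000, 250_000)  # $70,000 of equity
equity_after = owners_equity(240_000, 250_000)   # $10,000 underwater
```

The house still has a positive market value of $240,000, but the owner's equity has swung from $70,000 to negative $10,000, which is why the equity series fell proportionally much more than the total-value series.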
Both of these time series are in nominal dollars. In the next two graphs, I’ve divided them by nominal GDP, which has the effect of adjusting both for inflation and for real growth of the economy over time. First, here is the total value of residential real estate divided by GDP.
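Why does dividing one nominal series by another adjust for inflation? Because both numerator and denominator are measured in current dollars, the price level roughly cancels out of the ratio. A minimal sketch, with hypothetical numbers:

```python
def ratio_to_gdp(nominal_series, nominal_gdp):
    """Divide a nominal-dollar series by nominal GDP, period by period.
    Since both are in current dollars, the price level cancels, so the
    ratio is adjusted for inflation as well as for economic growth."""
    return [value / gdp for value, gdp in zip(nominal_series, nominal_gdp)]

# Hypothetical figures, in trillions of current dollars.
housing_value = [25.0, 18.2]
nominal_gdp = [13.8, 15.0]
ratios = ratio_to_gdp(housing_value, nominal_gdp)
```

The resulting series is a pure ratio (dimensionless), which is what makes comparisons across decades meaningful.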
Just to be clear, there is no reason why the share of housing compared to GDP should be a constant over time. Rising incomes will presumably cause people to consume more housing, but the pace at which total housing consumption rises could be faster or slower than the growth of the economy over time. If you squint at the figure and twist your head a bit, you can imagine that the total value of housing as a share of GDP is rising gradually over time, but with a lot of fluctuations. Clearly, the price spike in the mid-2000s was far outside the pattern of any such long-run historical trend.
The final graph is owner’s equity in housing divided by GDP. It’s interesting to observe that housing equity relative to GDP became a more important asset from the mid-1960s up through the late 1980s. A lot of the advice one used to hear about buying a house early in life was from people living through that experience. Through much of the 1990s, the ratio of owner’s equity to GDP fell: in the early 1990s, partly as a result of depressed regional real estate markets in certain states in the aftermath of the collapse of many savings and loan institutions in the late 1980s and early 1990s (which made the numerator of the ratio decline), and from the mid-1990s on, as a result of fast economic growth (which made the denominator of the ratio rise). Again, the bottom line is that the spike in the total value of housing that ended around 2006 is well outside the post-World War II historical experience. And the drop since 2006 takes this ratio from by far its highest value since 1950 to by far its lowest value since 1950.
Of course, the financial crisis itself is more than just a drop of nearly $7 trillion in housing values. It’s also about what forces created the housing bubble in the first place, and how the bursting of the bubble translated itself into weaknesses in the banking and financial system. But the rise and fall of housing values is a central part of what occurred.
A panel of the National Research Council headed by Daniel S. Nagin and John V. Pepper has published “Deterrence and the Death Penalty.” The report can be ordered or a free PDF can be downloaded here. The report refers back to a 1978 NRC report which concluded that “available studies provide no useful evidence on the deterrent effect of capital punishment.” The latest study reaches the same conclusion.
“The committee concludes that research to date on the effect of capital punishment on homicide is not informative about whether capital punishment decreases, increases, or has no effect on homicide rates. Therefore, the committee recommends that these studies not be used to inform deliberations requiring judgments about the effect of the death penalty on homicide. Consequently, claims that research demonstrates that capital punishment decreases or increases the homicide rate by a specified amount or has no effect on the homicide rate should not influence policy judgments about capital punishment.”
Why has the research found it so hard to sort out cause and effect in this situation? The NRC report emphasizes two reasons, but discusses a number of others:
1) One problem is that deciding on the effect of capital punishment requires asking, “Compared to what?” But existing studies don’t pay enough attention to what the alternative punishment might have been. Here is the NRC report: “Properly understood, the relevant question about the deterrent effect of capital punishment is the differential or marginal deterrent effect of execution over the deterrent effect of other available or commonly used penalties, specifically, a lengthy prison sentence or one of life without the possibility of parole. One major deficiency in all the existing studies is that none specify the noncapital sanction components of the sanction regime for the punishment of homicide.”
2) Another problem is that studies of the deterrent effect of capital punishment need to make some assumptions about how potential murderers perceive that penalty. Are they aware of how often it is imposed, under what circumstances, and their actual chances of receiving the penalty? In many studies, these underlying assumptions are not spelled out clearly, or even at all.
3) There’s a classic cause-and-effect problem in studying the deterrent effects of any criminal penalty, whether fines or imprisonment or capital punishment. Say that there is one jurisdiction where lots of crimes occur and another where not many crimes occur, for whatever reason. The first jurisdiction thus imposes lots of criminal penalties, and the other jurisdiction doesn’t. In this situation, a naive statistical test will observe that high levels of crime are correlated with high levels of penalties, and low levels of crime are correlated with low levels of penalties. But it would be foolish to argue that the levels of penalties are causing the levels of crime. Instead, one needs to figure out how increases or decreases in the level of penalties would affect levels of crime, which is a much harder question, especially because changes in penalties often occur as a reaction to levels of crime.
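This endogeneity problem can be made concrete with a small simulation. In the hypothetical model below, penalties are set in response to crime, and each unit of penalty causally reduces crime; yet across jurisdictions, crime and penalties still come out strongly positively correlated. All parameter values are invented for illustration.

```python
import random

random.seed(0)

def simulate(n=1000):
    """Simulate n jurisdictions. Penalties REACT to underlying crime
    pressure, and each unit of penalty causally REDUCES crime a bit."""
    data = []
    for _ in range(n):
        propensity = random.uniform(0, 100)        # underlying crime pressure
        penalty = 0.5 * propensity                 # penalties respond to crime
        crime = propensity - 0.2 * penalty + random.gauss(0, 5)  # true deterrence
        data.append((penalty, crime))
    return data

def correlation(pairs):
    """Pearson correlation between penalty levels and crime levels."""
    n = len(pairs)
    mx = sum(p for p, _ in pairs) / n
    my = sum(c for _, c in pairs) / n
    cov = sum((p - mx) * (c - my) for p, c in pairs)
    vx = sum((p - mx) ** 2 for p, _ in pairs)
    vy = sum((c - my) ** 2 for _, c in pairs)
    return cov / (vx * vy) ** 0.5

r = correlation(simulate())  # strongly positive, despite true deterrence
```

The naive cross-section correlation is large and positive even though, by construction, raising penalties in any one jurisdiction would lower its crime. This is exactly why the cross-jurisdiction comparisons the NRC panel discusses cannot, by themselves, identify a deterrent effect.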
4) The statistical problem here is a difficult one. To help illustrate why, here’s a graph of the number of death sentences and executions in the U.S. since 1974. Back in 1976, U.S. Supreme Court decisions had made it nearly impossible for states to execute anyone, and the number of death penalty sentences was somewhat lower as well. Then the number of death penalty sentences rises, and a few years later the number of executions rises. In the last decade or so, the number of death sentences has dropped substantially, but the number of executions has dropped by less, surely because of the large number of death sentences originally given back in the 1980s and 1990s. Referring back to the earlier point, an obvious question here is the extent to which potential murderers are aware of these patterns in sentencing and executions.
Next, here’s a graph of homicide rates since 1974. Notice that it starts off at about 10 per 100,000 population, and then starting around 1990 drops off to about half that level. In other words, about a decade after capital punishment sentences rise, and at about the same time as the execution rate starts to rise, the homicide rate drops off. A naive statistical comparison between these patterns using national data might well suggest that higher levels of executions preceded a drop-off in murders.
But of course the death penalty is not equally likely across states. New York state sentenced only 10 people to death from 1973-2009, and executed none during that time. California and Texas both sentenced large numbers of people to death, but California actually executed only 13 people from 1976-2009, while Texas executed 447. However, these differences in death sentences and actual executions seem to have very little effect on the murder rate in these three states, which essentially follows the national pattern in all three cases.
Of course, researchers in this area are fully aware of these difficulties in looking at the data over time and across states, and have applied a wide array of methods to this data. Given these issues, it is perhaps no surprise that the NRC report lists studies in the last decade which find that each execution deters five other murders, or 18 other murders, as well as studies that find that capital punishment deters no murders at all, and studies that find that the conclusions one draws from the data are quite fragile, depending on small differences in the statistical tests that are run.
Of course, whether capital punishment deters is not the only issue in making policy choices about it. The NRC report is careful to point out that it is not considering the moral arguments for or against capital punishment, nor is it looking at the arguments over whether the penalty is administered in a consistent or nondiscriminatory fashion. My own moral sense would not rule out the concept of the death penalty for the most extreme and egregious cases of murder. I do worry that its application seems to vary so greatly across jurisdictions, across racial groups, and by the quality of the lawyers involved in the case. I also worry that in a world without capital punishment, those who have already committed crimes that could land them in prison for life (say, kidnapping) have an incentive to kill potential witnesses, because there is no additional penalty for doing so.
The NRC report is careful to point out several times that a lack of solid empirical support for whether capital punishment deters doesn’t prove that such a deterrent effect does not in fact exist. As the old saying among empirical researchers goes: “Absence of evidence is not evidence of absence.”
On the particular issue of the uncertainty of whether or how much capital punishment deters future murders, I struggle with a conundrum that was put to me many years ago. Take as a starting point that we aren’t sure whether capital punishment deters or not, and we must make a choice whether to execute certain murderers or not. Then consider four possibilities: 1) we execute some murderers and it does deter others, so we save innocent lives; 2) we execute some murderers and it doesn’t deter others; 3) we don’t execute any murderers and it wouldn’t have deterred anyone if we did; and 4) we don’t execute any murderers but it would have deterred some future murderers if we had done so. Those who worry about executing those who don’t really deserve such a grave punishment, or even who may be innocent, have a point. But if it’s possible that capital punishment may deter, and we genuinely aren’t sure, then we also need to take into account in our policy calculations the possibility that executing the most egregious murderers might save innocent lives.