This week before Labor Day, news about economics tends to be scarce, while academics and teachers are looking ahead to the next term. In that spirit, I’m going to pass along some thoughts about teaching this week.
In the last year or two, many campuses and classrooms have seen arguments that seemed to pit robust debate against “safe spaces.” I liked what Roxane Gay, a professor of English at Purdue University, had to say on this subject in a piece written for the New York Times last fall called “The Seduction of Safety, on Campus and Beyond” (November 13, 2015).
“On college campuses, we are having continuing debates about safe spaces. As a teacher, I think carefully about the intellectual space I want to foster in my classroom — a space where debate, dissent and even protest are encouraged. I want to challenge students and be challenged. I don’t want to shape their opinions. I want to shape how they articulate and support those opinions. I do not believe in using trigger warnings because that feels like the unnecessary segregation of students from reality, which is complex and sometimes difficult.
“Rather than use trigger warnings, I try to provide students with the context they will need to engage productively in complicated discussions. I consider my classroom a safe space in that students can come as they are, regardless of their identities or sociopolitical affiliations. They can trust that they might become uncomfortable but they won’t be persecuted or judged. They can trust that they will be challenged but they won’t be tormented.”
I like the definitiveness of Gay’s statements that students will be challenged but won’t be tormented, and the emphasis that a good class is built on trust.
Like every teacher, I suppose, I’ve had more than one talk with a student who said: “I understand it all just fine in my mind, or when you say it or I read the textbook, but when I try to write it down, I just can’t seem to say what I mean.” One semester I had heard this line often enough that I posted on my door this rejoinder from Montaigne’s essay “On the Education of Children,” written around 1579-1580.
“I hear some making excuses for not being able to express themselves, and pretending to have their heads full of many fine things, but to be unable to bring them out for lack of eloquence. That is all bluff. Do you know what I think these things are? They are shadows that come to them of some shapeless conceptions, which they cannot untangle and clear up within, and consequently cannot set forth without: they do not understand themselves yet. And just watch them stammer on the point of giving birth; you will conclude that they are laboring not for delivery, but for conception, and that they are only trying to lick into shape this unfinished matter.”
The quotation is from “The Complete Works of Montaigne: Essays, Journal, Letters,” as translated by Donald M. Frame (London: Hamish Hamilton, p. 125).
Let’s start with some characteristically realistic (or even cynical?) comments from Adam Smith in the Wealth of Nations, from his discussion “Of the Expense of Institutions for the Education of Youth” (Book V, Ch. 1, Part III, Art. II).
Smith argues that when teachers don’t have incentives to work on their teaching, they will either “neglect it altogether, or … perform it in as careless and slovenly a manner as that authority will permit.” Moreover, if all the teachers come together in a college or university, they will support each other in their disdain for teaching: “In the university of Oxford, the greater part of the public professors have, for these many years, given up altogether even the pretence of teaching.” Smith further argues that all the discipline at colleges and universities is aimed at students, not teachers, and includes this comment: “Where the masters, however, really perform their duty, there are no examples, I believe, that the greater part of the students ever neglect theirs. No discipline is ever requisite to force attendance upon lectures which are really worth the attending, as is well known wherever any such lectures are given.”
Here are the extended passages from which these snippets are drawn:
“In other universities the teacher is prohibited from receiving any honorary or fee from his pupils, and his salary constitutes the whole of the revenue which he derives from his office. His interest is, in this case, set as directly in opposition to his duty as it is possible to set it. It is the interest of every man to live as much at his ease as he can; and if his emoluments are to be precisely the same, whether he does, or does not perform some very laborious duty, it is certainly his interest, at least as interest is vulgarly understood, either to neglect it altogether, or, if he is subject to some authority which will not suffer him to do this, to perform it in as careless and slovenly a manner as that authority will permit. If he is naturally active and a lover of labour, it is his interest to employ that activity in any way, from which he can derive some advantage, rather than in the performance of his duty, from which he can derive none.
“If the authority to which he is subject resides in the body corporate, the college, or university, of which he himself is a member, and in which the greater part of the other members are, like himself, persons who either are, or ought to be teachers; they are likely to make a common cause, to be all very indulgent to one another, and every man to consent that his neighbour may neglect his duty, provided he himself is allowed to neglect his own. In the university of Oxford, the greater part of the public professors have, for these many years, given up altogether even the pretence of teaching. …
“The discipline of colleges and universities is in general contrived, not for the benefit of the students, but for the interest, or more properly speaking, for the ease of the masters. Its object is, in all cases, to maintain the authority of the master, and whether he neglects or performs his duty, to oblige the students in all cases to behave to him as if he performed it with the greatest diligence and ability. It seems to presume perfect wisdom and virtue in the one order, and the greatest weakness and folly in the other. Where the masters, however, really perform their duty, there are no examples, I believe, that the greater part of the students ever neglect theirs. No discipline is ever requisite to force attendance upon lectures which are really worth the attending, as is well known wherever any such lectures are given.”
Wealth is not income. Economists sometimes say that “wealth is a stock, income is a flow.” They mean that wealth is accumulated over time, and includes assets and debts. Income is what flows in during a given period of time, often measured over a week or a year. The most prominent data source for estimates of US family wealth is the Survey of Consumer Finances, conducted once every three years by the Federal Reserve. Data from the 2013 survey is the most recent available, and the Congressional Budget Office has released a short report made up of figures and short commentaries showing Trends in Family Wealth, 1989-2013. Here are some snapshots:
The overall wealth of US families totalled about $67 trillion in 2013. As the figure shows, this total has more-or-less doubled since about 1995. Most of the increase is attributable to a rising total for the top 10%, which means that the wealth distribution is clearly becoming more unequal over time.
Of course, no one should expect the distribution of wealth to be anything near equal. For example, those who are in their 60s, relatively near the end of their work life, really should have more in retirement accounts and home equity than, say, those in their early 20s–who may well have negative wealth due to student loans. Here’s a breakdown of patterns in average wealth per family, divided up by age. I suspect that the reason that average wealth for the age 50-64 group climbed so rapidly during the early 2000s and then plunged during the recession has to do with this group being more affected by the rise and then the fall of real estate prices and the stock market. As one would expect, the average family in the under-35 age group has little wealth.
A related factor will be education level, given that education is positively correlated both with higher incomes from year to year (and thus a higher ability to save) and also with a greater ability to plan for the future. Here’s the breakdown of wealth per family by education level.
In other words, families that have graduate degrees and are over 65 have seen a sharp rise in wealth. Families that are under 35 and have lower levels of education don’t tend to have much wealth. This CBO report is really just providing data: it doesn’t seek to analyze or explain underlying causes. But here’s a breakdown of family wealth by percentile.
A family in the 90th percentile of the wealth distribution has seen a significant rise over time, although less than a doubling. A family in the 75th percentile has had a more modest rise since the early 1990s. Family wealth levels at the 50th and 25th percentiles haven’t changed much in the last 25 years. Given that the rise at all of these percentiles is so much less than a doubling, and given that total family wealth doubled from 1995 to 2013, there must be some percentiles where wealth more than doubled. Presumably these are at the very top of the wealth distribution, perhaps the 95th or 99th percentile. At some level, this result is unsurprising. Run-ups in stock prices and housing prices tend to raise the wealth of those who own the most of these assets.
The Federal Reserve reduced what had been its main policy-related interest rate, the federal funds rate, down to a target range of 0% to 0.25% at the tail end of 2008, and kept it there until December 2015, when it raised the target range to 0.25% to 0.5%. With this interest rate essentially stuck in place for nearly eight years and still at near-zero levels, what should come next?
The status quo plan seems to call for the Fed to carry out a slow and intermittent policy of raising the federal funds rate back to more usual levels over time. But there are two diametrically opposite proposals in the wind, both of them counterintuitive in some ways. One proposal is for the Fed to push its policy interest rate into negative territory. The other “neo-Fisher” proposal is for the Fed to raise interest rates as a tool that seeks to raise inflation. Both proposals have enough uncertainties that I suspect the Fed will continue along its present path. But here’s an overview of the two alternative approaches.
In some broader sense, it is of course true that financial markets have been accustomed to the idea of negative real interest rates for quite some time. After all, back in the 1970s when inflation rose unexpectedly, lots of institutions found that their earlier loans were being repaid with inflated dollars–in effect, real interest rates turned out to be negative. Anyone investing in long-term bonds when the nominal interest rate is positive, but very low, needs to face the very real chance that if inflation rises, the real interest rate will turn out to be negative. If you take checking account fees into account, lots of people have had negative real interest rates on their checking accounts for a long time.
Benoît Cœuré, a member of the European Central Bank’s Executive Board, offers a figure on government bond yields in different countries (which is not the same as the policy interest rate set by a central bank, but rather what investors receive when they buy a bond), at different maturities. Government bond yields are negative for bonds with shorter-term maturities in a number of countries, although not in the US or the UK.
As I see it, the events that led to negative policy interest rates can be expressed using the basic relationship named after the great long-ago economist Irving Fisher (1867-1947). Fisher pointed out that:
nominal interest rate = real interest rate + rate of inflation.
Thus, back in the mid-1990s, it was common for US government borrowing to pay a nominal interest rate of about 6% on a 10-year government bond, which one could think of as about 3-4% as a real interest rate on a safe asset, and 2-3% as the result of inflation. In the last 15 years or so, real interest rates have been driven ever-lower by global macroeconomic forces involving demographics, rates of saving and investment, rising inequality, an increased desire for ultra-safe assets, and other forces. (For a discussion, see this post on “Will the Causes of Falling Interest Rates Unwind?” from February 25, 2016). Inflation rates have of course been very low in the last few years as well, especially since the Great Recession and the sluggish recovery that followed.
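The decomposition above is easy to check with round numbers. Here is a minimal sketch of the arithmetic; the rates are illustrative approximations from the discussion, not official data:

```python
# A numerical check of the Fisher relation:
#   nominal interest rate = real interest rate + rate of inflation
# The rates below are round illustrative numbers, not official data.

def nominal_rate(real_rate, inflation):
    """Fisher relation: the nominal rate as the sum of its two parts."""
    return real_rate + inflation

# Mid-1990s-style 10-year Treasury: ~3.5% real + ~2.5% inflation
print(f"mid-1990s nominal rate: {nominal_rate(0.035, 0.025):.1%}")

# Recent-style: very low real rate plus very low inflation
print(f"recent nominal rate: {nominal_rate(0.005, 0.010):.1%}")
```

With both components low, the sum is low too, which is why central banks bump into the zero bound so quickly.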
When real interest rates and inflation are both very low, nominal interest rates will also be low. Thus, when a central bank wants to cut interest rates, it will very quickly reach an interest rate of 0%–and need to start thinking about the possibility of negative rates.
But although negative interest rates are now real, and not hypothetical, just how much they can accomplish remains unclear. After all, it seems unlikely that the reasons for a lack of lending and borrowing–whatever they are–will be profoundly affected by having the central bank policy interest rate shift from barely above 0% to barely below 0%. Thus, the question becomes whether to push the negative interest rates lower still. Cœuré discusses possible tradeoffs here.
For example, one issue is whether economic actors start holding large amounts of cash, which at least pays a 0% interest rate, to avoid the negative rates. At least at the current levels of negative interest rates, this hasn’t happened. As noted already, actual real interest rates have often been negative in the past. Although Cœuré doesn’t discuss this point in any detail, there’s some survey evidence that people (and firms) may react to negative interest rates by saving more and having a bigger cash cushion, which would tend to work against such low rates stimulating borrowing and spending.
Instead, Cœuré focuses on the issue of whether there is what he calls an “economic lower bound,” where the potentially positive effects of negative interest rates in encouraging banks to lend are offset by potentially negative effects. He looks at evidence on the sources of profits for euro-area banks, and finds: “In recent years, the distribution of these sources has been fairly stable, with approximately 60% of income coming from net interest income, 25% from fees and commissions and 15% from other income sources.”
“Net interest income” basically means that a bank lends money out at an interest rate of, say, 3%, but then pays depositors an interest rate of, say, 1%–and thus earns revenue from the gap between the two. But as interest rates fall in general and in some cases go negative, the interest rate that banks can charge borrowers is dropping, but banks don’t feel that they can pay negative interest rates to their depositors, so those interest rates–already near-zero–can’t fall. Thus, the main source of bank revenues, net interest income, is squeezed. On the other side, the lower and negative interest rates also benefit banks in some ways, both on their balance sheets, and also because if the economy is stimulated, banks will face fewer bad loans. But as these processes unfold, the banking sector can be shaken up. Cœuré writes:
“Indeed, analysts forecast a decline in bank profitability in 2016 and 2017, mainly due to lower net interest income. And the recent decline in euro area bank share prices can be at least partially ascribed to market concerns over future banks’ profitability. … As such, if very low or negative rates are here for a prolonged period of time due to the structural drivers highlighted above, banks might have to rethink their business models. The revenue structure of euro area banks was stable for a long time but it has recently begun to change and there is at least some evidence of banks tending to offer fee-based products to clients as substitutes for interest-based products.”
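The net-interest squeeze can be sketched with a toy balance sheet. All the numbers here are hypothetical, chosen only to show the mechanism, not drawn from euro-area data:

```python
# A toy balance-sheet sketch (hypothetical numbers, not euro-area data)
# of the net-interest-income squeeze: lending rates fall with market
# rates, but deposit rates are effectively floored near zero.

def net_interest_income(loans, lending_rate, deposits, deposit_rate):
    """Interest earned on loans minus interest paid to depositors."""
    return loans * lending_rate - deposits * deposit_rate

# Normal times: lend at 3%, pay depositors 1%
normal = net_interest_income(100, 0.03, 100, 0.01)

# Low-rate times: the lending rate falls to 1%, but the deposit rate
# is stuck at 0% because banks won't charge depositors negative rates
squeezed = net_interest_income(100, 0.01, 100, 0.00)

print(f"normal:   {normal:.2f}")
print(f"squeezed: {squeezed:.2f}")
```

The gap between the two rates, not the level of rates, is what generates this revenue, and the zero floor on deposits compresses the gap as market rates fall.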
The overall irony here is that there has been enormous concern in recent years over the risks of an unhealthy banking industry, and a common recommendation has been to require banks to build up their capital, so that they won’t be as vulnerable to economic downturns and won’t need bailouts. But when a central bank uses a negative policy interest rate, it is saying that banks will not be receiving interest on a certain subset of their funds, but instead will be paying out money on a certain subset of their funds. When higher-ups at the European Central Bank start talking about how all the euro-area banks “might have to rethink their business models,” that’s not a small statement. For some more technical research on how to think about when harms to the banking sector might outweigh other benefits, a useful starting point is a working paper by Markus K. Brunnermeier and Yann Koby, “The Reversal Rate: Effective Lower Bound on Monetary Policy,” presented at a Bank for International Settlements meeting on March 14, 2016.
There are other uncertainties about very low and negative interest rates. If a government decides that it wants to borrow heavily for a fiscal stimulus, does this at some point become difficult to do if the government is offering very low or negative interest rates? Certain policy steps might make sense for a smaller economy like Switzerland, where the central bank is in part using the policy interest rate to affect its exchange rate, but might not make sense for the mammoth US economy. The Fed has no desire to risk setting off a pattern of central banks around the world competing to offer ever-more-negative interest rates. Taking all this together, I expect that while negative policy interest rates will continue in Europe for some time, concerns about financial sector stability and other issues mean that they are unlikely to be adopted by the US Federal Reserve. Fed vice-chair Stanley Fischer made a quick and dismissive reference to negative interest rates in a talk on the broader economy last week, saying that “negative interest rates” were “something that the Fed has no plans to introduce.” Of course, “has no plans” doesn’t mean “will never do it,” but the current policy of the Fed calls for a slow rise in its policy interest rate, not a decline.
The neo-Fisher argument starts from the same Fisher relationship:

nominal interest rate = real interest rate + rate of inflation.
This relationship isn’t mysterious. When anyone lends or borrows, they pay attention to the expected rate of inflation. Lots of people took out home mortgages at 12% and more back in the early 1980s, because inflation had recently been 10% and higher, so they expected to be able to pay off the mortgage in inflated dollars. In modern times, of course, it would seem crazy to borrow money at a 12% interest rate, given that inflation is down in the 1-2% range.
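The mortgage example works out as follows under the Fisher relation; this is a minimal sketch using the round numbers from the paragraph above, not actual loan data:

```python
# The real interest rate implied by the Fisher relation for the
# 1980s mortgage example: real rate = nominal rate - inflation.
# Rates are the round illustrative numbers from the text.

def real_rate(nominal, inflation):
    return nominal - inflation

# Early-1980s-style: a 12% mortgage with ~10% inflation
print(f"1980s real rate on a 12% mortgage: {real_rate(0.12, 0.10):.1%}")

# The same 12% mortgage with ~1.5% inflation would be punishing
print(f"modern real rate on a 12% mortgage: {real_rate(0.12, 0.015):.1%}")
```

The nominal rate is the same in both cases; what made the 1980s mortgage tolerable was the high inflation rate subtracted from it.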
When the Federal Reserve raises the nominal interest rate, what does the Fisher relationship suggest could happen? One theoretical possibility is that the rate of inflation changes to match the change in the nominal interest rate. This theoretical possibility, called the “neutrality” of money, suggests that the Fed can raise or lower nominal interest rates all it wants, but there won’t be any actual effect on real interest rates. However, the evidence suggests that monetary policy isn’t completely neutral. Instead, if the Fed changes interest rates, then the real rate of interest also changes for a time, which is why monetary policy can affect the real economy. But if the real interest rate is ultimately determined as a market price in the global economy, then any change due to the central bank will be short-run, and the real interest rate will eventually return to its equilibrium level.
Based on this idea, here’s a figure from Williamson to describe the theory of neo-Fisherism. On the far left, the red line shows the real interest rate, the green line shows the rate of inflation, and if you add these together you get the blue line that shows the nominal interest rate. At time T, the Fed raises the nominal interest rate. Because of the nonneutrality of money, the real interest rate r shown by the red line rises, too. But over time, the real interest rate is then pushed by underlying market forces back to its original equilibrium level. However, if the Fisher relationship must hold true, then the drop in the real interest rate must be matched by a rise in the inflation rate shown by the green line. (Those who want a more detailed mathematical/theoretical presentation of these arguments might begin with John Cochrane’s Hoover Institution working paper from February 2016, “Do Higher Interest Rates Raise or Lower Inflation?”)
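The dynamics in Williamson's figure can be mimicked with a toy simulation. The numbers and the decay rule below are my own stylized assumptions, not Williamson's model; they simply enforce the Fisher identity period by period:

```python
# A stylized simulation of the neo-Fisher figure: at time T the nominal
# rate steps up, the real rate jumps too (non-neutrality of money) but
# then decays back to equilibrium, so the Fisher identity
# (inflation = nominal - real) forces inflation to drift upward.
# All numbers are toy assumptions, not estimates from any model.

T, periods = 5, 20
r_eq = 0.01                                  # equilibrium real interest rate
nominal_before, nominal_after = 0.02, 0.04   # permanent hike at time T

real, inflation, nominal = [], [], []
r = r_eq
for t in range(periods):
    n = nominal_before if t < T else nominal_after
    if t == T:
        r = r_eq + 0.015                     # short-run jump in the real rate
    elif t > T:
        r = r_eq + (r - r_eq) * 0.5          # decay back toward equilibrium
    nominal.append(n)
    real.append(r)
    inflation.append(n - r)                  # Fisher identity

print(f"inflation before the hike: {inflation[0]:.3f}")
print(f"inflation just after:      {inflation[T]:.3f}")
print(f"inflation in the long run: {inflation[-1]:.3f}")
```

In this toy version, inflation ends up higher by the full size of the rate hike once the real rate has returned to equilibrium, which is exactly the neo-Fisher claim.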
Neo-Fisherism offers a possible explanation of how some central banks around the world ended up in their current situation. The Fed cut interest rates because it sought to stimulate the economy and avoid deflation. But by cutting interest rates, according to neo-Fisherism, the Fed instead brought on lower inflation. The Bank of Japan has had rock-bottom interest rates for years, and negative interest rates recently, while inflation has stayed rock-bottom and even occasional deflation has occurred. Rock-bottom and even negative interest rates don’t seem to have caused inflation in Europe, either. Williamson refers to this outcome as the “low-inflation policy trap.”
However, neo-Fisherism also raises some obvious questions: Why are we trying to achieve a goal of higher inflation? Doesn\’t that jolt of rising real interest rates in the short-term risk causing a recession?
On the issue of inflation, deflation has long been seen as a legitimate cause for worry. After all, deflation means that everyone who has borrowed money at a fixed interest rate in the past would now be facing a higher real interest rate. On the other side, a few percentage points of inflation would mean that everyone who borrowed at fixed interest rates in the past would be able to repay in inflated dollars, thus reducing the burden of their debts. In addition, very low inflation combined with what have now become very low real interest rates means that the Fed can’t react to a recession by cutting nominal interest rates–at least not without entering the swamp of negative interest rates. Williamson describes the arguments this way:
“There are no good reasons to think that, for example, 0 percent inflation is worse than 2 percent inflation, as long as inflation remains predictable. But “permazero” damages the hard-won credibility of central banks if they claim to be able to produce 2 percent inflation consistently, yet fail to do so. As well, a central bank stuck in a low-inflation policy trap with a zero nominal interest rate has no tools to use, other than unconventional ones, if a recession unfolds. In such circumstances, a central bank that is concerned with stabilization—in the case of the Fed, concerned with fulfilling its “maximum employment” mandate—cannot cut interest rates. And we know that a central bank stuck in a low-inflation trap and wedded to conventional wisdom resorts to unconventional monetary policies, which are potentially ineffective and still poorly understood.”
What about the risk that a spike in real interest rates could bring financial disruptions and even a recession? Williamson’s essay doesn’t discuss the possibility. Perhaps if the Fed described what it was going to do, and why it was going to do it, the spike in real interest rates might be smaller. But the risks of neo-Fisherism seem like a real danger to me. Among other things, neo-Fisherism would be a very dramatic change in Fed policy, and for that reason alone it has the potential to be highly destabilizing, at least in the short run.
So if the Federal Reserve is trapped between not wanting to go with negative interest rates on one hand, and not believing in neo-Fisherism as a justification for a dramatic rise in interest rates on the other hand, what does it do? For some context, consider the unemployment rate (blue line) and inflation rate (red line) in the last few years. Unemployment has been drifting lower. The inflation rate was up around 2% in early 2014, but then dropped down to near-zero, in part as a result of the sharp fall in oil prices during that time. After that fall in oil prices was completed, inflation went back up to about 1% by late 2015. Not coincidentally, the Fed also raised its target for the federal funds interest rate in December 2015. The Fed is taking a wait-and-see stance, but my guess is that if inflation goes up to the range of about 2%, the Fed will view this as an opportunity to raise its nominal interest rate target a little higher. That is, instead of the neo-Fisherite view in which raising the nominal interest rate brings on higher inflation, the Fed is letting inflation go first, and raising interest rates afterward.
Essentially, the Fed is trying to creep back to a range in which nominal interest rates are closer to the historical range. But if real interest rates remain low, because of various evolutions in the global economy, then nominal interest rates can only be higher if the inflation rate rises by a few percentage points.
There are at least two big problems inherent in the way that the world has become accustomed to dealing with natural disasters in low- and middle-income countries, including weather-related, disease-related, and geologically-related varieties. One problem is that disaster aid from donor nations often arrives too little and too late. The other problem is that it often becomes apparent, in considering the aftermath of a disaster, that relatively inexpensive precautionary steps could have substantially reduced the effects of the disaster. “Catastrophe bonds” (to be explained in a moment, if you haven’t heard of them) offer a possible method to ameliorate both problems. Theodore Talbot and Owen Barder provide an overview in “Payouts for Perils: Why Disaster Aid Is Broken, and How Catastrophe Insurance Can Help to Fix It” (July 2016, Center for Global Development Policy Paper 087).
Here\’s a quick overview of the costs of natural disasters in recent decades. The green bars show estimates of deaths from natural disasters from the Centre for Research on the Epidemiology of Disasters (CRED), while the red bars show estimated deaths from the Munich Reinsurance Company. In especially bad years, deaths from natural disasters can reach into the hundreds of thousands.
Here are estimates from the same two sources for estimated damages from natural disasters. In especially bad years, the losses run into the hundreds of billions of dollars. The additional blue bars seek to estimate losses due to reduced human capital. You’ll notice that the patterns of deaths and damages from natural disasters aren’t perfectly correlated. One of the patterns in this data is that financial damages from natural disasters are typically higher in high-income countries, while deaths from natural disasters are often higher in low- and middle-income countries (for more on this point, see “Natural Disasters: Insurance Costs vs. Deaths,” April 16, 2015).
Disaster aid is falling short of addressing these costs along a number of dimensions. At the most basic level, it isn\’t coming close to covering the losses. The red area shows the costs of disasters. The blue area shows the size of disaster aid.
But insufficient funds are only part of the problem, as Talbot and Barder spell out in some detail. Aid is often too slow. Here is a sampling of their comments:
“As Bailey (2012) has set out in a detailed study of the 2011 famine in Somalia, slow donor responses meant that what might have been a situation of deprivation descended into mass starvation. As he points out, this happened even though early-warning systems repeatedly notified the global public sector about the emergency … Mexican municipalities that receive payouts from Fonden, a natural disaster insurance programme, grow 2-4% faster than those that also experience a hazard but did not benefit from insurance cover, ultimately generating benefit to cost ratios in the range of 1.52 to 2.89.”
“For example, food aid is often the default mechanism donors use to address food shortages, even though it would often be cheaper, faster, and much more effective to provide cash to governments or directly to households, enabling markets to react …”
“Yet examining data on aid flows from 1973 to 2010 reported to the OECD by donors indicates that less than half a cent of the average dollar – just 0.43% – of disaster-related aid has been labelled as earmarked for reducing the costs of future hazards (“prevention and preparedness”, which we refer to elsewhere using the standard label “disaster risk reduction”). Put differently, the vast majority of our funding is devoted to delivering assistance when hazards have struck, not reducing the losses from hazards or preventing them from evolving into disasters. … For humanitarian response, a study funded by DFID, the UK aid agency, evaluated $5.6 million-worth of preparedness investments in three countries – such as building an airstrip in Chad for $680,000 to save $5.2 million by not having to charter helicopters in the rainy season – and concluded that the overall portfolio of investments had an ROI of 2.1, with time savings in faster responses ranging from 2 to 50 days.”
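As a quick check of the preparedness arithmetic in the quoted study: the dollar figures are theirs, but the division below is mine, so treat it as a back-of-the-envelope illustration rather than the study's own calculation.

```python
# A back-of-the-envelope check of the benefit-cost arithmetic for
# preparedness investments. Dollar figures are those quoted from the
# DFID-funded study; the ratio calculation itself is mine.

def benefit_cost_ratio(benefit, cost):
    return benefit / cost

# Chad airstrip: $680,000 spent to avoid $5.2 million in helicopter charters
airstrip = benefit_cost_ratio(5_200_000, 680_000)
print(f"airstrip benefit-cost ratio: {airstrip:.1f}")
```

The airstrip alone returns several dollars per dollar spent, which is consistent with the study's overall portfolio ROI of 2.1 being an average across stronger and weaker investments.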
Instead of cobbling together disaster assistance on the fly each time a disaster happens, can global insurance markets be brought into play for low- and middle-income countries? After all, the global insurance industry covered catastrophe costs of $105 billion in 2012, mainly because of flooding in Thailand. But private insurance markets, even given their access to the reinsurance markets that currently end up covering 55%-65% of the costs of what private insurance pays in large natural disasters, don’t seem to be large enough to handle the costs of natural disasters.
This line of thought leads Talbot and Barder to a discussion of catastrophe bonds, which they describe like this:
“[T]he principle is simple: rather than transferring risk to a re-insurer, an insurance firm creates a single company (a “special purpose vehicle”, or SPV) whose sole purpose is to hold this risk. The SPV sells bonds to investors. The investors lose the face value of those bonds if the hazard specified in the bond contracts hits, but earn a stream of payments (the insurance premiums) until it does, or the bond’s term expires. This gives any actor – insurer, re-insurer, or sovereign risk pool like schemes in the Pacific, Caribbean and Sub-Saharan Africa, which we discuss below – a way to transfer risks from their balance sheets to investors.
“Bermuda has been the centre of the index-linked securities market because it has laws that enable insurance firms to easily create independent “cells” housing the SPVs that underlie index-linked securities transactions (In 2014, 60% of outstanding index-linked contracts globally were domiciled there.) The combination of low yields in traditional assets like stocks and bonds (due to historically low interest rates) and the insurance features of index-linked securities have contributed to fast growth in the instrument. According to deal tracking of the catastrophe bond and index-linked security markets, demand is healthy, and global issuance has grown quickly. … London is another leading centre. … The UK is considering developing enabling legislation to boost the number of underlying holding companies or SPVs that are domiciled there, taking advantage of a capacious insurance and reinsurance sector …”
Here's a graph that shows the amount of catastrophe bonds issued in the last two decades.
Again, those who buy a catastrophe bond hand over money, and receive an interest rate in return. If a catastrophe occurs, then the money is released. Investors like the idea because the interest rates on catastrophe bonds (which work a lot like insurance premiums) are often higher than what's currently available in the market, and also because the timing of natural disasters is not much correlated with other economic risks (which makes cat bonds a useful choice in building a diversified portfolio). Countries like having definite access to a pool of financial capital. Those who would be donating to disaster relief can instead help by subsidizing the purchase of these bonds.
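To make the cash-flow mechanics concrete, here is a minimal sketch in Python. The function name and the illustrative numbers are my own, not from Talbot and Barder; real cat bond contracts have many more features (collateral accounts, partial triggers, variable coupons).

```python
def cat_bond_cashflows(principal, coupon_rate, term_years, catastrophe_year=None):
    """Investor cash flows for a simplified catastrophe bond.

    The investor receives a coupon (funded by insurance premiums) each
    year until the bond matures or the specified hazard strikes. If the
    trigger event occurs, the principal is released to cover disaster
    losses and the investor's remaining payments stop.
    """
    flows = []
    for year in range(1, term_years + 1):
        if catastrophe_year is not None and year >= catastrophe_year:
            break  # principal is paid out to the insured; coupons end
        flows.append(principal * coupon_rate)
    # Principal is returned only if no catastrophe occurred during the term
    if catastrophe_year is None or catastrophe_year > term_years:
        flows.append(principal)
    return flows
```

For example, a hypothetical 3-year bond with $100 of principal and an 8% coupon pays three coupons plus the principal if no disaster strikes, but only the first coupon if the trigger hits in year 2.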
There are obvious practical questions with catastrophe bonds, which are being slowly worked out over time. One issue is how to define in advance what counts as a catastrophe where money will be released. Talbot and Barder explain the choices this way:
Tying contracts to external, observable phenomena such as Richter-scale readings for the extent of earthquakes or median surface temperature for droughts means that risk transfer can be specifically tailored to the situation. There are three varieties of triggers: parametric, modelled-loss, and indemnity. Parametric triggers are the easiest to calculate based on natural science data: satellite data reporting a hurricane’s wind speed is transparent, publicly available, and cannot be affected by the actions of the insured or the insurer. When a variable exceeds an agreed threshold, the contract’s payout clauses are invoked. Because neither the insured nor the insurer can affect the parameter, there is no cost of moral hazard, since the risks (the probabilities of bad events happening) cannot be changed. Modelled losses provide estimates of damage based on economic models. Indemnity coverage is based on insurance claims and loss adjustment, and is the most expensive to operate and takes the most time to pay out (or not).
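The appeal of a parametric trigger is that the payout rule can be written as a simple, publicly checkable function of the observed data. A minimal sketch, with hypothetical threshold and coverage figures of my own:

```python
def parametric_payout(observed_wind_speed_kph, trigger_kph, coverage):
    """Parametric trigger: the payout depends only on a publicly observable
    measurement (e.g. satellite-reported wind speed), not on claims
    adjustment, so neither the insured nor the insurer can influence
    whether it fires."""
    return coverage if observed_wind_speed_kph >= trigger_kph else 0
```

Because the rule is mechanical, payment can be nearly immediate once the measurement is published; the trade-off is "basis risk", where actual losses may not match what the parameter implies.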
Several organizations are now operating to provide insurance against catastrophes in different ways. There's the Pacific Catastrophe Risk Assessment and Financing Initiative, which covers Vanuatu, Tonga, the Marshall Islands, the Cook Islands, and Samoa, and where what the countries pay is subsidized by the World Bank and Japan. There's the Caribbean Catastrophe Risk Insurance Facility, covering 16 countries in the Caribbean. There's the African Risk Capacity, which is just starting out and has so far provided natural disaster coverage to only a handful of countries, including Niger, Senegal, Mauritania and Kenya.
These organizations are still a work in progress. As a general statement, it seems fair to say that these organizations have been more focused on how to assure quick payments, rather than on linking the amounts paid to taking preventive measures that can ameliorate the effects of future disasters. As an example of a success story, after Haiti's earthquake in 2010, apparently $8 million in disaster relief was available from the common insurance pool just hours after the quake struck. It seems theoretically plausible that countries should be able to pay lower returns on their catastrophe bonds if they have taken certain steps to limit the costs of disasters, but negotiating the specifics is obviously tricky. There are also questions of how to spell out the "trigger" event for a catastrophe event involving pandemics, where a physical trigger like wind-speed or size of earthquake won't work.
Catastrophe bonds have their practical problems and limits. But they can play a useful role in planning ahead for natural disasters, which has a lot of advantages over reacting after they occur. For those interested in the economics of natural disasters, here are a couple of earlier posts on the subject:
When the US monthly unemployment rate topped out at 10% back in October 2009, it was obvious that the labor market had a lot of "slack"--an economic term for underused resources. But the unemployment rate has been 5.5% or below since February 2015, and 5.0% or below since October 2015. At this point, how much labor market slack remains? The Congressional Budget Office offers some insights in its report, An Update to the Budget and Economic Outlook: 2016 to 2026 (August 23, 2016).
I'll offer a look at four measures of labor market slack mentioned by CBO: the "employment shortfall," hourly labor compensation, rates at which workers are being hired or are quitting jobs, and hours worked per week. The bottom line is that a little slack remains in the US labor market, but not much.
From the CBO report: "The employment shortfall, CBO’s primary measure of slack in the labor market, is the difference between actual employment and the agency’s estimate of potential (maximum sustainable) employment. Potential employment is what would exist if the unemployment rate equaled its natural rate—that is, the rate that arises from all sources except fluctuations in aggregate demand for goods and services—and if the labor force participation rate equaled its potential rate. Consequently, the employment shortfall has two components: an unemployment component and a participation component. The unemployment component is the difference between the number of jobless people seeking work at the current rate of unemployment and the number who would be jobless at the natural rate of unemployment. The participation component is the difference between the number of people in the current labor force and the number who would be in the labor force at the potential labor force participation rate. CBO estimates that the employment shortfall was about 1.4 million people in the second quarter of 2016; nearly the entire shortfall (about 1.3 million people) stemmed from a depressed labor force participation rate."
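The decomposition is simple addition, but it is worth making explicit. A sketch using the CBO's own figures (total shortfall of about 1.4 million in 2016 Q2, of which about 1.3 million came from depressed participation, leaving roughly 0.1 million from unemployment):

```python
def employment_shortfall(unemployment_component, participation_component):
    """CBO-style employment shortfall, in millions of people:
    (jobless above the natural rate) + (people missing from the labor
    force relative to its potential participation rate)."""
    return unemployment_component + participation_component

# Roughly 0.1 million from unemployment plus 1.3 million from
# participation gives the CBO's total of about 1.4 million.
total = employment_shortfall(0.1, 1.3)
```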
Here's a figure from the CBO measuring the employment shortfall in millions of workers. During the recession, the blue lines showing unemployment made up most of the employment shortfall. Now, the shortfall is almost entirely workers who would be expected to be working but are "out of the labor force": they are not counted as unemployed because they have stopped looking for work.
To get a better sense of what's behind this figure, it's useful to see the overall patterns of the labor force participation rate (blue line in graph below) and the employment/population ratio (red line). The difference between the two is that the "labor force" as a concept includes both the employed and unemployed. Thus, you can see that the employment/population ratio veers away from the labor force participation rate during periods of recession, and then the gap declines when the economy recovers and employment starts growing again. Looking at the blue line in the figure, notice that the labor force participation rate peaked around 2000, and has been declining since then. As I've discussed here before, some of the reasons behind this pattern are that women were entering the (paid) workforce in substantial numbers from the 1970s through the 1990s, but that trend topped out around 2000. After that, various groups like young adults and low-skilled workers have seen their participation rates fall, and the aging of the US workforce tends to pull down labor force participation rates as well. Thus, the CBO is estimating what the overall trend of labor force participation should be, and saying that it hasn't yet rebounded back to the long-term trend. But you can also see, if you squint a bit, that the drop in labor force participation has leveled out a bit in the recent data. Also, the employment/population ratio has been rising since about 2010.
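The relationships among these measures can be written down directly. A minimal sketch with made-up numbers (not actual US data) showing how the participation rate, the employment/population ratio, and the unemployment rate fit together:

```python
def labor_stats(employed, unemployed, population):
    """Standard labor force definitions: the labor force is the employed
    plus the unemployed (those actively seeking work); people who have
    stopped looking are outside the labor force entirely."""
    labor_force = employed + unemployed
    return {
        "participation_rate": labor_force / population,
        "employment_population_ratio": employed / population,
        "unemployment_rate": unemployed / labor_force,
    }

# Hypothetical economy: 150 employed, 8 unemployed, 250 adults.
stats = labor_stats(150, 8, 250)
```

Note that if discouraged workers leave the labor force, the unemployment rate can fall even with no change in employment, which is exactly why the participation rate matters for judging slack.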
A second measure of labor market slack looks at compensation for workers (including wages and benefits). The argument here is that when labor market slack is low, employers must compete harder for scarce workers, so compensation should rise more quickly. This figure from the CBO report shows the change in compensation with actual data through the end of 2015, and projections after that. There does seem to be a little bump in hourly labor compensation toward the end of 2015 (see here for earlier discussion of this point), so as data for 2016 becomes available, the question will be whether that increase is sustained.
One more measure of labor market slack is the rate at which workers are being hired, which shows the liveliness of one part of the labor market, and the rate at which workers are quitting. The quit rate is revealing because when the economy is bad, workers are more likely to hang onto their existing jobs. Both hiring and quits have largely rebounded back to pre-recession levels, as shown by this figure from the August 2016 release of the Job Openings and Labor Turnover Survey conducted by the US Bureau of Labor Statistics.
Finally, average hours worked per week is also a common measure of labor market slack. The CBO report notes that this measure has mostly rebounded back to its pre-recession level. Here's a figure from the US Bureau of Labor Statistics showing the pattern.
All economic news has a good news/bad news quality, and the fall in labor market slack is no exception. The good news is obvious: unemployment rates are down and wages are showing at least some early signs of rising. It wasn't obvious, back during the worst of the Great Recession in 2008-2009, how quickly or how much the unemployment rate would decline. As one example of the uncertainty, the Federal Reserve announced in December 2012 that “this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent,” along with some other conditions, to reassure markets that its policy interest rate would remain low. But then the unemployment rate fell beneath 6.5% in April 2014, and the Fed decided it wasn't yet ready to start raising interest rates, so it retracted its policy from less than 18 months earlier.
The corresponding bad news is that whatever you dislike about the labor market can't really be blamed on the Great Recession any more. So if you're worried about issues like a lack of jobs for low-wage labor, too many jobs paying at or near the minimum wage, not enough on-the-job training, not enough opportunities for longer-term careers, loss of jobs in sectors like manufacturing and construction, too much part-time work, or inequality of the wage distribution, you can no longer argue that the issues will be addressed naturally as the economy recovers. After all, labor market slack has now already declined to very low levels.
Thus, it was a big deal when Leontief wrote an essay for Scientific American in September 1982 arguing that new trends in mechanization and computing were displacing jobs. The title and subtitle give a good sense of his themes: "The Distribution of Work and Income: When workers are displaced by machines, the economy can suffer from a loss of their purchasing power. Historically the problem has been eased by shortening the work week, a trend currently at a standstill." (The archives of Scientific American from these years are not readily available on-line, as far as I can tell, but many libraries will have back issues on their shelves.) That special issue of Scientific American contained seven other essays about how American jobs were being lost to the "mechanization of work," with articles discussing how mechanization was reducing jobs in a wide range of industries: manufacturing, design and coordination of manufacturing, agriculture, mining, commerce (including finance, transport, distribution, and communications), and information-based office work.
Of course, Leontief knew perfectly well that in the past, technology had been one of the main drivers of disruptions that over time raised the average standard of living. Why would the effects of new technologies be different? In terms that seem very similar to the concerns raised by some current writers, Leontief wrote in 1982:
There are signs today, however, that past experience cannot serve as a reliable guide for the future of technological change. With the advent of solid-state electronics, machines that have been displacing human muscle from the production of goods are being succeeded by machines that take over the functions of the human nervous system not only in production but in the service industries as well … The relation between man and machine is being radically transformed. … Computers are now taking on the jobs of white-collar workers, performing first simple and then increasingly complex mental tasks. Human labor from time immemorial played the role of principal factor of production. There are reasons to believe human labor will not retain this status in the future.
Re-reading Leontief's 1982 essay today, with the benefit of hindsight, I find myself struck by how he sometimes hits, and then sometimes misses or sideswipes, what I would view as the main issues of how technology can lead to dislocation and inequality.
For example, Leontief expresses a concern that "the U.S. economy has seen a chronic increase in unemployment from one oscillation of the business cycle to the next." Of course, he is writing in 1982, after the tumultuous economic movements of the 1970s. The US unemployment rate was above 10% from September 1982 (when his article was published) through June 1983. But since then, there have been multiple periods (late 1980s and early 1990s, the mid-1990s, and mid-2000s, and since February 2015), when the monthly unemployment rate has been 5.5% or lower. With the benefit of three decades of hindsight since Leontief's 1982 essay, the issue of technological disruption is not being manifested in a steadily higher unemployment rate, but instead in the dislocation of workers and the way in which technology contributes to inequality of wages.
If one presumes (for the sake of argument) a continued advance in technology that raises output, then the question is what form these gains will take. More leisure? If not more leisure, will the income gains be broadly or narrowly based?
Leontief emphasizes that one of the gains of technology in broad historic terms was a shorter work week. For example, he writes of how "reduction of the average work week in manufacturing from 67 hours in 1870 to somewhat less than 42 hours" by the mid-1940s, and points out that the work week did not continue to decline at the same pace after that. This notion that economic gains from technology will lead to a dramatically shorter work week is not new to Leontief: for example, John Maynard Keynes in his 1930 essay "Economic Possibilities for Our Grandchildren" (available a number of places on the web, like here and here) wrote about how technology was going to be so productive that it would move us toward a 15-hour work-week.
If technology doesn't just make the same things more cheaply, but also makes new goods and services that people desire, then the gains from technology may not lead to dramatically shorter work weeks. Very little in Leontief's essay discusses how technology can produce brand-new industries and jobs, and how these new industries provide consumers with goods and services that they value.
Concerning the issue of how technology can lead to greater inequality of incomes, Leontief offers some useful and thought-provoking metaphors. For example, here's his Adam and Eve comparison:
"Adam and Eve enjoyed, before they were expelled from Paradise, a high standard of living without working. After their expulsion they and their successors were condemned to eke out a miserable existence, working from dawn to dusk. The history of technological progress over the past 200 years is essentially the story of the human species working its way slowly and steadily back into Paradise. What would happen, however, if we suddenly found ourselves in it? With all goods and services provided without work, no one would be gainfully employed. Being unemployed means receiving no wages. As a result until appropriate new income policies were formulated to fit the changed technological conditions everyone would starve in Paradise."
As noted earlier, the evidence since 1982 doesn't support a claim of steadily higher unemployment rates. But it does support a concern of increasing inequality, where those who find themselves in a position to benefit most from technology will tend to gain. One need not be worried about "starving in Paradise" to be worried that the economy could be a Paradise for those receiving a greater share of income, but not for those on the outside of Paradise looking in.
Leontief also offers an interesting image about what it means to be a worker who can draw on a larger pool of capital, using an example of an Iowa farmer. He writes:
What I have in mind is a complex of social and economic measures to supplement by transfer from other income shares the income received by blue- and white-collar workers from the sale of their services on the labor market. A striking example of an income transfer of this kind attained automatically without government intervention is there to be studied in the long-run effects of the mechanization of agriculture on the mode of operation and the income of, say, a prosperous Iowa farm.
Half a century ago the farmer and the members of his family worked from early morning until late at night assisted by a team of horses, possibly a tractor and a standard set of simple agricultural implements. Their income consisted of what essentially amounted to wages for a 75- or 80-hour work week, supplemented by a small profit on their modest investment. Today the farm is fully mechanized and even has some sophisticated electronic equipment. The average work week is much shorter, and from time to time the family can take a real vacation. Their total wage income, if one computes it at the going hourly rate for a much smaller number of manual-labor hours, is probably not much higher than it was 50 years ago and may even be lower. Their standard of living, however, is certainly much higher: the shrinkage of their wage income is more than fully offset by the income earned on their massive capital investment in the rapidly changing technology of agriculture.
The shift from the old income structure to the new one was smooth and practically painless. It involved no more than a simple bookkeeping transaction because now, as 50 years ago, both the wage income and the capital income are earned by the same family. The effect of technological progress on manufacturing and other nonagricultural sectors of the economy is essentially the same as it is on agriculture. So also should be its repercussions with respect to the shortening of the work day and the allocation of income.
Leontief here is eliding the fact that the share of American workers in agriculture was about 2-3% back in 1982, compared to 25-30% about 50 years earlier. He is discussing a smooth transfer of new technology for a single family, but with the rise in agricultural output for that family, something like 90% of their neighbors from 50 years earlier ended up transferring out of farming altogether. When Leontief and other modern writers talk about how modern technology is fundamentally more disruptive than earlier technology, I'm not sure I agree. The shift of the US economy to mechanized agriculture was an extraordinarily disruptive change.
But Leontief also has his finger on a central issue here, which is that jobs which find ways to use technology and investment as a complement are more likely to prosper. Along these lines, I'm intrigued by the notion that when workers use web-based connectivity and applications, they are accessing a remarkable global capital infrastructure that complements their work--even though the Internet isn't physically visible in my side yard like a combine harvester.
A final Leontief metaphor might be called the "horses don't vote" issue. In a short article written at about this same time for a newsletter called Bottom Line Personal (April 30, 1983, 4:8, pp. 1+), Leontief wrote:
People cannot eat much more than they already do. They cannot wear much more clothing. But they certainly can use more services, and they begin to purchase more of them. This natural shift comes simultaneously with the technological changes. But in the long run, the role of labor diminishes even in service industries. Look at banking, where more and more is done electronically and automatically, and at secretarial areas, where staff work is being replaced by word processors.
The problem becomes: What happens to the displaced labor? In the last century, there was an analogous problem with horses. They became unnecessary with the advent of tractors, automobiles and trucks. And a farmer couldn't keep his horses and postpone the change to tractors by feeding them less oats. So he got rid of the horses and used the more productive tractor. After all, this doesn't precipitate a political problem, since horses don't vote. But it is more difficult to find a solution when you have the same problem with people. You do not need them as much as before. You can produce without them.
So the problem becomes the task of reevaluating the role of human labor in production as it becomes less important. It is a simple fact that fewer people will be needed, yet more goods and services can be produced. But the machinery and technology will not benefit everyone equally. We must ask: Who will get the benefit? How will the income be distributed? We are accustomed to rewarding people for work based on market mechanisms, but we can no longer rely on the market mechanism to function so conveniently.
As noted earlier, when Leontief says that it's "a simple fact" that fewer people will be needed, I think he is overstating his case. Since 1982, the prediction of steadily rising unemployment rates has not come true. However, the prediction of steadily rising inequality of incomes and diminished opportunity for low-skilled labor has occurred.
The extent to which one views inequality as a problem isn't a matter of pure economics, but involves political and even moral or aesthetic judgments. The same can be said about preferred political solutions. Leontief, who did his early college studies at the University of Leningrad and his Ph.D. work at the University of Berlin, both in the 1920s, had a strong bias that more government planning was a necessary answer. His essay is heavily sprinkled with comments about how dealing with distributional issues will require "close and systematic cooperation between management and labor carried on with government support," and with support for the German/Austrian economic policy model of the 1980s.
With Leontief's policy perspective in mind, I was intrigued to read this comment from his 1982 essay: "In the long run, responding to the incipient threat of technological unemployment, public policy should aim at securing an equitable distribution of work and income, taking care not to obstruct technological progress even indirectly." My own sense is that if you take seriously the desire not to obstruct technological progress, even indirectly, then you need to allow for and even welcome the possibility of strong disruptions within the existing economy. In the world of US-style practical politics, you must then harbor grave doubts about a Leontief-style strong nexus of government along with the management and labor of existing firms.
I agree with Leontief that economic policy should seek to facilitate technological change and not to obstruct it, even indirectly. But rather than seeing this as a reason to support corporatist public policy, I would say that when technology is contributing to greater inequality of incomes, as it seems to be doing in recent decades, then we should address the inequality directly. Appropriate steps include taxes on those with higher incomes, direct subsidies to lower-income workers in ways that increase their wages, and indirect subsidies in the form of public spending on schools, retraining and job search; public transportation and public safety; and parks, libraries, and improvements in local living environments.
When it comes to rising levels of carbon and other greenhouse gases in the atmosphere, I'm in favor of a consider-everything approach, including carbon capture and storage, geoengineering, noncarbon energy sources, energy conservation, and any other options that come to hand. But perhaps the most miraculous possibilities involve finding ways to absorb carbon dioxide from the air directly and then use it as part of a fuel source like methanol. This technology is not yet close to practical on any wide scale, but here are three examples of what's happening.
For example, researchers at Argonne National Laboratory and the University of Illinois Chicago have been working on what can be viewed as an "artificial leaf" for taking carbon dioxide out of the atmosphere. A press release from Argonne described it this way: "To make carbon dioxide into something that could be a usable fuel, Curtiss and his colleagues needed to find a catalyst — a particular compound that could make carbon dioxide react more readily. When converting carbon dioxide from the atmosphere into a sugar, plants use an organic catalyst called an enzyme; the researchers used a metal compound called tungsten diselenide, which they fashioned into nanosized flakes to maximize the surface area and to expose its reactive edges. While plants use their catalysts to make sugar, the Argonne researchers used theirs to convert carbon dioxide to carbon monoxide. Although carbon monoxide is also a greenhouse gas, it is much more reactive than carbon dioxide and scientists already have ways of converting carbon monoxide into usable fuel, such as methanol." The research was just published in the July 29 issue of Science magazine, in "Nanostructured transition metal dichalcogenide electrocatalysts for CO2 reduction in ionic liquid," by a long list of co-authors headed by Mohammad Asadi (vol. 353, issue 6298, pp. 467-470).
“A company in Iceland is already doing that: Carbon Recycling International,” Goeppert said. “There, they are recycling CO2 with hydrogen they obtain from water. They use geothermal energy, which is relatively cheap. They have been producing methanol that way for five years, exporting it to Europe, to use as a fuel. It’s still relatively small scale, but it’s a start.”
Methanol can easily be mixed into gasoline, as ethanol is today, or cars can be adapted fairly cheaply to run on 100% methanol. Diesel engines can run on methanol, too.
Of course, I don't know if carbon-dioxide-to-methanol can put a real dent into atmospheric carbon in any cost-effective way. But again, I'm a consider-everything kind of guy. And before I get too skeptical about how fields of artificial leaves might work for this purpose, it's worth remembering that fields of solar collectors didn't look very practical as a method of generating electricity a couple of decades ago, either.
Would you expect that the number of US jobs in information technology fields is rising or falling over time? On one side, the growing importance of IT in so many areas of the US economy suggests that the job totals should be rising. On the other hand, one often reads warnings about how a combination of advances in technology and outsourcing to other countries are making certain jobs obsolete, and it seems plausible that a number of IT-related jobs could be either eliminated or outsourced to other countries by improved web-based software and more powerful and reliable computing capabilities. So which effect is bigger? Julia Beckhusen provides an overview in "Occupations in Information Technology," published by the US Census Bureau (August 2016, American Community Survey Reports ACS-35).
The top line is that US jobs in IT seem to be roughly doubling in each decade since the 1970s. Here's an illustrative figure.
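As a back-of-the-envelope check (my own arithmetic, not a figure from the Census report), doubling every decade implies a compound annual growth rate of about 7.2%:

```python
# If employment doubles every `doubling_period_years` years, the implied
# compound annual growth rate g solves (1 + g) ** period = 2.
def implied_annual_growth(doubling_period_years):
    return 2 ** (1 / doubling_period_years) - 1

growth = implied_annual_growth(10)  # roughly 0.072, i.e. about 7.2% per year
```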
What exactly are these jobs? Here's a breakdown for 2014. The top five categories, which together make up about three-quarters of all the IT jobs, are software developers, systems and applications software; computer support specialists; computer occupations, all other; computer and information systems managers; and computer systems analysts.
Are these IT jobs basically in the category of high-paying jobs for highly educated workers? Some are, some aren't. The proportion of workers in each of these IT job categories with a master's degree or higher is shown by the bar graphs on the left. The median pay for each job category is shown by the dot-graph on the right. Unsurprisingly, more than half of all those categorized as "computer and information research scientists" have a master's degree or higher; what is perhaps surprising here is that almost half of those in this job category don't have this level of education. But in most of these IT job categories, only one-quarter, and in many cases much less than one-quarter, of those holding such an IT job have a master's degree. Indeed, I suspect that in many of the lower-paid IT job categories, many do not have a four-year college degree either: there are a lot of shorter-term programs to get some IT training. In general, IT jobs do typically pay more than the average US job. But the highest-paid category, "computer and information research scientists," also has the smallest number of workers (as shown in the graph above).
Finally, to what extent are these IT jobs held by those born in another country who have immigrated at least for a time to the United States? As the bars at the top of the figure show, 17% of all US jobs are held by foreign-born workers; among IT workers, it's 24%.
Beckhusen provides lots more detail in breaking down IT jobs along various dimensions. My own guess is that the applications for IT in the US economy will continue to be on the rise, probably in a dramatic fashion, and that many of those applications will turn out to be even more important for society than Twitter or Pokémon Go. The biggest gains in jobs won't be for the computer science researchers, but instead will be for the people installing, applying, updating, and using IT in an enormously wide range of contexts. If your talents and inclinations lead this way, it remains a good area to work on picking up some additional skills.