How Many High Wealth Individuals?

How many people in the world have a million dollars or more in financial assets? That is, leave aside the value of real estate or other owned property. Capgemini and RBC Wealth Management provide some estimates in a report that seeks to define the global market for the wealth management industry, the World Wealth Report 2013.

The report splits High Net Worth Individuals (HNWI, natch) into three categories. Those with $1 million to $5 million in financial assets are in the "millionaire next door" category, and while I find that name a bit grating, it's fair enough. After all, a substantial number of households in high-income countries that are near retirement, if they have been steadily saving throughout their working lives, will have accumulated $1 million or more. The next step up is those with $5 million to $30 million in financial assets, whom this report calls the "mid-tier millionaires." At the top, with more than $30 million in financial assets, are the "ultra-HNWI" individuals. Here's the global distribution:

A few quick observations:

1) The "ultra-HNWI" individuals are less than 1% of the total HNWI population, but have 35% of the total assets of this group. The "millionaires next door" with $1 million to $5 million are 90% of the high net worth individual population, and have 42.8% of the total net worth of this group.

2) Another table in the report shows that 3.4 million of the high net worth individuals–about 28% of the total–are in the United States. The next four countries by number of people in the high net worth category are Japan (1.9 million), Germany (1.0 million), China (643,000), and the UK (465,000).

3) World population is about 7 billion. So the 12 million or so high net worth individuals are about one-sixth of 1% of the world population.
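For readers who like to see the arithmetic, here is a quick back-of-the-envelope sketch in Python. It uses only the rounded figures already quoted above, so it is illustrative rather than anything new from the report:

```python
# Back-of-the-envelope check of the high net worth share of world population,
# using only the rounded figures quoted above.
hnwi_count = 12_000_000           # roughly 12 million high net worth individuals
world_population = 7_000_000_000  # roughly 7 billion people

share = hnwi_count / world_population
print(f"HNWI share of world population: {share:.3%}")
# about 0.171%, i.e. roughly one-sixth of 1%
```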

High School Standards and Graduation Rates: The Tradeoff

One grim piece of news for the U.S. economy in the last few decades has been that the high school graduation rate flattened out around 1970. In a modern economy that depends on skills and brainpower, this is a troubling pattern. Richard J. Murnane considers the evidence and explanations in "U.S. High School Graduation Rates: Patterns and Explanations," in the most recent issue of the Journal of Economic Literature (vol. 51:2, pp. 370–422). The JEL is not freely available on-line, although many in academia will have on-line access through their library or through a personal membership in the American Economic Association. Here's how Murnane sets the stage (footnotes and citations omitted for readability):

"During the first seventy years of the twentieth century, the high school graduation rate of teenagers in the United States rose from 6 percent to 80 percent. A result of this remarkable trend was that, by the late 1960s, the U.S. high school graduation rate ranked first among countries in the Organisation for Economic Co-operation and Development (OECD). The increase in the proportion of the labor force that had graduated from high school was an important force that fueled economic growth and rising incomes during the twentieth century.

Between 1970 and 2000, the high school graduation rate in the United States stagnated. In contrast, the secondary school graduation rate in many other OECD countries increased markedly during this period. A consequence is that, in 2000, the high school graduation rate in the United States ranked thirteenth among nineteen OECD countries.

Until quite recently, it appeared that the stagnation of the U.S. high school graduation rate had continued into the twenty-first century. However, evidence from two independent sources shows that the graduation rate increased substantially between 2000 and 2010. This increase prevented the United States from losing further ground relative to other OECD countries in preparing a skilled workforce. But graduation rates in other OECD countries also increased during that decade. As a result, the U.S. high school graduation rate in 2010 was still below the OECD average."

Here's a figure showing the patterns. The horizontal axis shows (approximate) birth year. The vertical axis shows the high school graduation rate for those who were 20-24 at the time. Thus, for example, the upward movement of the graph for those born around 1980 is based on data when those people had reached the age of 20-24.

Why the stagnation in high school graduation rates from 1970 up to about 2000? Clearly, it's not because the labor market rewards for getting a high school degree had declined; in fact, the gains from a high school degree had increased. Research has looked at explanations especially relevant to certain places and times, like a boom in demand for Appalachian coal in the 1970s that might have made a high school degree look less valuable to lower-skilled labor in that area, or how the crack epidemic of the late 1980s and early 1990s altered the expected rewards to finishing high school for a group of young men in certain inner cities, or how the end of certain court-ordered desegregation plans led to higher dropout rates for some at-risk youth.

While all of these explanations have an effect in certain times and places, Murnane suggests a bigger cause: a pattern of increasing high school graduation requirements that started in the 1970s. He writes: "In summary, my interpretation of the evidence is that increases in high school graduation requirements during the last quarter of the twentieth century increased the nonmonetary cost of earning a diploma for students entering high school with weak skills. By so doing, they counteracted the increased financial payoff to a diploma and contributed to the stagnation in graduation rates over the last decades of the twentieth century."

Why have high school graduation rates apparently risen in the last 10 years or so? Murnane offers some fragmentary evidence that better-prepared ninth graders, expanded preschool programs, and reductions in teen pregnancy may have played a role. But his conclusion is modest: "In summary, there are many hypotheses for why the high school graduation rate of 20–24-year-olds in 2010 is higher than it was in 2000, and why the increase in the graduation rate was particularly large for blacks and Hispanics. However, to date, there is no compelling evidence to explain this encouraging recent trend."

Murnane offers the useful reminder that voting to raise graduation standards is easy, but raising the quality of education so that a rising share of students can meet those standards is hard. "An assumption implicit in state education policies is that the quality of schooling will improve sufficiently to enable high school graduation rates to rise even as graduation requirements are stiffened. Indeed, many states increased public expenditures on public education to facilitate this improvement. However, it has proven much more difficult to improve school quality than to legislate increases in graduation requirements."

My own concern about high school graduation requirements is that they are too often focused on getting a student into a college, any college, rather than moving the student toward a career. A high school student in the 25th percentile of a class should still be able to graduate from high school. But while some students who performed poorly in high school will shine in college–and should have an opportunity to do so–it is an unforgiving fact that many students at the bottom of the high school performance distribution will have little interest or aptitude in signing up for more schooling.

In the Spring 2013 issue of the Journal of Economic Perspectives, Julie Berry Cullen, Steven D. Levitt, Erin Robertson, and Sally Sadoff tackle this question: "What Can Be Done To Improve Struggling High Schools?" They point out that the overall high school graduation rates do not show the depth of the problem in a number of inner-city school districts. They conclude: "In spite of decades of well-intentioned efforts targeted at struggling high schools, outcomes today are little improved. A handful of innovative programs have achieved great success on a small scale, but more generally, the economic futures of the students at the bottom of the human capital distribution remain dismal. In our view, expanding access to educational options that focus on life skills and work experience, as opposed to a focus on traditional definitions of academic success, represents the most cost-effective, broadly implementable source of improvements for this group." (Full disclosure: I've been the Managing Editor of the Journal of Economic Perspectives for the past 27 years, so I am predisposed to find all of the articles intriguing. All JEP articles back to the first issue in 1987 are freely available on-line courtesy of the American Economic Association.)

Setting a Carbon Price: What's Known, What's Not

A number of scientists believe that rising levels of carbon dioxide are likely to lead to climate change. Maybe they are incorrect! But prudence suggests that when enough warning sirens are going off, you should at least start looking at options. In that spirit, I found it useful to consider Robert S. Pindyck's essay on "Pricing Carbon When We Don’t Know the Right Price," in the Summer 2013 issue of Regulation magazine. The issue also includes four other articles on carbon tax issues. Pindyck sets the stage in this way:

"There is almost no disagreement among economists that the true cost to society of burning a ton of carbon is greater than its private cost. … This external cost is referred to as the social cost of carbon (SCC) and is the basis for the idea of imposing a tax on carbon emissions or adopting a similar policy such as a cap-and-trade system. However, agreeing that the SCC is greater than zero isn’t really agreeing on very much. Some would argue that any increases in global temperatures will be moderate, will occur in the far distant future, and will have only a small impact on the economies of most countries. If that’s all true, it would imply that the SCC is small, perhaps only around $10 per ton of CO2, which would justify a very small (almost negligible) tax on carbon emissions, e.g., something like 10 cents per gallon of gasoline. Others would argue that without an immediate and stringent GHG abatement policy, there is a reasonable possibility that substantial temperature increases will occur and might have a catastrophic effect. That would suggest the SCC is large, perhaps $100 or $200 per ton of CO2, which would imply a substantial tax on carbon, e.g., as much as $2 per gallon of gas. So who is right, and why is there such wide disagreement?"
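To see where Pindyck's per-gallon figures come from, here is a rough sketch of the conversion from a social cost of carbon to an implied gasoline tax. The emissions factor of roughly 0.0089 metric tons of CO2 per gallon of gasoline is my own approximation for illustration, not a number from the article:

```python
# Rough conversion from a social cost of carbon (dollars per ton of CO2)
# to an implied gasoline tax (dollars per gallon).
# Assumes roughly 0.0089 metric tons of CO2 per gallon of gasoline burned;
# that emissions factor is my approximation, not a figure from the article.
CO2_TONS_PER_GALLON = 0.0089

def implied_gas_tax(scc_per_ton_co2):
    """Implied tax per gallon of gasoline for a given social cost of carbon."""
    return scc_per_ton_co2 * CO2_TONS_PER_GALLON

for scc in (10, 100, 200):
    print(f"SCC ${scc}/ton CO2 -> about ${implied_gas_tax(scc):.2f} per gallon")
# $10/ton  -> about $0.09 per gallon (roughly the "10 cents" case)
# $200/ton -> about $1.78 per gallon (roughly the "$2 per gallon" case)
```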

Pindyck acknowledges the uncertainty over the atmospheric science of climate change, but as befits an economist, his main focus is on the economic issues. He points to the often-cited study by Michael Greenstone, Elizabeth Kopits, and Ann Wolverton, who published a 2011 paper on "Estimating the Social Cost of Carbon for Use in U.S. Federal Rulemakings: A Summary and Interpretation." They estimated a "central value" for the social cost of carbon of $21 per ton of carbon dioxide emissions. But as Pindyck points out, this central value is itself highly uncertain, for three reasons. First, the link from climate change to an effect on economic output "is completely ad hoc and of almost no predictive value. The typical IAM ["integrated assessment model"] has a loss function that relates temperature increases to reductions in GDP. But there is no economic theory behind the loss function; it is simply made up. Nor are there data on which to base the parameters of the function; instead the parameters are simply chosen to yield moderate losses that seem “reasonable” (e.g., 1 or 2 percent of GDP) from moderate temperature increases (e.g., 2° or 3°C). Furthermore, once we consider larger increases in temperatures (e.g., 5°C or higher), determining the economic loss becomes pure guesswork. One can plug high temperatures into IAM loss functions, but the results are just extrapolations with no empirical or theoretical grounding."

A second problem is that the "central value" doesn't reveal anything about the potential risk of catastrophe–and once one combines the uncertainties of how well climate science can predict catastrophic weather changes that are 50 or 100 years away with the uncertainties over the economic costs of those weather changes, this problem is severe.

The third problem is choosing a "discount rate"–that is, how should we best compare the costs of acting in the near term to reduce carbon emissions with the benefits that would be received in 50 or 100 years? Presumably, a substantial share of the benefits will go to people who do not yet exist, and who, presuming that economic growth continues over time, will on average have considerably higher incomes than we do today. Placing a high value on those future benefits means that we should be willing to sacrifice a great deal in the present; placing a lower value on those future benefits means a smaller willingness to incur costs in the present. But deciding how much to discount the future is an unsettled question in both economics and philosophy.
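To see why the choice of discount rate matters so much at 50- or 100-year horizons, here is a minimal sketch of standard exponential discounting. The benefit amount and the discount rates are purely illustrative, not drawn from Pindyck:

```python
# Present value of a climate benefit received far in the future,
# under standard exponential discounting: PV = benefit / (1 + r)**years.
# The benefit amount and discount rates below are illustrative only.
def present_value(benefit, annual_rate, years):
    return benefit / (1 + annual_rate) ** years

benefit = 1_000_000_000  # a hypothetical $1 billion of avoided damages
for years in (50, 100):
    for rate in (0.01, 0.03, 0.05):
        pv = present_value(benefit, rate, years)
        print(f"{years} years at {rate:.0%}: present value ${pv:,.0f}")
# At a 100-year horizon, moving the discount rate from 1% to 5% shrinks the present
# value from roughly $370 million to under $8 million, which is why the choice matters.
```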

Pindyck's policy proposal is to set a low carbon tax now. He argues: "Because it is essential to establish that there is a social cost of carbon, and that social cost must be internalized in the prices that consumers and firms actually see and pay. Later, as we learn more about the true size of the SCC, the carbon tax can be increased or decreased accordingly." My own views on this subject favor a "Drill-Baby Carbon Tax."

But I'd take a moment here to note that the temptation to argue based on the low-probability chance of catastrophe needs to be handled with care. After all, there are lots of possible sequences of events that are low-probability, but potentially catastrophic. Those who want to limit use of fossil fuels call up certain climate change scenarios. Those who are anti-science point to the possibility that scientists working with genetics or nanotechnology over the next century will create a doomsday plague. Those who favor huge spending on defense and espionage point to the possibility that a rogue government or a group of terrorists will be able to arm themselves with weapons of mass destruction. Those who favor aggressive space exploration talk about the possibility of the earth suffering a devastating strike from an asteroid in the next century or two. Write your own additional political, economic, and science fiction disaster scenarios here! My point is that being able to name a catastrophe with a low but unquantifiable probability is a fairly cheap tool of argumentation.

The Punch Bowl Speech: William McChesney Martin

In monetary policy jargon, "taking away the punch bowl" refers to a central bank action to reduce the stimulus that it has been giving the economy.

Thus, last Wednesday, Ben Bernanke discussed the possibility that if the U.S. economy performs well, the Federal Reserve would reduce and eventually stop its "quantitative easing" policy of buying U.S. Treasury bonds and various mortgage-backed securities. Everyone knows this needs to happen sooner or later, but Bernanke's comments raised the possibility that it might be sooner rather than later, and at least for a few days, stock markets dropped and broader financial markets were shaken. Various blog commentaries and press reports referred to Bernanke's action as taking away the "punch bowl" (for example, here, here, and here).

The "punch bowl" metaphor seems to trace back to a speech given on October 19, 1955, by William McChesney Martin, who served as Chairman of the Federal Reserve from 1951 through 1970, to the New York Group of the Investment Bankers Association of America. Here's what Martin said to the financiers of his own time, who presumably weren't that eager to see the Fed reduce its stimulus, either:

"If we fail to apply the brakes sufficiently and in time, of course, we shall go over the cliff. If businessmen, bankers, your contemporaries in the business and financial world, stay on the sidelines, concerned only with making profits, letting the Government bear all of the responsibility and the burden of guidance of the economy, we shall surely fail. … In the field of monetary and credit policy, precautionary action to prevent inflationary excesses is bound to have some onerous effects–if it did not it would be ineffective and futile. Those who have the task of making such policy don't expect you to applaud. The Federal Reserve, as one writer put it, after the recent increase in the discount rate, is in the position of the chaperone who has ordered the punch bowl removed just when the party was really warming up."

Monetary policy in the 1950s got a lot less attention than it does today: indeed, there was a significant group of economists who believed that it was completely ineffectual. The old story told by Herb Stein in his 1969 book, The Fiscal Revolution in America, was that President John F. Kennedy used to remember what Martin did by the fact that "Martin" started with an "M," as did "monetary," so he knew that monetary policy was what the Federal Reserve did. (Apparently he was not bothered by the fact that "fiscal" and "Federal Reserve" both start with an "f.")

But Martin viewed monetary policy very much as a balancing act. As he once said in testimony before the U.S. Senate: “Our purpose is to lean against the winds of deflation or inflation, whichever way they are blowing.” (In the Winter 2004 issue of the Journal of Economic Perspectives, where I've been managing editor since 1986, Christina Romer and David Romer wrote "Choosing the Federal Reserve Chair: Lessons from History," which puts Martin's views on monetary policy in the context of other pre-Bernanke Fed chairmen.)

Martin held the view that monetary policy could be useful in reducing the risk of depressions and inflations, but that it wasn't all-powerful. In the 1955 speech, he said:

"But a note should be made here that, while money policy can do a great deal, it is by no means all powerful. In other words, we should not place too heavy a burden on monetary policy. It must be accompanied by appropriate fiscal and budgetary measures if we are to achieve our aim of stable progress. If we ask too much of monetary policy we will not only fail but we will also discredit this useful, and indeed indispensable, tool for shaping our economic development. …

"Nowadays, there is perhaps a tendency to exaggerate the effectiveness of monetary policy in both directions. Recently, opinion has been voiced that the country's main danger comes from a roseate belief that monetary policy, backed by flexible tax and debt management policies and aided by a host of built-in stabilizers, has completely conquered the problem of major economic fluctuations and relegated them to ancient history. This, of course, is not so because we are dealing with human beings and human nature.

"While the pendulum swings between too little or too much reliance upon credit and monetary policy, there is an emerging realization more and more widely held and expressed by business, labor and farm organizations that ruinous depressions are not inevitable, that something can be done about moderating excessive swings of the business cycle. The idea that the business cycle can be altogether abolished seems to me as fanciful as the notion that the law of supply and demand can be repealed. It is hardly necessary to go that far in order to approach the problems of healthy economic growth sensibly and constructively. Laissez faire concepts, the idea that deep depressions are divinely guided retribution for man's economic follies, the idea that money should be the master instead of the servant, have been discarded because they are no longer valid, if they ever were."

It seems to me that at least some of the current discussion of the Fed has a tone similar to the exaggeration in both directions that Martin described. Some critics argue that the extraordinary monetary policies undertaken since the later part of 2007 are useless. On the other extreme, other critics argue that if only those extraordinary policies had been pursued with considerably more vigor, the U.S. economy would already have returned to full employment. In other words, the Fed is either ineffectual or all-powerful–but the truth is likely to lie between these extremes.

My own sense, as I've argued on this blog more than once, is that the extraordinary monetary policy steps taken by the Fed made sense in the context of the extraordinary financial crisis and Great Recession of 2007-2009, and even for a year or two or three afterward. But the Great Recession ended four years ago in June 2009. The extreme stimulus policies of the Fed–ultra-low interest rates and direct buying of financial securities–don't seem to pose any particular danger of inflation as yet, but they create other dislocations: savers suffer, and some will go on a "search for yield" that can create new asset market bubbles; money market funds are shaken; and banks and governments that can borrow cheaply are less likely to carry out needed reforms. And of course, there are the economic and financial problems that will arise when the Fed does take away the punch bowl. For discussion of these concerns, see earlier blog posts here, here, here and here.

My own sense is that there are times for monetary policy to tighten and times for it to loosen, and the very difficult practical wisdom lies in knowing the difference. In a similar spirit, Martin started his 1955 speech this way: "There's an apocryphal story about a professor of economics that sums up in a way the theme of what I would like to talk about this evening. In final examinations the professor always posed the same questions. When he was asked how his students could possibly fail the test, he replied simply, 'Well, it's true that the questions don't change, but the answers do.'"

Technology and Job Destruction

Is there something about the latest wave of information and communication technologies that is especially destructive to jobs? David Rotman offers an overview of the arguments in "How Technology Is Destroying Jobs," in the July/August 2013 issue of the MIT Technology Review.

On one side, Rotman emphasizes the work of Erik Brynjolfsson and Andrew McAfee: "That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries."

As one piece of evidence, they offer a graph showing productivity growth and private-sector employment growth. Going back to 1947, these two grew at more or less the same speed. But starting around 2000, a gap opens up, with productivity growing faster than private-sector employment.

The figure sent me over to the U.S. Bureau of Labor Statistics website to look at total jobs. Total U.S. jobs were 132.6 million in December 2000. Then there's a drop associated with the recession of 2001, a rise associated with the housing and finance bubble, a drop associated with the Great Recession, and more recently a bounceback to 135.6 million jobs in May 2013. But put it all together, and from December 2000 to May 2013, total U.S. jobs are now about 2.2% higher than they were back at the start of the century.
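For what it is worth, here is the simple arithmetic behind that percentage, using just the two BLS totals quoted above:

```python
# Change in total U.S. jobs, using the two BLS totals quoted above (in millions).
jobs_dec_2000 = 132.6
jobs_may_2013 = 135.6

pct_change = (jobs_may_2013 - jobs_dec_2000) / jobs_dec_2000 * 100
print(f"Change in total jobs, December 2000 to May 2013: {pct_change:.2f}%")
# Roughly 2 percent of growth spread over more than a dozen years.
```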

Why the change? The arguments rooted in technological developments sound like this: "Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete."

Of course, there are other arguments about slower job growth rooted in other factors. Looking at the year 2000 as a starting point is not a fair comparison, because the U.S. economy was at the time in the midst of the unsustainable dot-com bubble. The current economy is still recovering from its worst episode since the Great Depression. In addition, earlier decades had seen demographic changes like a flood of baby boomers entering the workforce from the 1960s through the 1980s, along with a flood of women entering the (paid) workforce. As those trends eased off, the total number of jobs would be expected to grow more slowly.

Another response to the technology-is-killing-jobs argument is that while technology has long been disruptive, the economy has shown an historical pattern of adjusting over time. Rotman writes: "At least since the Industrial Revolution began in the 1700s, improvements in technology have changed the nature of work and destroyed some types of jobs in the process. In 1900, 41 percent of Americans worked in agriculture; by 2000, it was only 2 percent. Likewise, the proportion of Americans employed in manufacturing has dropped from 30 percent in the post–World War II years to around 10 percent today—partly because of increasing automation, especially during the 1980s. … Even if today’s digital technologies are holding down job creation, history suggests that it is most likely a temporary, albeit painful, shock; as workers adjust their skills and entrepreneurs create opportunities based on the new technologies, the number of jobs will rebound. That, at least, has always been the pattern. The question, then, is whether today’s computing technologies will be different, creating long-term involuntary unemployment."

Given that the U.S. and other high-income economies have been experiencing technological change for well over a century, and the U.S. unemployment rate was below 6% as recently as the four straight years from 2004-2007, it seems premature to me to be forecasting that technology is now about to bring a dearth of jobs. Maybe this fear will turn out to be right this time, but it flies in the face of a couple of centuries of economic history.

However, it does seem plausible to me that technological development in tandem with globalization is altering pay levels in the labor force, contributing to higher pay at the top of the income distribution and lower pay in the middle. For some discussion of technology and income inequality, see my post earlier this week on "Rock Music, Technology, and the Top 1%," and for some discussion of technology and "hollowing out" the middle skill levels of the labor force, see my post on "Job Polarization by Skill Level" or this April 2010 paper by David Autor. (Full disclosure: Autor is also editor of the Journal of Economic Perspectives, and thus is my boss.)

Given that new technological developments can be quite disruptive for existing workers, the conclusion I draw is the importance of finding ways for more workers to work with computers and robots in ways that magnify their productivity. Rotman mentions a previous example of such a social transition: "Harvard’s [Larry] Katz has shown that the United States prospered in the early 1900s in part because secondary education became accessible to many people at a time when employment in agriculture was drying up. The result, at least through the 1980s, was an increase in educated workers who found jobs in the industrial sectors, boosting incomes and reducing inequality. Katz’s lesson: painful long-term consequences for the labor force do not follow inevitably from technological changes." It feels to me as if we need a widespread national effort in both the private and the public sector to figure out ways in which every worker in every job can use information technology to become more productive.

The arguments over how technology affects jobs remind me a bit of an old story from the development economics literature. An economist is visiting a public works project in a developing country. The project involves building a dam, and dozens of workers are shoveling dirt and carrying it over to the dam. The economist watches for a while, and then turns to the project manager and says: "With all these workers using shovels, this project is going to take forever, and it's not going to be very high quality. Why not get a few bulldozers in here?" The project manager responds: "I can tell that you are unfamiliar with the political economy of a project like this one. Sure, we want to build the dam eventually, but really, one of the main purposes of this project is to provide jobs. Getting a bulldozer would wipe out these jobs." The economist mulls this answer a bit, and then replies: "Well, if the real emphasis here is on creating jobs, why give the workers shovels? Wouldn't it create even more jobs if they used spoons to move the dirt?"

The notion that everyone could stay employed if only those new technologies would stay out of the way has a long history. But the rest of the world is not going to back off on using new technologies. And future U.S. prosperity won't be built by workers using the metaphorical equivalent of spoons, rather than bulldozers.

Macroprudential Monetary Policy: What It Is, How It Works

In the old days, like six or seven years ago, one could teach monetary policy at the intro level as consisting of basically one tool: the central bank would lower a particular target interest rate to stimulate the economy out of recessions, and raise that target interest rate when an economy seemed to be overheating. But after the last few years, even at the intro level, one needs to teach about some additional tools available to monetary authorities. One set of tools goes under the name of "macroprudential policy."

The idea here is that in the past, regulation of financial institutions focused on whether individual companies were making reasonably prudent decisions. A major difficulty with this "microprudential" approach to regulation, as the Great Recession showed, is that it didn't take into account whether the decisions of many financial firms all at once were creating macroeconomic risk. In particular, when the central bank was looking at whether the economy was sinking into recession or on the verge of inflation, it didn't take into account whether the overall level of credit being extended in the economy was growing very rapidly–as in the housing price bubble from about 2004-2007. I discussed some of the evidence on how boom-and-bust credit cycles are often linked to severe recessions in a March 2012 post on "Leverage and the Business Cycle" as well as in a February 2013 post on "The Financial Cycle: Theory and Implications."

Macroprudential policy means using regulations to limit boom-and-bust swings of credit. Douglas J. Elliott, Greg Feldberg, and Andreas Lehnert offer a useful listing of these kinds of policies, how they have been used in the past, and some preliminary evidence on how they have worked in "The History of Cyclical Macroprudential Policy in the United States," written as a working paper in the Finance and Economics Discussion Series published by the Federal Reserve.

One basic but quite useful contribution of the paper is to organize a list of macroprudential policy tools. One set of tools can be used to affect demand for credit, like rules about loan-to-value ratios for those borrowing to buy houses, margin requirements for those buying stocks, the acceptable length of loans for buying houses, and tax policies like the extent to which interest payments can be deductible for tax purposes. Another set of tools affects the supply of credit, like rules about the interest rates that financial institutions can pay on certain accounts, or the interest rates that they can charge for certain loans, along with rules about how much financial institutions must set aside in reserves or have available as capital, any restrictions on the portfolios that financial institutions can hold, and the aggressiveness of the regulators in enforcing these rules. Here's a list of macroprudential tools, with some examples of their past use.

One interesting aspect of these macroprudential policy tools is that many of them are sector-specific. When the central bank thinks of monetary policy as just moving overall interest rates, it constantly faces a dilemma. Is it worth raising interest rates for the entire economy just because there might be a housing bubble? Or just because the stock market seems to be experiencing "irrational exuberance" as in the late 1990s? Macroprudential policy suggests that one might address a housing market credit boom by altering regulations focused on housing markets, or one might address a stock market bubble by altering margin requirements for buying stock.
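To make the sector-specific flavor of these tools concrete, here is a small illustrative sketch of how one demand-side tool, a loan-to-value cap, would bind on a home purchase. The house price and the caps are hypothetical numbers of my own, not examples from the paper:

```python
# Illustrative sketch of one demand-side macroprudential tool: a loan-to-value (LTV) cap.
# All numbers are hypothetical; the point is that tightening the cap forces larger down payments.
def max_loan_and_down_payment(house_price, ltv_cap):
    max_loan = house_price * ltv_cap
    required_down_payment = house_price - max_loan
    return max_loan, required_down_payment

price = 300_000
for cap in (0.95, 0.80):  # a looser cap versus a tighter cap
    loan, down = max_loan_and_down_payment(price, cap)
    print(f"LTV cap {cap:.0%}: max loan ${loan:,.0f}, required down payment ${down:,.0f}")
# Tightening the cap from 95% to 80% raises the required down payment on a
# $300,000 house from $15,000 to $60,000, damping credit demand in that sector only.
```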

Do these macroprudential tools work? Elliott, Feldberg, and Lehnert offer some cautious evidence on this point: "In this paper, we use the term “macroprudential tools” to refer to cyclical macroprudential tools aimed at slowing or accelerating credit growth. … Many of these tools appear to have succeeded in their short-term goals; for example, limiting specific types of bank credit or liability and impacting terms of lending. It is less obvious that they have improved long-term financial stability or, in particular, successfully managed an asset price bubble, and this is fertile ground for future research. Meanwhile, these tools have faced substantial administrative complexities, uneven political support, and competition from nonbank or other providers of credit outside the set of regulated institutions. … Our results to date suggest that macroprudential policies designed to tighten credit availability do have a notable effect, especially for tools such as underwriting standards, while macroprudential policies designed to ease credit availability have little effect on debt outstanding."

When the next asset-price bubble or credit boom emerges–and sooner or later, it will–macroprudential tools and how best to use them will become a main focus of public policy discussion.

For more background on the economic analysis behind macroprudential policy, a useful starting point is "A Macroprudential Approach to Financial Regulation," by Samuel G. Hanson, Anil K. Kashyap, and Jeremy C. Stein, which appeared in the Winter 2011 issue of the Journal of Economic Perspectives. (Full disclosure: My job as Managing Editor of JEP has been paying the household bills since 1986.) Jeremy Stein is now a member of the Federal Reserve Board of Governors, so his thoughts on the subject are of even greater interest.

Global Energy Snapshots

Here are some patterns in world energy markets that caught my eye, taken from the just-released BP Statistical Review of World Energy 2013.

For starters, here's the long-run pattern of world oil prices. The top line is the relevant one, because the prices are adjusted to 2012 dollars. To me, the striking pattern is that real oil prices are at their all-time high since the Pennsylvania oil boom of the 1860s added enough supply to bring real oil prices down. The severity of the price shocks of the mid- and late 1970s stands out here, but the increase in oil prices since about 2000 is also striking. Even in a U.S. economy that relies more on services and information than on old-style heavy manufacturing, this price increase must be contributing to the economic sluggishness of the last few years.

What about natural gas prices? The time series here doesn't go back so far: only to 1995. What's interesting to me here is that natural gas prices around the world move more-or-less in harmony up to about 2008. But since then, natural gas prices in the U.S. have dropped and stayed down, while those in Germany, the UK, and Japan dropped in the recession but have since increased. Natural gas is not (yet?) a unified world market, because it cannot be cheaply transported in volume around the world. Thus, the recent increases in unconventional natural gas production in North America have brought down prices here, but not in the rest of the world.

What about coal? Here I'll focus on quantities, not prices. As the report notes: "Coal remained the fastest-growing fossil fuel, with China consuming half of the world’s coal for the first time – but it was also the fossil fuel that saw the weakest growth relative to its historical average. … Global coal production grew by 2%. The Asia Pacific region accounted for all of the net increase, offsetting a large decline in the US. The Asia Pacific region now accounts for more than two-thirds of global output. Coal consumption increased by a below-average 2.5%. The Asia Pacific region was also responsible for all of the net growth in global consumption. A second consecutive large decline in North America (-11.3%) more than offset growth in other regions; EU consumption grew for a third consecutive year." It appears that North America is finding ways to substitute natural gas for coal on the margin, which is clearly a "win" for the environment.

Finally, here's an image of overall global energy consumption since 1987.

As the report summarizes: "World primary energy consumption grew by a below-average 1.8% in 2012. Growth was below average in all regions except Africa. Oil remains the world’s leading fuel, accounting for 33.1% of global energy consumption, but this figure is the lowest share on record and oil has lost market share for 13 years in a row. Hydroelectric output and other renewables in power generation both reached record shares of global primary energy consumption (6.7% and 1.9%, respectively)." I would also note that while percentage gains in renewable energy sources can appear large from their very small starting point, they remain a tiny part of overall world energy consumption.

Rock Music, Technology, and the Top 1%

I'm always on the lookout for real-world examples of how technology is altering the distribution of income. Examples that have intuitive appeal for students are even better! Thus, I enjoyed on several levels Alan Krueger's recent talk at the Rock and Roll Hall of Fame, "Rock and Roll, Economics, and Rebuilding the Middle Class." Krueger uses the music industry as a microcosm for technological trends that have led to greater inequality in recent decades. I'll start here with some facts and exhibits.

Prices for concert tickets have been rising quickly. Since the early 1980s, overall price inflation is up about 150%, but the price of concert tickets is up about 400%.
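A quick way to see what those two numbers imply for the real, inflation-adjusted price of a ticket, using only the rounded figures above:

```python
# Convert the cumulative increases quoted above into a real (inflation-adjusted) change.
# "Up about 400%" means ticket prices are 5x their early-1980s level;
# "up about 150%" means the overall price level is 2.5x.
ticket_price_ratio = 1 + 4.00   # concert ticket prices, early 1980s to today
cpi_ratio = 1 + 1.50            # overall price level over the same period

real_increase = ticket_price_ratio / cpi_ratio - 1
print(f"Real increase in concert ticket prices: {real_increase:.0%}")  # about 100%
```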

 The share of concert revenue received by the top 1% of performers has more than doubled in the last 30 years or so. 

How has technology contributed to this change? Krueger explains: 

"Technological changes through the centuries have long made the music industry a super star industry. Advances over time including amplification, radio, records, 8-tracks, music videos, CDs, iPods, etc., have made it possible for the best performers to reach an ever wider audience with high fidelity. And the increasing globalization of the world economy has vastly increased the reach and notoriety of the most popular performers. They literally can be heard on a worldwide stage. But advances in technology have also had an unexpected effect. Recorded music has become cheap to replicate and distribute, and it is difficult to police unauthorized reproductions. This has cut into the revenue stream of the best performers, and caused them to raise their prices for live performances. My research suggests that this is the primary reason why concert prices have risen so much since the late 1990s. In this spirit, David Bowie once predicted that “music itself is going to become like running water or electricity,” and, that as a result, artists should “be prepared for doing a lot of touring because that’s really the only unique situation that’s going to be left.” While concerts used to be a loss leader to sell albums, today concerts are a profit center."

Krueger also points out that which bands become popular is to some extent a matter of luck, and once a band has become popular, that popularity can then be to some extent self-sustaining.

Much of the rest of the talk is given over to applying these lessons to the broader economic picture. Technology has altered many industries so that some ways of earning money have been decimated, while others have been encouraged. The share of income going to the top 1% for the U.S. economy as a whole has been rising. In many cases, who ends up in this top 1% has an element of luck in the sense that very large economic returns are a matter of fortunate timing, not just skill. Those who run the first company to invent a certain product may end up very rich, while those who were just a few weeks or months behind end up with much less. Certainly the current top executives of big companies are lucky in the sense that instead of earning 25 times as much as the pay of an average worker, as CEOs did back in the 1970s, the timing of their careers now allows them to earn 200 times the pay of an average worker.

Krueger is not a newcomer to the economics of rock 'n' roll. For one of his earlier efforts, see "Rockonomics: The Economics of Popular Music," by Marie Connolly and Alan B. Krueger. They quote the well-known noneconomist Paul Simon: “The fact of the matter is that popular music is one of the industries of the country. It’s all completely tied up with capitalism. It’s stupid to separate it.” It's available as a 2005 working paper from the Princeton University Industrial Relations Section.

Full disclosure: Alan Krueger was editor of the Journal of Economic Perspectives, and thus was my boss, from 1996-2002. 

Flexibility and Neoclassical Economics

A common complaint from some of those learning economics, and from some economists themselves, is that the formal study of economics is a straitjacket that limits analysis and constrains policy conclusions–in particular, that it leads to an overappreciation of market forces and an underappreciation of the usefulness of government interventions. This belief seems misguided to me. John Maynard Keynes, who said so many things so well, once wrote (in his introduction to the Cambridge Economic Handbooks): "[Economics] is a method rather than a doctrine, an apparatus of the mind, a technique of thinking which helps its possessor to draw correct conclusions."

Dani Rodrik spells out very nicely how neoclassical economics has proved no hindrance to his work, which has often questioned what was at the time mainstream economic wisdom, in an interview published in the April 2013 issue of the World Economics Association Newsletter. Here are a few of Rodrik's comments.

On the usefulness of the economics toolkit

"I have never thought of neoclassical economics as a hindrance to an understanding of social and economic problems. To the contrary, I think there are certain habits of mind that come with thinking about the world in mainstream economic terms that are quite useful: you need to state your ideas clearly, you need to ensure they are internally consistent, with clear assumptions and causal links, and you need to be rigorous in your use of empirical evidence. Now, this does not mean that neoclassical economics has all the answers or that it is all we need. Too often, people who work with mainstream economic tools lack the ambition to ask broad questions and the imagination to go outside the box they are used to working in. But that is true of all “normal science.” Truly great economists use neoclassical methods for leverage, to reach new heights of understanding, not to dumb down our understanding. Economists such as George Akerlof, Paul Krugman, and Joe Stiglitz are some of the names that come to mind who exemplify this tradition. Each of them has questioned conventional wisdom, but from within rather than from outside. …

"The criticism of methodological uniformity in Economics can also be taken too far. Surely, the use of mathematical and statistical techniques is not a problem per se. Such techniques simply ensure our arguments are conceptually and empirically coherent. Yes, excessive focus on these techniques, or the use of math just for its own sake, are a problem–but a problem against which there is already a counter-movement from within. In the top journals of the profession, I would say most math-heavy papers are driven by substantive questions rather than methods-driven concerns."

On the level of policy disagreement that exists among those using similar mainstream methods

"Pluralism on policy is already a reality, even within the boundaries of the existing methods, as I indicated. There are healthy debates in the profession today on the minimum wage, fiscal policy, financial regulation, and many other areas too. I think many critics of the economics profession overlook these differences, or view them as the exception rather than the rule. And there are certainly some areas, for example international trade, where economists’ views are much less diverse than public opinion in general. But economics today is not a discipline that is characterized by a whole lot of unanimity."

On the critique that many economists are narrow in their outlook

"There are powerful forces having to do with the sociology of the profession and the socialization process that tend to push economists to think alike. Most economists start graduate school not having spent much time thinking about social problems or having studied much else besides math and economics. The incentive and hierarchy systems tend to reward those with the technical skills rather than interesting questions or research agendas. An in-group versus out-group mentality develops rather early on that pits economists against other social scientists. All economists tend to imbue a set of values that tends to glorify the market and demonize public action. What probably stands out with mainstream economists is their awe of the power of markets and their belief that the market logic will eventually vanquish whatever obstacle is placed on its path."

Here's one example of Rodrik using standard economic analysis as a tool for challenging conventional wisdom–in this case, the conventional wisdom that the benefits of globalization clearly outweigh the redistributive effects.

"Take for example the relationship between the gains from trade and the distributive implications of trade. To this day, there is a tendency in the profession to overstate the first while minimizing the second. This makes globalization look a lot better: it’s all net gains and very little distributional costs. Yet look at the basic models of trade theory and comparative advantage we teach in the classroom and you can see that the net gains and the magnitudes of redistribution are directly linked in most of these models. The larger the net gains, the larger the redistribution. After all, the gains in productive efficiency derive from structural change, which is a process that inherently creates gainers (expanding sectors and the factors employed therein) and losers (contracting sectors and the factors employed therein). It is nonsensical to argue that the gains are large while the amount of redistribution is small–at least in the context of the standard models. Moreover, as trade becomes freer, the ratio of redistribution to net gains rises. Ultimately, trying to reap the last few dollars of efficiency gain comes at the “cost” of significant redistribution of income. Again, standard economics. Saying all this doesn’t necessarily make you very popular right away."

On the flexibility of economic modeling in reaching pre-desired conclusions

"I love an old quote from Carlos Diaz-Alejandro who once said something along the lines of “by now any graduate student can come up with any policy conclusion he desires by building appropriate assumptions into his model.” And that was some thirty years ago! We have plenty more models that generate unorthodox conclusions now."

For noneconomists, I guess the obvious question is: "If economics doesn't give a correct and clear answer most of the time, what good is it?" I sometimes argue that economics, at its best, is a disciplined way of thinking and arguing that makes clear where people disagree. If two economists disagree, they can unpack each other's arguments. Do they disagree in their underlying assumptions? In their model of how those assumptions fit together? In their arguments over cause and effect? In their beliefs about what data to use? In the statistical methods they use? Even when economists end up disagreeing, they should be able to pinpoint the sources of their disagreement–and thus to agree on what issues need to be further researched and resolved. From this process, provisional truths (and is there really any other kind?) do emerge.


250,000 New Permanent Federal Employees?

My perhaps old-fashioned view of government is that it exists to carry out tasks on behalf of the citizenry. Although the government needs to hire people to carry out those tasks, government employees are not a purpose of government; instead, they are a cost of carrying out government tasks. I would like the federal bureaucracy to be well-managed by tough cost-cutters, so that as high a proportion as possible of tax money can flow to program beneficiaries, infrastructure needs, and the like, not government paychecks. Thus, I get a queasy feeling from "Sizing Up the Executive Branch," a January 2013 report from the U.S. Office of Personnel Management.

The figure shows total civilian employment by the federal government in the last eight years. NSFTP, the blue bars, shows Non-Seasonal Full-Time Permanent employees. Other, shown by the red bars, is part time, seasonal, and nonpermanent employees.

Whenever I see these numbers, the sheer size of federal employment widens my eyes. About 144 million Americans are employed, and more than 1% of them are civilian employees of the federal government. While unemployment rates have been wrenchingly high for the last five years, government employment has been growing. The number of non-seasonal permanent full-time federal employees rose by about 250,000 from 2006 to 2011–a rise of about 15%–before falling back slightly in 2012.
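Here is a back-of-the-envelope check of the magnitudes in this paragraph, using only the rounded figures quoted above; the implied employment levels are my own rough inferences, not numbers from the OPM report:

```python
# Back-of-the-envelope check of the federal employment figures quoted above.
increase = 250_000   # rise in non-seasonal full-time permanent employees, 2006 to 2011
pct_rise = 0.15      # described above as a rise of about 15%

implied_2006_level = increase / pct_rise             # roughly 1.7 million
implied_2011_level = implied_2006_level + increase   # roughly 1.9 million

total_employed = 144_000_000  # total U.S. employment cited above
share = implied_2011_level / total_employed
print(f"Implied 2011 federal NSFTP employment: {implied_2011_level:,.0f}")
print(f"Share of total U.S. employment: {share:.2%}")  # a bit over 1%, consistent with the text
```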

It's plausible that the Great Recession from 2007-2009 required hiring additional government employees, but now that the end of the recession is four years behind us, will we see the federal civilian employment levels drop substantially–say, by 10% or more? Much of what the government does is manage information about programs and people. Developments in information and communications technology should make it possible to do these tasks more efficiently, if they are implemented by thoughtful managers with a cost-cutting focus. No politician publicly advocated a permanent increase of several hundred thousand federal employees, but it just happened anyway.

If you are interested in discussion of the pay of federal employees relative to their private-sector counterparts, see my February 2012 post "Government Workers: It's Not the Wages, It's the Benefits."

Note added later: Louis Johnston points out that in the notes under Table 3 of this report, it reads: "The Department of Defense, Department of Homeland Security, Department of Justice, Department of the Air Force, Department of the Army, Department of the Navy, and the Department of Veterans Affairs have each grown by more than 10,000 employees over the past eight fiscal years. Over the last eight fiscal years those seven agencies have grown by over 230,000 employees." However, remember that this report covers only civilian employees of the federal government–not armed forces. So most of the increase is the civilian national security apparatus in various forms.