The US Trade Surplus in Services

The US has been running a trade surplus in services for the last few decades, and it's getting larger. Here's a figure generated with the help of the ever-useful FRED website run by the Federal Reserve Bank of St. Louis. It shows monthly trade data. The blue line shows the monthly US trade deficit in goods. The red line shows the monthly US trade deficit in goods-plus-services. The amount by which the red line sits above the blue line is the US trade surplus in services.
 

In the 1990s and early 2000s, the US monthly trade surplus in services (that is, the amount the red line is above the blue line) was typically in the range of $8 billion per month; in the last few years, it's been more like $18 billion per month. It's likely that the predominant area of growth in US trade in the future will come from exporting services–managerial expertise, legal, financial, entertainment, technological, design, logistics–rather than physical objects. This figure shows that exports of services used to be about 27-29% of exports of goods-plus-services combined, but the share of services in total goods-and-services exports is now above 33% and rising.
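As a quick arithmetic sketch (with illustrative round numbers, not the actual FRED series), the services surplus is simply the gap between the two lines, and the services share is services exports divided by total exports:

```python
# Illustrative monthly figures in billions of dollars (not actual FRED data).
goods_deficit = 80.0               # blue line: deficit in goods trade
goods_and_services_deficit = 62.0  # red line: deficit in goods-plus-services

# The gap between the lines is the services surplus.
services_surplus = goods_deficit - goods_and_services_deficit
print(f"Services surplus: ${services_surplus:.0f} billion/month")

# Share of services in total exports, again with illustrative export levels.
goods_exports = 120.0
services_exports = 60.0
services_share = services_exports / (goods_exports + services_exports)
print(f"Services share of total exports: {services_share:.1%}")
```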

A fair number of Americans and politicians argue that a trade deficit is in large part a result of unfair trade practices by other countries. Essentially all actual economists disagree with that claim. Economists instead see trade deficits as arising from broad patterns of national production, consumption, and saving. A low-saving economy like the US consumes more than it produces–which it can do by running a trade deficit and importing more than it exports. A high-saving economy produces more than it consumes–which it can do by running a trade surplus and exporting more than it imports. Unfair trade practices can certainly restrict overall flows of trade, but they aren't a main cause of trade deficits and surpluses.

That said, I wonder how many of those who think that trade deficits are a result of unfair trade practices by other countries are willing to stick to the logic of their position when it comes to US trade surpluses in services. If trade surpluses are a sign of unfair trade practices, then doesn't the ongoing US surplus in services trade prove that the US is using unfair trade practices in services? My own sense is that too much attention gets focused on whether other countries are trading unfairly, and not nearly enough attention gets focused on how US producers can hook themselves up to the faster economic growth rates happening in the emerging economies of the world.

Still Living in a Fossil Fuel World

An enormous number of pixels are spent on renewable energy, but when one looks at actual numbers, we are still living in a fossil fuel world. Here are some illustrative charts from the most recent annual BP Statistical Review of World Energy, released June 2016.

Here's a breakdown of world energy consumption. The big slices are all fossil fuels: green is oil; red is natural gas; grey is coal. The little slices are carbon-free sources of energy: nuclear is light orange, hydroelectric is blue, and renewables are darker orange. As the report notes: "Oil remains the world’s dominant fuel and gained global market share for the first time since 1999, while coal’s market share fell to the lowest level since 2005. Renewables in power generation accounted for a record 2.8% of global primary energy consumption."

Reserves of fossil fuels are not running down. Here are figures showing the years remaining of reserves of oil, natural gas, and coal. The figures show a breakdown by region, which isn't especially relevant given that energy can be shipped between regions. Instead, focus on the gray line showing reserves at the world level. Reserves of oil and natural gas, as measured by years remaining, are not declining over time. Reserves of coal are declining, but there is more than a century of reserves remaining. It may seem obvious that reserves must decline over time, but technological change doesn't just work with renewables like solar and wind. It also finds new sources of fossil fuels and new methods of extraction.

In short, if your environmental goals involve a reduction in the use of fossil fuels over time, that reduction is unlikely to happen because the world starts running low on fossil fuels. Instead, it's more likely to require some significant policy changes to discourage the use of fossil fuels. For a more detailed version of this argument, along with a complementary argument that technological progress by itself is unlikely to drive a smooth shift over to renewable energy sources, a nice starting point is the article by Thomas Covert, Michael Greenstone, and Christopher R. Knittel, "Will We Ever Stop Using Fossil Fuels?" in the Winter 2016 issue of the Journal of Economic Perspectives. (Full disclosure: I've worked as Managing Editor of JEP since the first issue back in 1987.)

Harrod and Keynes on the Scope and Method of Economics

Back in 1938, Roy Harrod delivered the Presidential Address for the British Economic Association on the topic "Scope and Method of Economics." It was published in the September 1938 issue of the Economic Journal, which at the time was edited by John Maynard Keynes, and thus led to some friendly back-and-forth between them. I'll highlight three parts of the exchange here.

1) At the start of Harrod's talk, he readily acknowledges the concern, which I hear often among economists, that actually doing economic research is what's interesting, and methods can only be judged after their results are known, so talking about what methods should be used in future research is a waste of time. Here's Harrod on why many economists actively seek to avoid discussing methodological issues.

In my choice of subject to-day, I fear that I have exposed myself to two serious charges: that of tedium and that of presumption. Speculations upon methodology are famous for platitude and prolixity. They offer the greatest opportunity for internecine strife; the claims of the contending factions are subject to no agreed check, and a victory, even if it could be established, is thought to yield no manifest benefit to the science itself. The barrenness of methodological conclusions is often a fitting complement to the weariness entailed by the process of reaching them.

 Exposed as a bore, the methodologist cannot take refuge behind a cloak of modesty. On the contrary, he stands forward ready by his own claim to give advice to all and sundry, to criticise the work of others, which, whether valuable or not, at least attempts to be constructive; he sets himself up as the final interpreter of the past and dictator of future efforts. …

The principles by which progress in a science proceeds can only be reached by observing that progress. They cannot be deduced a priori or prescribed in advance. … And for this reason the methodologist is bound to occupy the rear, and not the vanguard. He studies the specific nature of the selected principles after the selection has been made. … The function of the methodologist is to say what it in fact is, or, more strictly, has so far been. The proper and final reply to the would-be reformer is, "Stop talking and get on with the job; apply your method, and, if it is productive, you will be able to display your results."

2) There's a widespread quick-and-dirty version of the relationship between theory and empiricism in economics, which is that one first creates theories, tests those theories with data, and then iterates with new theories and empirical tests. But in the 21st century, I'm not sure anyone really believes this. It's well-known that you can create an internally consistent theory to reach pretty much any conclusion you want, as long as you tinker with the underlying assumptions. Moreover, it's well-known that when doing empirical work, one can try out a bunch of different statistical tests until you find one that reaches the conclusion you want. To make matters worse, there's no particular reason to believe that if some particular economic theory is validated by some particular empirical estimate in one context, it will also hold true in all other times and places. These concerns show that a social science is not a natural science, but it would be a severe overreaction to hype them up into a claim that social sciences can't lead to meaningful knowledge. Much of Harrod's essay is aimed at an argument that economists can and should strive for general insights applicable in a number of settings. I'll include here a quotation from Keynes, in their correspondence, in which Keynes argues that a central task for economists is weeding through models and choosing the applicable one–and that empirical tests offer only limited help in this task.

This is part of a letter from Keynes to Harrod, upon receipt of the draft of Harrod's talk, dated July 4, 1938. It is taken from a website, "The Collected Interwar Papers and Correspondence of Roy Harrod," edited by Daniele Besomi. It's letter #787 at that website, and it's easy to click from letter to letter through the correspondence.

Progress in economics consists almost entirely in a progressive improvement in the choice of models. … But it is of the essence of a model that one does not fill in real values for the variable functions. To do so would make it useless as a model. For as soon as this is done, the model loses its generality and its value as a mode of thought. … The object of statistical study is not so much to fill in missing variables with a view to prediction, as to test the relevance and validity of the model.

Economics is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world. It is compelled to be this, because, unlike the typical natural science, the material to which it is applied is, in too many respects, not homogeneous through time. The object of a model is to segregate the semi-permanent or relatively constant factors from those which are transitory or fluctuating so as to develop a logical way of thinking about the latter, and of understanding the time sequences to which they give rise in particular cases.

Good economists are scarce because the gift for using "vigilant observation" to choose good models, although it does not require a highly specialised intellectual technique, appears to be a very rare one.

3) Finally, Harrod closes his essay by responding to an argument that economists often hear from social reformers, which is basically that economists should stop playing intellectual games and become part of the movement for needed change. Harrod's response is that there are lots of articulate advocates for social change, but what economists can bring to the party is a set of general lessons about the likely causes of social problems and the likely effects of proposed reforms. (The last part of the quotation below, after the final ellipses, actually appears earlier in the essay, but I inserted it at the end here for clarity.) Here's Harrod:

Zealous humanitarians may be impatient for quick results. All men of goodwill may see without more ado that there is much amiss with the world. Should not social students postpone their abstruse intellectual problems, of fascination mainly to themselves, and get together in a sort of academic tea-party to list our known abuses and our known resources and arrive at a programme of reform on the basis of mutual goodwill? And do they not in fact, so the critic proceeds, bury themselves in unintelligible jargon, because they fear that, if they proceeded with their more immediate duties, they would disturb vested interests, incur social odium and signally fail to feather their own nests?

The criticism misconceives the duty of the student and the true source of his power for good. It may be the case that much could be put to rights without further scientific knowledge. But the sociologist will agree that if known abuses are not redressed it is not for lack of a catalogue of them, or even for lack of men of goodwill. … [H]is experience will lead him to suspect that the equilibrium is not likely to be shattered by the breath of an academic tea-party. Nor have academic students a monopoly of goodwill or the power to express it.

Only in one way can the academic man change the shape of things, and that is by projecting new knowledge into the arena. In goodwill he may partake in greater or less degree along with more practical persons, and he is at liberty to join with them in political parties or social-welfare groups. His specific contribution is the enlargement of knowledge, and particularly of the knowledge of general laws. … To reach general laws it is usually necessary to abandon the straightforward terms of common sense, to become immersed for a time in mysterious symbols and computations, in technical and abstruse demonstrations, far removed from the common light of day, in order to emerge finally with a generalisation which may then be re-translated into the language of the workaday world.

Accessing the Financial System

When it comes to processing transactions and borrowing money, the financial system works beautifully for my family. The bank doesn't charge us for depositing checks or withdrawing money, and we have enough money in the bank to qualify for a "free" checking account. The annual fee for our credit cards is smaller than the benefits they offer in terms of airline tickets, hotel rooms, and money back, so that as long as we make the payments on time, we come out ahead each year. Getting a home mortgage or a car loan is straightforward. If I needed a serious chunk of emergency cash, I could take out a home equity loan.

But for many low-income Americans, the financial system imposes additional costs that make life harder for households that are already struggling to make ends meet. If they don't have a bank account, they pay for cashing checks and they pay for money orders. If they do have a bank account or credit cards, they often end up paying overdraft fees at the bank or late fees to the credit card company. Getting a home loan or a car loan is somewhere from difficult-and-costly to impossible. If they need short-term cash, they end up turning to pawn shops or payday loans. The Council of Economic Advisers offers some background on these issues in its June 2016 Issue Brief, "Financial Inclusion in the United States."

As a starting point, here are a couple of figures showing the trends in whether households have a "transaction account"–basically, a bank account. The share of households with a bank account is rising, but for families in the lower income percentiles, or for families with lower education levels (of course, these are often the same families), the chance of having a bank account remains substantially lower.


These families face out-of-pocket costs when dealing with the financial system. 

The unbanked pay anywhere from 1 to 5 percent in fees to cash a check (depending in part on whether it is a paycheck or a government check since the latter come with lower risk for the check-casher). At an annual salary of $22,000 (the average for unbanked households), such fees can total over $1,000 a year in extra costs for unbanked households. …

Many of the households that fall into this category have impaired credit histories and would fall into the “subprime” category. The products and services that these individuals may obtain from both bank and non-bank providers typically include money orders, check cashing, remittances, payday loans, auto title loans, and pawn shop loans (collectively known as small-dollar credit). …

Unbanked and underbanked households also face additional costs due to their reliance on other, non-mainstream financial services, such as payday loans (used by roughly 5 percent of households in 2013) and auto title loans (used by roughly 0.6 percent of households in 2013), among other forms of so-called small dollar credit. These products’ costs can be quite substantial—anywhere from $10 to $30 in extra costs per $100 borrowed in the case of payday loans. 
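A quick back-of-the-envelope check of the numbers quoted above (the two-week payday-loan term below is a typical industry figure and my own assumption, not something stated in the CEA brief):

```python
# Check-cashing fees: 1-5% of income for an unbanked household earning the
# average salary reported in the brief.
salary = 22_000            # average annual salary for unbanked households
fee_rate = 0.05            # top of the 1-5% check-cashing fee range
annual_check_fees = salary * fee_rate
print(f"Check-cashing fees at 5%: ${annual_check_fees:,.0f}/year")  # $1,100

# Payday loans: $10-$30 per $100 borrowed; annualizing a midrange fee over
# an assumed two-week term shows how steep the implied rate is.
fee_per_100 = 15           # midrange of the $10-$30 per $100 borrowed
term_weeks = 2             # assumed typical payday-loan term
periods_per_year = 52 / term_weeks
simple_apr = (fee_per_100 / 100) * periods_per_year
print(f"Implied simple annualized rate: {simple_apr:.0%}")  # 390%
```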

My main quibble with the report is its use of the currently fashionable term "financial inclusion." As the text of the report notes, the underlying issues here are more prosaic and obvious than a broad term like "inclusion" might suggest.

Despite the many benefits to financial inclusion, there are a number of barriers. One potential barrier is the upfront cost associated with opening a bank account, including fees associated with opening an account, minimum balance requirements, and gathering the necessary documentation and proof of identity, as well as the opportunity costs associated with opening an account or traveling to a bank branch to conduct business … 

There are ways to address some of these issues, like the widespread shift toward having government checks deposited directly into bank accounts, rather than delivered through the mail, which can create a tie to a bank for those who have not previously had such a connection. But most of the new financial technology firms or smartphone applications that one reads about still require overcoming these more basic hurdles first.

In my own thinking, there are two issues here that are somewhat separable. One is being able to carry out transactions like receiving checks or paying bills without needing to pay much in transactions costs for doing so. The other question, and it's a harder question, is about what happens to low-income Americans who need short-term loans. Because of low and sporadic income levels, they are not good candidates for conventional loans, and thus end up turning to the "small-dollar credit" options above. Not only do these short-term lending options have relatively high up-front costs, but if they are not repaid on time, they often impose additional costs. But what's to be done? As the report notes:

Despite the potentially adverse impacts that arise from a reliance on unconventional sources of small dollar credit, these products provide a source of funds for households who might not otherwise be able to cover such crucial but often unanticipated expenses as emergency medical treatment, funeral and burial preparations, and urgent home or automobile repairs, or sometimes even to cover regular expenses. These types of products often take the place of conventional accounts. … Moreover, some households depend on access to small-dollar credit not just for big, one-time expenditures but also for covering general, day-to-day living expenses, such as rent and utility payments. One survey found that 2 in 5 title loan users report using the borrowed funds for rent or utilities while about a quarter of borrowers used the loan for medical expenses or car repairs. 

It's easy enough to deplore how these small-dollar credit institutions operate, and how people start by taking out one loan and end up trapped in a web of ever-growing debts. (Incidentally, the public sector isn't innocent here, either, with a growing practice in recent years of imposing ever-greater fines, and then additional fines for not paying the earlier fines on time, which can weigh heavily on low-income families.) I've written here in the past about the issues with payday loans, in particular (for example, here, here and here).

But those with low incomes who are facing a sudden need for cash–emergency medical costs, or a car repair that's the difference between getting to work or not, or the possibility of having their power and water cut off, or being evicted from an apartment–may not have any especially appealing options. Before acting to limit or reduce the options that they do have, like the recent rules that seek to limit payday lending, it's useful to think about what is going to happen to some of the people who can't get that loan they need.

Aaron Klein of the Brookings Institution offers a useful primer on these issues in "Understanding non-prime borrowers and the need to regulate small dollar and 'payday' loans," a short, readable paper published online on May 19, 2016.

Some Economics of Stop and Frisk

If almost every time the police stopped and frisked someone, they found an illegal weapon or drugs, or identified a wanted criminal, then complaints about the practice would ring hollow. On the other hand, if the police almost never found evidence of a crime during a stop and frisk, then complaints about the practice would take on a sharpened urgency.

Other evidence would be nice, too. It would be nice to know on what grounds the police are making decisions to stop and frisk, and whether some reasons for stop and frisk are more likely or less likely to lead to evidence of a crime. It would be nice to know the extent to which the well-known racial differences in stop-and-frisk are related to the practice occurring more in higher poverty, higher crime areas, which also have a racial imbalance. It would be nice to have some evidence on whether stop-and-frisks are more likely to lead to evidence of a crime for whites or blacks.

Sharad Goel, Justin M. Rao, and Ravi Shroff offer some evidence and analysis on these kinds of questions in their research paper "Precinct or Prejudice? Understanding Racial Disparities in New York City’s Stop-and-Frisk Policy," which appeared earlier this year in the Annals of Applied Statistics (2016, 10:1, 365-394).

The authors have data on 2.9 million stops conducted by New York City police officers between 2008 and 2012. They write:

"Following a stop, officers complete a UF-250 stop-and-frisk form, recording various aspects of the stop, including demographic characteristics of the suspect, the time and location of the stop, the suspected crime and the rationale for the stop (e.g., whether the suspect was wearing clothing common in the commission of a crime). … After an individual is stopped, officers may conduct a frisk (i.e., a quick patdown of the person’s outer clothing) if they reasonably suspect the individual is armed and dangerous; officers may additionally conduct a search if they have probable cause of criminal activity. Frisks and searches occur in 56% and 9% of cases, respectively. An officer may decide to make an arrest (6% of instances) or issue a summons (6% of instances), all of which is recorded on the UF-250 form."

Of course, it's sensible to be skeptical about the quality of this evidence. For example, one might raise questions about how frequently or accurately these UF-250 forms are filled out. One answer to this concern is that because of past court cases, the NYPD has some explicit emphasis on filling out the forms, and filling them out accurately. Also, with data on several million forms, one should be able to learn something, even if lessons should be drawn with appropriate caution.

The researchers focus most of their discussion in this study on the 760,000 cases where the reason for the stop was suspicion of criminal possession of a weapon. This group of stops is useful to study because it's the largest single reason for such stops, and because the data shows whether a weapon was actually found or not, which focuses on a specific crime rather than jumbling all crimes together. The authors look at data mostly from 2009-2010, and calculate what factors listed on the UF-250 form–including personal characteristics of the suspect, the specific factors that the officer observed that led to the stop, and the location as determined by the police precinct–make it more or less likely that a weapon was actually found. The result is a big messy statistical calculation, which for 2009-2010 includes 301,000 stops and 7,705 different variables (the large number of variables is because they look at a bunch of potential variables both individually and in how the variables might interact with each other).

Here's the payoff: the authors can use the answers from their calculations on the first few years of the data to predict how likely any given stop-and-frisk looking for criminal possession of a weapon was to find such a weapon in the 2011-2012 data. For example, if an officer in a certain precinct stopped someone for criminal possession of a weapon because the person was acting furtively and wearing suspicious clothing at a certain time of day, what was the chance (based on data from the earlier time period) that the person actually had a weapon? If an officer in another precinct stopped someone for criminal possession of a weapon who was near a crime scene, matched a witness report, and changed direction when the officer came into view, what is the chance that that person actually had a weapon? The researchers write (citations, footnotes, and references to figures omitted for readability):

We find that in 43% of the approximately 300,000 CPW [criminal possession of a weapon] stops between 2011 and 2012, there was at most a 1% chance of finding a weapon on the suspect. We note that the recovered weapons are typically knives, with guns constituting approximately 10% of found weapons. …

In particular, consistent with past results, the overall hit rates for blacks and Hispanics (2.5% and 3.6%, resp.) are considerably lower than for whites (11%). In other words, these results indicate that when blacks and Hispanics are stopped, it is typically on the basis of less evidence than when white suspects are stopped. Moreover, while 49% of blacks stopped under suspicion of CPW [criminal  possession of a weapon] have less than a 1% chance of in fact possessing a weapon, the corresponding fraction for Hispanics is 34%, and is just 19% for stopped whites. Thus, if we equate reasonable suspicion with a particular probability threshold (say 1%), a far greater fraction of stops of blacks and Hispanics are unwarranted than are stops of whites. … 

However, … whites and minorities are typically stopped in different contexts, and so differing hit rates may not be the result of racial bias. Indeed, as we discuss below, stop-and-frisk is an extremely localized tactic, heavily concentrated in high-crime, predominantly black and Hispanic areas, and so lower tolerance for suspicious activity (and hence lower hit rates) in these areas could account for the racial disparity. …
[T]here is an almost one-to-one correspondence between [high-crime areas and] areas with heavy use of stop-and-frisk. While this is a natural and possibly effective policing strategy, a consequence of the tactic is that individuals who live in high-crime areas, but who are not themselves engaged in criminal activity, bear the costs associated with being stopped. … [T]hese high-crime areas are overwhelmingly black and Hispanic. Accordingly, the cost of stop-and-frisk is largely shouldered by minorities. … [W]e see that the racial composition of stopped individuals is similar to the racial composition of the neighborhoods in which stop-and-frisk is heavily employed. Thus, the striking racial composition of stopped CPW [criminal possession of a weapon] suspects (61% are black, 30% are Hispanic and 4% are white) appears at least qualitatively attributable to selective use of stop-and-frisk in minority-heavy areas …

In a more detailed analysis of the data, they find that the differing use of stop-and-frisk across neighborhoods accounts for part of the gap by which blacks and Hispanics are stopped and frisked more than whites, but not for all of it.

Another intriguing aspect of the study is that it can answer the question of which reasons–and remember, these are the reasons given by the police themselves–are more likely to uncover a concealed weapon. The UF-250 report lists 18 specific "stop circumstances" (there's also a category for "other," which they ignore). The 18 circumstances are: suspicious object, fits description, casing, acting as lookout, suspicious clothing, drug transaction, furtive movements, actions of violent crime, suspicious bulge, witness report, ongoing investigation, proximity to crime scene, evasive response, associating with criminals, changed direction, high crime area, time of day, sights and sounds of criminal activity.

The question is whether some of these are more likely, as revealed by the actual evidence, to lead to the discovery of a concealed weapon than others. The authors look at these 18 factors together with each of the 77 police precincts and also whether the stop-and-frisk happened at a public housing location, a transit stop, or elsewhere. Notice that this is not an exercise in 20/20 hindsight: instead, it's looking at the circumstances that police actually reported seeing at the time, and then seeing what worked. Basically, they find that three of the circumstances were good predictors of a criminal concealed weapon: suspicious object, sights and sounds of criminal activity, suspicious bulge. The other 15 circumstances were either barely connected to finding a concealed weapon, or not connected at all.
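The underlying comparison is straightforward to sketch: group stops by the circumstance cited and compute the share in which a weapon was actually found. Here is a minimal illustration with a handful of invented stop records (the circumstance names come from the UF-250 list above; the counts and outcomes are made up):

```python
from collections import defaultdict

# (circumstance cited, weapon found) for a few hypothetical stops.
stops = [
    ("suspicious bulge", True), ("suspicious bulge", False),
    ("furtive movements", False), ("furtive movements", False),
    ("furtive movements", False), ("suspicious object", True),
    ("suspicious object", False), ("high crime area", False),
]

counts = defaultdict(lambda: [0, 0])  # circumstance -> [stops, weapons found]
for circumstance, found in stops:
    counts[circumstance][0] += 1
    counts[circumstance][1] += int(found)

# Hit rate per circumstance: the share of stops that actually found a weapon.
for circumstance, (n, hits) in sorted(counts.items()):
    print(f"{circumstance}: hit rate {hits / n:.0%} ({hits}/{n})")
```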

An obvious policy choice suggests itself here. The NYPD are often using stop-and-frisk to look for weapons based on the police observing certain circumstances that are quite unlikely to be associated with a concealed weapon. If the NYPD no longer stopped people on suspicion of criminal possession of a weapon based on furtive movements, acting as a lookout, changing direction, and many of the other reasons given, it could focus on the circumstances that are more likely to actually end up finding a concealed weapon. The authors write:

In particular, we show that one can recover 50% of weapons by conducting only the 6% of CPW [criminal possession of a weapon] stops with the highest ex ante hit rate, and 90% of weapons by conducting 58% of CPW stops. These ex ante hit rates are based only on information observable to officers prior to the stop decision, and so it is at least in theory possible to implement such a strategy. Further, since low hit rate stops disproportionately involve blacks and Hispanics, optimizing for weapons recovery would simultaneously bring more racial balance to stop-and-frisk. To facilitate adoption of such strategies by police departments, we develop stop heuristics that approximate our full statistical model via a simple scoring rule. Specifically, we show that with a rule consisting of only three weighted stop criteria, one can recover the majority of weapons by conducting 8% of stops.
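The ranking logic behind numbers like "50% of weapons from 6% of stops" can be sketched as follows: score each stop ex ante, sort by predicted hit rate, and count how far down the list you must go to recover a target share of weapons. This is a toy version with synthetic probabilities, not the authors' fitted model:

```python
import random

random.seed(0)
# (predicted hit probability, weapon actually found) for synthetic stops.
# Cubing a uniform draw skews most stops toward very low hit probabilities,
# loosely mimicking the real data; the outcome is drawn with that probability.
stops = [(p := random.random() ** 3, random.random() < p) for _ in range(10_000)]

# Rank stops from highest to lowest predicted hit rate.
ranked = sorted(stops, key=lambda s: s[0], reverse=True)
total_weapons = sum(found for _, found in ranked)

# Walk down the ranking until half the weapons have been recovered.
recovered = 0
for i, (_, found) in enumerate(ranked, start=1):
    recovered += found
    if recovered >= 0.5 * total_weapons:
        print(f"50% of weapons recovered in the top {i / len(ranked):.0%} of stops")
        break
```

Because the weapons are concentrated among the high-probability stops, the loop stops well before half the list, which is the whole point of the scoring-rule approach.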

An obvious concern about these results is that perhaps stopping and frisking people for criminal possession of a weapon is just an excuse, but it's nonetheless a useful excuse for reducing the rate of crime. The authors discuss the point this way:

A possible objection to our approach is that even for CPW [criminal possession of a weapon] stops, recovering weapons is not the only—or perhaps not even the primary—goal of the police. Officers, for example, may simply consider stops a way to advertise their presence in the neighborhood or a means to collect intelligence on criminal activity in the area, regardless of how many weapons are directly recovered. Stops conducted for these alternative motives could quite plausibly deter individuals from carrying weapons and might lead to information helpful in solving cases, both of which presumably would lower the incidence of violent crime over time. In the instances we consider, however, the explicitly stated reason for a stop is suspicion of criminal possession of a weapon, not one of the various other reasons that may or may not withstand legal or public scrutiny, and so it seems most natural to consider whether individuals were in fact likely to be carrying weapons. Moreover, as we have previously noted, simply because a strategy may be effective does not make it legal. … A related worry is that “criminal possession of a weapon” is a catchall category for a variety of criminal offenses, and so by focusing on whether a weapon was found, we underestimate the value of a stop. Addressing this issue, we observe that our results are qualitatively similar if we instead use arrests [for any reason] as the outcome variable, mitigating cause for concern.

The authors instead draw the conclusion that stop-and-frisk can be a useful tool for police work, but when it comes to criminal possession of a weapon, it’s a tool that’s being overused. They conclude:

By focusing on the relatively small number of high hit rate situations—situations that can be reliably identified via statistical analysis—one may be able to retain many of the benefits of stop-and-frisk for crime prevention while mitigating constitutional violations. This observation has the potential to not only improve New York City’s stop-and-frisk program, but could also aid similar policies throughout the country.

The Big Question: Has Robust Growth Deserted Us?

Aaron Steelman and John A. Weinberg provide a useful overview of the biggest long-term economic question facing the US economy–and arguably the economies of the other high-income countries of the world as well–in their essay “A ‘New Normal’? The Prospects for Long-term Growth in the United States.” The essay appears in the just-released 2015 Annual Report of the Federal Reserve Bank of Richmond.

The essay has a nice readable overview of how economists have thought about the fundamental determinants of growth from the work of Robert Solow up through modern economists like Paul Romer, Charles Jones, and others. Here, I’ll focus instead on how some prominent thinkers have phrased the current challenges of long-run growth. Here’s the stage-setter:

“To measure improvement in average standards of living, growth of GDP per capita is the standard yardstick. The post-war average of 3.4 percent overall growth translated to an average growth rate per capita of about 2.1 percent. … Since 2010, the U.S. economy has grown at a rate of roughly 2.1 percent annually, which translates to an average growth rate per capita of about 1.3 percent, both well below the post-World War II rates prior to the Great Recession and, perhaps more notably, far below what has been seen in “catch-up” periods following previous significant downturns.”
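
The per capita figures in the quotation follow from the standard approximation that per capita growth is roughly overall GDP growth minus population growth. A quick check, backing out the implied population growth rates from the quoted numbers:

```python
# Per capita growth ~= overall GDP growth - population growth (log approximation).
postwar_gdp, postwar_percap = 3.4, 2.1   # percent per year, from the quotation
recent_gdp, recent_percap = 2.1, 1.3     # percent per year, since 2010

implied_pop_growth_postwar = postwar_gdp - postwar_percap  # ~1.3% per year
implied_pop_growth_recent = recent_gdp - recent_percap     # ~0.8% per year
print(implied_pop_growth_postwar, implied_pop_growth_recent)
```

Slower population growth thus accounts for part of the gap between the two periods, but the per capita slowdown (2.1% down to 1.3%) remains even after the population adjustment.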

What’s the case for pessimism that the US is going to experience a long-term slowdown of long-run growth? The essay brings out the themes by discussing the work of two economists: Tyler Cowen and Robert Gordon.

“As Tyler Cowen, an economist at George Mason University, put it in his 2011 book The Great Stagnation, the United States has “built social and economic institutions on the expectation of a lot of low-hanging fruit, but that fruit is mostly gone” and has been since roughly the early 1970s. In particular, he identifies three types of increasingly scarce “fruit”: free land, technological breakthroughs, and smart but relatively uneducated kids. Regarding the first, until the beginning of the 20th century, free and fertile American land was plentiful and not only “did the United States reap a huge bounty from the free land (often stolen from Native Americans, one should not forget), but abundant resources helped the United States attract many of the brightest and most ambitious workers from Europe,” Cowen writes. “Taking in these workers, and letting them cultivate the land, was like plucking low-hanging fruit.” Second, Cowen also sees technological innovation, and especially breakthroughs, as slowing. “Life is better and we have more stuff, but the pace of change has slowed down compared to what people saw two or three generations ago.” Third, in 1900, a very small percentage of Americans graduated from high school, while estimates of high school completion today range from roughly 75 percent to 90 percent. …

“In addition to a slowing rate of innovation, [Robert] Gordon, as noted before, argues that the U.S. economy faces four big headwinds. First, there’s rising income inequality, which has reduced the share of economic gains going to the middle and working classes and with it their disposable income and purchasing power. Second, growth in educational attainment as measured by years of schooling completed has slowed and, among some parts of the population, decreased since 1970. In addition, the quality of primary and secondary education has become more stratified and the costs of higher education have increased. Such trends in education are themselves a contributor to the first headwind, growing income inequality. Third, the United States is experiencing significant demographic changes, most significantly that many baby boomers are reaching traditional retirement age. That has reduced the number of hours worked per person. In addition, labor force participation among people who have not yet reached retirement age has dropped. Fourth, federal, state, and local governments face mounting debt, in large measure due to the aging of the population, as spending on “entitlement” programs such as Social Security and Medicare increases and pension obligations to public-sector employees grow. Gordon identifies two additional headwinds, which he thinks could be barriers to growth, though they are hard to quantify: “globalization,” which could add to growing income inequality, and global warming and other environmental issues, which could require significant resources to address.”

That’s a formidable list of concerns. But it’s worth remembering that the term “secular stagnation” was first used by economists back in the 1930s, when the main concerns included a lack of investment stemming from–at that time–a perceived lack of invention, a lack of new resources and land, and slow population growth. Nevertheless, relatively faster growth arrived in the decades that followed. After all, it’s notoriously difficult to predict whether and when new technologies will arrive, or how they will be commercialized. The essay notes:

[E]conomic historian Joel Mokyr … argues that there are many areas of science in which significant discoveries seem promising, among them molecular microbiology, astronomy, nanochemistry, and genetic engineering. And while it is true that there is no automatic mechanism that turns better science into improved technology, “there is one reason to believe that in the near future it will do so better and more efficiently than ever before. The reason is access.” Meaning, searching for vast amounts of information has become fast, easy, and nearly costless for researchers. Not only is the era of “Big Data” here but the ability to parse through the most arcane of data is no longer burdensome for people working on the frontiers of knowledge. On the question of whether all the low-hanging fruit has been picked, Mokyr argues that the analogy is flawed. As he puts it, science “builds taller and taller ladders, so we can reach the upper branches, and then the branches above them.” In other words, when a technological solution for a problem is found it often creates a new problem, which creates a new problem, and so on. “Each solution perturbs some other component in the system and sows the seed of more needs; the ‘demand’ for new technology is thus self-sustaining.”

Some of the issues that may be leading to slower growth are hard to move with policy tools, like lower birth rates and an aging population. But other factors affecting growth might be shifted. When I look at the state of US K-12 education, and higher education for that matter, it seems to me that the United States has substantial possibilities for gains in cognitive and noncognitive skills. The US could use a dramatic rise in research and development spending. Infrastructure investments could have substantial payoffs, although I’ve argued on this blog that while fixing roads and bridges is fine, US economic growth in the 21st century is more likely to rely on information-related and energy infrastructures, along with planes and trains and ships, rather than just pavement and asphalt. The US economy has seen a decline over several decades in the rate of new-business startups, which in turn hinders job creation.

The optimistic case for at least a modest resurgence of long-run growth often starts by pointing out that when a major financial crisis and a Great Recession hit at the same time, the recovery is likely to be slow, because so many firms and households need to work down the overload of debt they had previously built up. It then points to the possibilities of new technologies, intermixing and spreading in an interconnected global economy. And the optimistic view hopes for a supportive policy environment along the way. For example, this is more-or-less the position taken by Fed chair Janet Yellen in a speech earlier this week:

There is some evidence that the deep recession had a long-lasting effect in depressing investment, research and development spending, and the start-up of new firms, and that these factors have, in turn, lowered productivity growth. With time, I expect this effect to ease in a stronger economy. I also see no obvious slowdown in the pace or the potential benefits of innovation in America, which likewise may bear fruit more readily in a stronger economy. In the meantime, it would be helpful to adopt public policies designed to boost productivity. Strengthening education and promoting innovation and investment, public and private, will support longer-term growth in productivity and greater gains in living standards.

One other angle on the question of the growth slowdown involves what is being measured: specifically, are many people experiencing gains in their standard of living that aren’t well-measured in the economic statistics? Steelman and Weinberg quote Angus Deaton on this point:

I…challenge the proposition that the information revolution and its associated devices do little for human well-being. Many have documented the importance of spending time and socializing with friends and family, but this is exactly the feature of everyday life that the new communication methods work to enhance. All of us can remain in touch with our children and friends throughout every day, videoconferencing is essentially free, and we can cultivate close relationships with people who live thousands of miles away. When my parents said good-bye to relatives and friends who left Scotland to look for better lives in Canada and Australia, they never expected to see or talk to them again, except perhaps for a brief and astronomically expensive phone call when someone died. Today, we often do not even know where people are physically located when we work with them, talk to them, or play with them. We can also enjoy the great human achievements of the past and the present, cheaply accessing literature, music, and movies at any time and in any place. That these joys are not captured in growth statistics tells us about the growth statistics, not about the technology. 

My guess is that economic statistics have often understated the actual gains to individuals in the past, as well. The widespread presence of television, telephone, radio, and even the humble photograph and book allowed people to be in touch with others and with what Deaton calls “great human achievements of the past and present” in new ways, too. But the new information technologies have certainly ramped up these possibilities to a whole new level.

A final broad issue underlying economic growth is whether we wish to be a society that expects change, embraces change, and is designed to facilitate the dislocations of persistent change–or not.

When Finance Becomes Self-Referential

The Bank for International Settlements has just published a group of working papers based on its annual conference held in June 2015. John Kay delivered the keynote address on the subject “Finance is just another industry,” which appears as part of “Towards a ‘new normal’ in financial markets?”, which includes the speech by John Kay and essays by Jaime Caruana and Paul Tucker, BIS Papers No 84, May 2016.

Kay’s keynote address is lively to read and full of vivid examples and metaphors. For me, a main theme is that what most of us think of as “finance” can be divided into two parts: the part that directly helps actual people and firms and governments operate in the real world, and the part where the financial sector becomes self-referential and starts to interact largely with itself. Of course, this distinction is more like a spectrum, where one category shades into the other, rather than a black-and-white binary distinction. But the distinction is useful nonetheless. Here are some thoughts from Kay:

On the contributions that finance makes to society.

Finance can contribute to society and the economy in four principal ways. First, the payments system is the means by which we receive wages and salaries, and buy the goods and services we need; the same payments system enables business to contribute to these purposes. Second, finance matches lenders with borrowers, helping to direct savings to their most effective uses. Third, finance enables us to manage our personal finances across our lifetimes and between generations. Fourth, finance helps both individuals and businesses to manage the risks inevitably associated with everyday life and economic activity. These four functions – the payments system, the matching of borrowers and lenders, the management of our household financial affairs, and the control of risk – are the services which finance does, or at least can, provide. The utility of financial innovation is measured by the degree to which it advances the goals of making payments, allocating capital, managing personal finances and handling risk. Most people who work in finance are concerned with the first two of these functions. They operate the payments system, they help households with their personal finances. They are not aspiring Masters of the Universe. Mostly, they earn modest salaries. Half of the employees of Barclays Bank earn less than £25,000 ($40,000) per year. But Barclays also employs 530 “code staff” – people with executive functions – who earn an average of £1.3m each, and there are 1443 who earn more than £500,000 ($800,000). It is likely that “the one per cent” in Barclays Bank earn a total approaching half of the total wage and salary bill of the bank. Most of these people are employed in wholesale rather than retail finance. Their activities relate mainly to the other objectives of the financial system – capital allocation and risk management.

Although one of the functions of the financial sector is that it “matches lenders with borrowers,” Kay points out that a substantial part of investment–whether it’s a physical capital investment or an investment in research and development and firm-specific skills for employees–is financed internally by firms out of their profits, not as a result of financial-sector intermediation.

ExxonMobil is both the most profitable company in the United States and the biggest private investor. Massive expenditure on exploration and development and on infrastructure is necessary every year to exploit new energy resources and bring oil products to market. In 2013, ExxonMobil invested $20 billion. That figure was in itself a significant fraction of total investment by US corporations. Exxon got all of that money from its own internal resources. In 2013, ExxonMobil spent $16 billion buying back its own shares, in addition to the $11 billion the company paid in dividends to shareholders. The company’s short- and long-term debt levels were virtually unchanged. It raised no net new capital at all. Nor was 2013 an exceptional year. Over the five years up to and including 2013, the activities of the corporation generated almost $250 billion in cash, around twice the amount it invested. ExxonMobil did not raise any new capital in these five years either. Instead the company spent around $100 billion buying back securities it had previously issued. Oil exploration, production and distribution are capital-intensive. Many modern companies need very little capital. The stock market capitalisation of Apple – the total market value of the company’s shares – is over $500 billion. Although the corporation has large cash balances – currently around $150 billion – it has few other tangible assets. Manufacturing is subcontracted. Apple is building a new headquarters building in Cupertino at an estimated cost of $5 billion which will be its principal physical asset. The corporation currently occupies a variety of properties in that town, some of them owned, others leased. The flagship UK store on London’s Regent Street is jointly owned by the Queen and the Norwegian sovereign wealth fund. Operating assets therefore represent only around 3% of the estimated value of Apple’s business. 

At some point, Kay argues, financial markets can become self-referential. For example,

We need a finance sector to manage our payments, finance our housing stock, restore our infrastructure, fund our retirement and support new business. But very little of the expertise that exists in the finance industry today relates to the facilitation of payments, the provision of housing, the management of large construction projects, the needs of the elderly or the nurturing of small businesses. The process of financial intermediation has become an end in itself. The expertise that is valued is understanding of the activities of other financial intermediaries. That expertise is devoted not to the creation of new assets, but to the rearrangement of those that already exist. High salaries and bonuses are awarded not for fine appreciation of the needs of users of financial services, but for outwitting competing market participants. In the most extreme manifestation of a sector which has lost sight of its purposes, some of the finest mathematical and scientific minds on the planet are employed to devise algorithms for computerised trading in securities which exploit the weaknesses of other algorithms for computerised trading in securities. …

Nothing illustrates the self-referential nature of the dialogue in modern financial markets more clearly than this constant repetition of the mantra of liquidity. End users of finance – households, non-financial businesses, governments – do have a requirement for liquidity, which is why they hold deposits and seek overdraft or credit card facilities and, as described above, why it is essential that the banking system is consistently able to meet their needs. But these end users – households, non-financial businesses, governments – have very modest requirements for liquidity from securities markets. Households do need to be able to realise their investments to deal with emergencies or to fund their retirement, businesses will sometimes need to make large, lumpy investments, governments must be able to refinance their maturing debt. But these needs could be met in almost all cases if markets opened once a week – perhaps once a year – for small volumes of trade. … The need for extreme liquidity, the capacity to trade in volume (or at least trade) every millisecond, is not a need transmitted to markets from the demands of the final users of these markets, but a need, or a perceived need, created by financial market participants themselves. People who applaud traders for providing liquidity to markets are often saying little more than that trading facilitates trading – an observation which is true, but of very little general interest.

The question becomes how to ensure that the financial system is resilient and robust, so that when the masters of finance are chasing their own tails, the rest of the economy isn’t upset. In any complex system, trying to ensure that no component will ever fail is a foolish goal. Failures are going to happen; in the financial system, banks and other financial institutions will sometimes fail. The goal needs to be to create a system that is resilient when such failures occur. Here’s Kay:

The organisational sociologist Charles Perrow has studied the robustness and resilience of engineering systems in different contexts, such as nuclear power stations and marine accidents. Robustness and resilience require that individual components of the system are designed to high standards. Demands for higher levels of capital and liquidity are intended to strengthen the component units of the financial system. But the levels of capital and liquidity envisaged are inadequate – laughably inadequate – relative to the scale of resources required to protect financial institutions against panics such as the global financial crisis. More significantly, resilience of individual components is not always necessary, and never sufficient, to achieve system stability. Failures in complex systems are inevitable, and no one can ever be confident of anticipating the full variety of interactions which will be involved. Engineers responsible for interactively complex systems have learnt that stability requires conscious and systematic simplification, modularity which enables failures to be contained, and redundancy which allows failed elements to be bypassed. None of these features – simplification, modularity, redundancy – were characteristic of the financial system as it had developed in 2008. On the contrary. Financialisation had greatly increased complexity, interaction and interdependence. Redundancy – as, for example, in holding capital above the regulatory minimum – was everywhere regarded as an indicator of inefficiency, not of resilience. 

BIS has also put the other papers delivered at its annual conference last summer online. In particular, the paper by Andrew Lo, “Moore’s Law vs. Murphy’s Law in the financial system: who’s winning?”, dovetails nicely with some of the themes raised by Kay, and digs into some of the same issues. From the abstract of Lo’s paper:

“Breakthroughs in computing hardware, software, telecommunications and data analytics have transformed the financial industry, enabling a host of new products and services such as automated trading algorithms, crypto-currencies, mobile banking, crowdfunding and robo-advisors. However, the unintended consequences of technology-leveraged finance include fire sales, flash crashes, botched initial public offerings, cybersecurity breaches, catastrophic algorithmic trading errors and a technological arms race that has created new winners, losers and systemic risk in the financial ecosystem. These challenges are an unavoidable aspect of the growing importance of finance in an increasingly digital society. Rather than fighting this trend or forswearing technology, the ultimate solution is to develop more robust technology capable of adapting to the foibles in human behaviour so users can employ these tools safely, effectively and effortlessly.”

Here are the rest of the links:

“Mobile collateral versus immobile collateral,” BIS Working Papers No 561
by Gary Gorton and Tyler Muir
Comments by Randall S Kroszner and Andrei Kirilenko

“Expectations and investment,” BIS Working Papers No 562
by Nicola Gennaioli, Yueran Ma and Andrei Shleifer
Comments by Philipp Hildebrand

“Who supplies liquidity, how and when?” BIS Working Papers No 563
by Bruno Biais, Fany Declerck and Sophie Moinas
Comments by Arminio Fraga and Francesco Papadia

“Moore’s Law vs. Murphy’s Law in the financial system: who’s winning?” BIS Working Papers No 564
by Andrew W Lo
Comments by Darrell Duffie and a written contribution by Benoît Coeuré

Update on Remittances

Remittances refer to money that emigrants send back to their country of origin, and in the context of global capital flows, they are a big deal. The report Migration and Remittances: Recent Developments and Outlook, published by the Global Knowledge Partnership on Migration and Development (KNOMAD) in April 2016, offers an overview of trends and patterns. (For the record, KNOMAD is funded by the governments of Germany, Sweden, and Switzerland, and administered by the World Bank.)

Consider four of the ways in which capital can move to the economies of developing countries: 1) foreign direct investment (that is, taking an ownership share in the investment); 2) private debt and equity; 3) official development assistance; and 4) remittances. Back in the mid-1990s, these four were all very roughly the same size, or at least within shouting distance of each other. But while official development assistance has risen only a bit, the other three categories have all grown dramatically, as shown in the figure.

Remittances are about the same size as flows of private debt and equity. And while remittances haven’t grown as fast as foreign direct investment, they have risen much more steadily, without the peaks and valleys. Remittances aren’t driven by the fluctuations and trend-chasing that affect foreign direct investment and private investments in debt and equity. But remittances are affected by broad economic changes: for example, the fall in the price of oil means that remittances from Russia dropped, while the improvement in the US economy in the last few years has helped to raise remittances moving from the US to Latin America.

One reason for the rise in remittances is simply that the total number of people who have migrated to a different country has risen–although the rise in the total global number of migrants of about 50% in the last 20 years doesn’t come close to explaining the rise in remittances over that time.
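
The decomposition implicit in this point: total remittances equal the number of migrants times the average remittance per migrant, so growth factors multiply. A sketch of the arithmetic; the ~50% migrant growth and the $433 billion 2015 total come from the report, but the earlier remittance total below is a hypothetical round number for illustration only:

```python
# Decompose remittance growth into migrant-count growth and per-migrant growth.
# Total = (number of migrants) x (avg remittance per migrant), so the growth
# factors multiply: total_factor = migrant_factor * per_migrant_factor.
migrant_factor = 1.5       # ~50% rise in global migrants over 20 years (report)
remittances_2015 = 433e9   # USD, from the report
remittances_1995 = 100e9   # hypothetical earlier total, illustration only

total_factor = remittances_2015 / remittances_1995   # ~4.3x
per_migrant_factor = total_factor / migrant_factor   # ~2.9x
print(f"remittances per migrant grew ~{per_migrant_factor:.1f}x")
```

Under these illustrative numbers, most of the growth in total remittances would come from each migrant sending more, not from there being more migrants.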

Another change is that it has become cheaper and easier over time for a worker to send funds back to his or her country of origin. The decline in the cost of sending remittances in the last few years is mild, but real. However, I suspect that if comparable statistics were available for the 1990s, they would show that the task of sending funds home was considerably more complex and costly then.

Remittances don’t get as much attention as they deserve. We’re often focused on the international capital flows connected with governments, or with big global banks and businesses. But while remittances are very large overall, at $433 billion in 2015, they mainly consist of relatively small-scale payments happening between family and community networks. In many cases, remittances provide the cash for ordinary families to start a small business or pay for extra education. The report points out that when a natural disaster strikes a country, remittances to that country rise substantially as people around the world help out.

For a more detailed overview of this topic, I recommend Dean Yang’s article on “Migrant Remittances” in the Spring 2011 issue of the Journal of Economic Perspectives as a starting point. (Full disclosure: I’ve labored in the fields as Managing Editor of JEP since the first issue in 1987.)

The Collective Action Problem of Resistance to Antibiotics

“We estimate that by 2050, 10 million lives a year and a cumulative 100 trillion USD of economic output are at risk due to the rise of drug-resistant infections if we do not find proactive solutions now to slow down the rise of drug resistance. Even today, 700,000 people die of resistant infections every year. Antibiotics are a special category of antimicrobial drugs that underpin modern medicine as we know it: if they lose their effectiveness, key medical procedures (such as gut surgery, caesarean sections, joint replacements, and treatments that depress the immune system, such as chemotherapy for cancer) could become too dangerous to perform. Most of the direct and much of the indirect impact of AMR [anti-microbial resistance] will fall on low and middle‑income countries. It does not have to be this way. … The economic impact is also already material. In the US alone, more than two million infections a year are caused by bacteria that are resistant to at least first-line antibiotic treatments, costing the US health system 20 billion USD in excess costs each year.”

This is from the report “Tackling Drug-Resistant Infections Globally: Final Report and Recommendations,” from the Review on Antimicrobial Resistance that was set up by the UK government, funded by the Wellcome Trust and the UK Department of Health, and chaired by Jim O’Neill (who was chief economist at Goldman Sachs for many years and is known as the originator of the acronym “BRICs” to refer to the emerging economies of Brazil, Russia, India, and China). Background reports and supporting documentation are available here.

For economists, antibiotic resistance falls into the analytical category of collective action problems: situations where economic actors in pursuit of private gain have no incentive to take a social cost into account. Problems of air and water pollution can fall into this category. In the case of antibiotics, they clearly help many sick people, and can help livestock gain weight, too. But those using antibiotics for private gain have no incentive to take into account that when antibiotics are commonly used, resistance to them evolves in a way that can make them less effective, or just plain ineffective. The report (on p. 16) includes a discussion of the issue of antibiotic resistance in the terms economists prefer to use: externalities, imperfect information, and public goods.

The policies to address this issue are conceptually straightforward. Over the longer term, provide incentives for companies to do research and development that can lead to new antibiotics (as well as other methods of fighting bacterial infections). Given that many existing antibiotics are off-patent and available in cheap generic versions, and also given that doctors might prefer to hold fancy new antibiotics in reserve unless or until the current versions don’t work, trying to create new antibiotics may not look like a very encouraging market to pursue without some additional policy steps. But in the shorter term, the policies need to be about reducing the overly casual use of antibiotics. If antibiotics are used only when really needed, then the problem of antibiotic resistance can be mitigated. Here are a few of the steps along these lines that jumped out at me from the report.

1) In the past, many doctors have prescribed antibiotics on the “it can’t hurt” philosophy, and while antibiotics are unlikely to hurt that particular patient, the broader social problem of antibiotic resistance can indeed hurt. Thus, one set of policies would encourage doctors to prescribe antibiotics only when really needed. “One study showed that in Belgium, campaigns to reduce antibiotic use during the winter flu season resulted in a 36 percent reduction in prescriptions. Over 16 years, the cumulative savings in drug costs alone amounted to around 130 Euros (150 USD) per Euro spent on the campaign.” The key point here is that antibiotics only work against bacterial infections, and do nothing at all against viruses. The report points out that diarrhoeal illness kills about 1.1 million people per year in low and middle-income countries. However, about “70 percent of episodes of diarrhoeal illness are caused by viral, rather than bacterial infections, against which antibiotics are ineffective – and yet antibiotics will frequently be used as a treatment.”

Perhaps the most important development here would be rapid diagnostic tools, so that doctors could tell more or less in real time–or at least within a few hours–whether an infection is bacterial and which specific bacterium is involved. This technology would mean that antibiotics could be used much less and targeted much better. As the report notes, that is not what happens now.

“When doctors and other medical professionals decide whether to prescribe an antibiotic, they usually use so-called ‘empirical’ diagnosis: they will use their expertise, intuition and professional judgement to ‘guess’ whether an infection is present and what is likely to be causing it, and thus the most appropriate treatment. In some instances, diagnostic tools are used later to confirm or change that prescription. This process has remained basically unchanged in decades: most of these tests are lab-based, and would look familiar to a doctor trained in the 1950s, using processes that originated in the 1860s. Bacteria must be cultured for 36 hours or more to confirm the type of infection and the drugs to which it is susceptible. An acutely ill patient cannot wait this long for treatment, and even when the health risks are not that high, most doctors’ surgeries and pharmacies are under time, patient and financial pressure, and must address patients’ needs much faster.”

2) Take public health measures to avoid people getting sick in the first place, so that antibiotics are less needed for that reason. Especially in developing countries, major steps to reduce disease include better sanitation and clean water, along with vaccination campaigns. In developed countries, a main focus should be to reduce infections that arise in health care settings: “Across developed countries, between seven and 10 percent of all hospital inpatients will contract some form of healthcare-associated infection (HCAI), a figure that rises to one patient in three in intensive care units (ICUs). These levels of incidence are even higher in low and middle-income settings, where healthcare facilities can face extreme constraints, sometimes as fundamental as access to running water for cleaning and handwashing.”

3) Dramatically reduce the use of antibiotics in agriculture, where they are often used not just to treat animals that are sick, but as a sort of all-purpose aid to keep animals from falling sick and to help them gain weight. These antibiotics often work their way into the environment–say, through the disposal of animal waste products–and thus spur bacteria to become resistant. “The quantity of antibiotics used in livestock is vast, and often includes those medicines that are important for humans. In the US, for example, of the antibiotics defined as medically important for humans by the FDA, over 70 percent of the total volume used (by weight) are sold for use in animals. Many other countries are also likely to use more antibiotics in agriculture than in humans but they do not even hold or publish the information.”

Moving ahead with these kinds of policy steps should be an urgent priority. A family of bacteria resistant to the antibiotics usually saved for a last resort has recently been found in a US patient. (The scientific article on this discovery in the journal Antimicrobial Agents and Chemotherapy is available here.)

For two previous posts on antibiotic resistance, see:

Homage: Like many others, I suspect, I ran across this particular report on antibiotic resistance because of a cover story in the Economist magazine of May 21, 2016: the Economist leader is here; the more detailed article here.