Accessing the Financial System

When it comes to processing transactions and borrowing money, the financial system works beautifully for my family. The bank doesn’t charge us for depositing checks or withdrawing money, and we have enough money in the bank to qualify for a “free” checking account. The annual fee for our credit cards is smaller than the benefits they offer in terms of airline tickets, hotel rooms, and money back, so that as long as we make the payments on time, we come out ahead each year. Getting a home mortgage or a car loan is straightforward. If I needed a serious chunk of emergency cash, I could take out a home equity loan.

But for many low-income Americans, the financial system imposes additional costs that make it harder for households that are already struggling to make ends meet. If they don’t have a bank account, they pay for cashing checks and for money orders. If they do have a bank account or credit cards, they often end up paying overdraft fees at the bank or late fees to the credit card company. Getting a home loan or a car loan ranges from difficult-and-costly to impossible. If they need short-term cash, they end up turning to pawn shops and payday loans. The Council of Economic Advisers offers some background on these issues in its June 2016 Issue Brief, “Financial Inclusion in the United States.”

As a starting point, here are a couple of figures showing the trends in whether households have a “transaction account”–basically, a bank account. The share of households with a bank account is rising, but for families in the lower income percentiles, or for families with lower education levels (of course, these are often the same families), the chance of having a bank account remains substantially lower.


These families face out-of-pocket costs when dealing with the financial system. 

The unbanked pay anywhere from 1 to 5 percent in fees to cash a check (depending in part on whether it is a paycheck or a government check since the latter come with lower risk for the check-casher). At an annual salary of $22,000 (the average for unbanked households), such fees can total over $1,000 a year in extra costs for unbanked households. …
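The arithmetic behind that figure is easy to verify. A quick sketch, using the $22,000 average salary and the 1 to 5 percent fee range cited above:

```python
# Annual check-cashing costs for an unbanked worker, using the CEA brief's
# figures: a $22,000 average salary and fees of 1 to 5 percent per check.
annual_salary = 22_000
for fee_rate in (0.01, 0.03, 0.05):
    annual_fees = annual_salary * fee_rate
    print(f"fee rate {fee_rate:.0%}: ${annual_fees:,.0f} per year")
# At the top of the fee range, check-cashing alone costs over $1,000 a year.
```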

Many of the households that fall into this category have impaired credit histories and would fall into the “subprime” category. The products and services that these individuals may obtain from both bank and non-bank providers typically include money orders, check cashing, remittances, payday loans, auto title loans, and pawn shop loans (collectively known as small-dollar credit). …

Unbanked and underbanked households also face additional costs due to their reliance on other, non-mainstream financial services, such as payday loans (used by roughly 5 percent of households in 2013) and auto title loans (used by roughly 0.6 percent of households in 2013), among other forms of so-called small dollar credit. These products’ costs can be quite substantial—anywhere from $10 to $30 in extra costs per $100 borrowed in the case of payday loans. 
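Those per-$100 fees translate into startling annualized rates once the short loan term is taken into account. A rough sketch of the conversion (the 14-day term is a common assumption for payday loans, not a figure from the report):

```python
def payday_apr(fee_per_100: float, term_days: int = 14) -> float:
    """Simple (non-compounding) annualized rate implied by a payday loan fee."""
    periods_per_year = 365 / term_days
    return (fee_per_100 / 100) * periods_per_year

# The $10-$30 per $100 range cited in the CEA brief
for fee in (10, 20, 30):
    print(f"${fee} per $100 over 14 days is roughly {payday_apr(fee):.0%} annualized")
```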

My main quibble with the report is its use of the currently fashionable term “financial inclusion.” As the text of the report notes, the underlying issues here are more prosaic and obvious than a broad term like “inclusion” might suggest.

Despite the many benefits to financial inclusion, there are a number of barriers. One potential barrier is the upfront cost associated with opening a bank account, including fees associated with opening an account, minimum balance requirements, and gathering the necessary documentation and proof of identity, as well as the opportunity costs associated with opening an account or traveling to a bank branch to conduct business … 

There are ways to address some of these issues, like the widespread shift toward having government checks deposited directly to bank accounts, rather than delivered through the mail, which can create a tie to a bank for those who have not previously had such a connection. But most of the new financial technology firms or smartphone applications that one reads about still require overcoming these more basic hurdles first.

In my own thinking, there are two issues here that are somewhat separable. One is being able to carry out transactions like receiving checks or paying bills without needing to pay much in transactions costs for doing so. The other question, and it’s a harder question, is about what happens to low-income Americans who need short-term loans. Because of low and sporadic income levels, they are not good candidates for conventional loans, and thus end up turning to the “small-dollar credit” options above. Not only do these short-term lending options have relatively high up-front costs, but if they are not repaid on time, they often impose additional costs. But what’s to be done? As the report notes:

Despite the potentially adverse impacts that arise from a reliance on unconventional sources of small dollar credit, these products provide a source of funds for households who might not otherwise be able to cover such crucial but often unanticipated expenses as emergency medical treatment, funeral and burial preparations, and urgent home or automobile repairs, or sometimes even to cover regular expenses. These types of products often take the place of conventional accounts. … Moreover, some households depend on access to small-dollar credit not just for big, one-time expenditures but also for covering general, day-to-day living expenses, such as rent and utility payments. One survey found that 2 in 5 title loan users report using the borrowed funds for rent or utilities while about a quarter of borrowers used the loan for medical expenses or car repairs. 

It’s easy enough to deplore how these small-dollar credit institutions operate, and how people start by taking out one loan and end up trapped in a web of ever-growing debts. (Incidentally, the public sector isn’t innocent here, either, with a growing practice in recent years of imposing ever-greater fines, and then additional fines for not paying the earlier fines on time, which can weigh heavily on low-income families.) I’ve written here in the past about the issues with payday loans, in particular (for example, here, here and here).

But those with low incomes who are facing a sudden need for cash–an emergency medical cost, or a car repair that’s the difference between getting to work or not, or the possibility of having their power and water cut off, or being evicted from an apartment–may not have any especially appealing options. Before acting to limit or reduce the options that they do have, like the recent rules that seek to limit payday lending, it’s useful to think about what is going to happen to some of the people who can’t get that loan they need.

Aaron Klein of the Brookings Institution offers a useful primer on these issues in “Understanding non-prime borrowers and the need to regulate small dollar and ‘payday’ loans,” a short, readable paper published online on May 19, 2016.

Some Economics of Stop and Frisk

If almost every time the police stopped and frisked someone, they found an illegal weapon or drugs, or identified a wanted criminal, then complaints about the practice would ring hollow. On the other hand, if the police almost never found evidence of a crime during a stop and frisk, then complaints about the practice would take on a sharpened urgency.

Other evidence would be nice, too. It would be nice to know on what grounds the police are making decisions to stop and frisk, and whether some reasons for stop and frisk are more likely or less likely to lead to evidence of a crime. It would be nice to know the extent to which the well-known racial differences in stop-and-frisk are related to the practice occurring more in higher poverty, higher crime areas, which also have a racial imbalance. It would be nice to have some evidence on whether stop-and-frisks are more likely to lead to evidence of a crime for whites or blacks.

Sharad Goel, Justin M. Rao, and Ravi Shroff offer some evidence and analysis on these kinds of questions in their research paper “Precinct or Prejudice? Understanding Racial Disparities in New York City’s Stop-and-Frisk Policy,” which appeared earlier this year in the Annals of Applied Statistics (2016, 10:1, 365–394).

The authors have data on 2.9 million stops conducted by New York City police officers between 2008 and 2012. They write:

“Following a stop, officers complete a UF-250 stop-and-frisk form, recording various aspects of the stop, including demographic characteristics of the suspect, the time and location of the stop, the suspected crime and the rationale for the stop (e.g., whether the suspect was wearing clothing common in the commission of a crime). … After an individual is stopped, officers may conduct a frisk (i.e., a quick patdown of the person’s outer clothing) if they reasonably suspect the individual is armed and dangerous; officers may additionally conduct a search if they have probable cause of criminal activity. Frisks and searches occur in 56% and 9% of cases, respectively. An officer may decide to make an arrest (6% of instances) or issue a summons (6% of instances), all of which is recorded on the UF-250 form.”

Of course, it’s sensible to be skeptical about the quality of this evidence. For example, one might raise questions about how frequently or accurately these UF-250 forms are filled out. One answer to this concern is that because of past court cases, the NYPD places some explicit emphasis on filling out the forms, and filling them out accurately. Also, with data on several million forms, one should be able to learn something, even if lessons should be drawn with appropriate caution.

The researchers focus most of their discussion in this study on the 760,000 cases where the reason for the stop was suspicion of criminal possession of a weapon. This group of stops is useful to study because it’s the largest single reason for such stops, and because the data show whether a weapon was actually found, which means the analysis concerns a specific crime rather than jumbling all crimes together. The authors look at data mostly from 2009-2010, and calculate what factors listed on the UF-250 form–including personal characteristics of the suspect, the specific factors that the officer observed that led to the stop, and the location as determined by the police precinct–make it more or less likely that a weapon was actually found. The result is a big, messy statistical calculation, which for 2009-2010 includes 301,000 stops and 7,705 different variables (the large number of variables arises because they look at a number of potential variables both individually and in how the variables might interact with each other).
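The paper's model is a regularized regression with thousands of interaction terms, but the underlying idea can be shown in a bare-bones sketch: estimate, from past stops, how often each recorded circumstance was actually associated with finding a weapon. The stop records and circumstance labels below are invented for illustration, not drawn from the study's data.

```python
# Toy version of the authors' exercise: use past stops to estimate how
# often a recorded stop circumstance coincided with finding a weapon.
# Each record: (circumstances checked on the UF-250 form, weapon found?)
past_stops = [
    ({"suspicious bulge"}, True),
    ({"suspicious bulge", "furtive movements"}, True),
    ({"furtive movements"}, False),
    ({"furtive movements"}, False),
    ({"high crime area", "furtive movements"}, False),
    ({"suspicious object"}, True),
    ({"high crime area"}, False),
    ({"suspicious object", "high crime area"}, False),
]

def hit_rate(circumstance):
    """Fraction of past stops citing this circumstance that found a weapon."""
    outcomes = [found for circs, found in past_stops if circumstance in circs]
    return sum(outcomes) / len(outcomes)

for c in ("suspicious bulge", "suspicious object",
          "furtive movements", "high crime area"):
    print(f"{c}: {hit_rate(c):.0%}")
```

The actual study goes much further, interacting circumstances with precinct and suspect characteristics, which is what pushes the variable count into the thousands.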

Here’s the payoff: The authors can use the answers from their calculations on the first few years of the data to predict how likely any given stop-and-frisk looking for criminal possession of a weapon was to find such a weapon in the 2011-2012 data. For example, if an officer stopped someone for criminal possession of a weapon because the person was acting furtively and wearing suspicious clothing at a certain time of day in a certain precinct, what was the chance (based on data from the earlier time period) that the person actually had a weapon? If an officer in another precinct stopped someone for criminal possession of a weapon who was near a crime scene, matched a witness report, and changed direction when the officer came into view, what is the chance that that person actually had a weapon? The researchers write (citations, footnotes, and references to figures omitted for readability):

We find that in 43% of the approximately 300,000 CPW [criminal possession of a weapon] stops between 2011 and 2012, there was at most a 1% chance of finding a weapon on the suspect. We note that the recovered weapons are typically knives, with guns constituting approximately 10% of found weapons. …

In particular, consistent with past results, the overall hit rates for blacks and Hispanics (2.5% and 3.6%, resp.) are considerably lower than for whites (11%). In other words, these results indicate that when blacks and Hispanics are stopped, it is typically on the basis of less evidence than when white suspects are stopped. Moreover, while 49% of blacks stopped under suspicion of CPW [criminal  possession of a weapon] have less than a 1% chance of in fact possessing a weapon, the corresponding fraction for Hispanics is 34%, and is just 19% for stopped whites. Thus, if we equate reasonable suspicion with a particular probability threshold (say 1%), a far greater fraction of stops of blacks and Hispanics are unwarranted than are stops of whites. … 

However, … whites and minorities are typically stopped in different contexts, and so differing hit rates may not be the result of racial bias. Indeed, as we discuss below, stop-and-frisk is an extremely localized tactic, heavily concentrated in high-crime, predominantly black and Hispanic areas, and so lower tolerance for suspicious activity (and hence lower hit rates) in these areas could account for the racial disparity. …
[T]here is an almost one-to-one correspondence between high-crime areas and areas with heavy use of stop-and-frisk. While this is a natural and possibly effective policing strategy, a consequence of the tactic is that individuals who live in high-crime areas, but who are not themselves engaged in criminal activity, bear the costs associated with being stopped. … [T]hese high-crime areas are overwhelmingly black and Hispanic. Accordingly, the cost of stop-and-frisk is largely shouldered by minorities. … [W]e see that the racial composition of stopped individuals is similar to the racial composition of the neighborhoods in which stop-and-frisk is heavily employed. Thus, the striking racial composition of stopped CPW [criminal possession of a weapon] suspects (61% are black, 30% are Hispanic and 4% are white) appears at least qualitatively attributable to selective use of stop-and-frisk in minority-heavy areas …

In a more detailed analysis of the data, they find that the differing use of stop-and-frisk across neighborhoods accounts for part of the gap by which blacks and Hispanics are stopped and frisked more than whites, but not for all of it.

Another intriguing aspect of the study is that it can answer the question of which reasons–and remember, these are the reasons given by the police themselves–are more likely to uncover a concealed weapon. The UF-250 form lists 18 specific “stop circumstances” (there’s also a category for “other,” which they ignore). The 18 circumstances are: suspicious object, fits description, casing, acting as lookout, suspicious clothing, drug transaction, furtive movements, actions of violent crime, suspicious bulge, witness report, ongoing investigation, proximity to crime scene, evasive response, associating with criminals, changed direction, high crime area, time of day, sights and sounds of criminal activity.

The question is whether some of these circumstances are more likely, as revealed by the actual evidence, to lead to the discovery of a concealed weapon than others. The authors look at these 18 factors together with each of the 77 police precincts and also whether the stop-and-frisk happened at a public housing location, a transit stop, or elsewhere. Notice that this is not an exercise in 20/20 hindsight: instead, it’s looking at the circumstances that police actually reported seeing at the time, and then seeing what worked. Basically, they find that three of the circumstances were good predictors of a concealed weapon: suspicious object, sights and sounds of criminal activity, and suspicious bulge. The other 15 circumstances were either barely connected to finding a concealed weapon, or not connected at all.

An obvious policy choice suggests itself here. The NYPD is often using stop-and-frisk to look for weapons based on the police observing certain circumstances that are quite unlikely to be associated with a concealed weapon. If the NYPD no longer stopped people on suspicion of criminal possession of a weapon based on furtive movements, acting as a lookout, changing direction, and many of the other reasons given, it could focus on the circumstances that are more likely to actually end up finding a concealed weapon. The authors write:

In particular, we show that one can recover 50% of weapons by conducting only the 6% of CPW [criminal possession of a weapon] stops with the highest ex ante hit rate, and 90% of weapons by conducting 58% of CPW stops. These ex ante hit rates are based only on information observable to officers prior to the stop decision, and so it is at least in theory possible to implement such a strategy. Further, since low hit rate stops disproportionately involve blacks and Hispanics, optimizing for weapons recovery would simultaneously bring more racial balance to stop-and-frisk. To facilitate adoption of such strategies by police departments, we develop stop heuristics that approximate our full statistical model via a simple scoring rule. Specifically, we show that with a rule consisting of only three weighted stop criteria, one can recover the majority of weapons by conducting 8% of stops.
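The logic behind percentages like these is straightforward: rank candidate stops by their ex ante hit rate and ask what share of all weapons the top slice alone would have recovered. A minimal sketch, with probabilities and outcomes invented for illustration:

```python
def share_of_weapons_recovered(stops, top_fraction):
    """stops: (ex_ante_prob, weapon_found) pairs; keep only the top-ranked slice."""
    ranked = sorted(stops, key=lambda s: s[0], reverse=True)
    kept = ranked[:int(len(ranked) * top_fraction)]
    return sum(found for _, found in kept) / sum(found for _, found in stops)

# Eight hypothetical stops: predicted hit rate and whether a weapon was found
stops = [(0.30, 1), (0.25, 1), (0.20, 0), (0.05, 0),
         (0.04, 1), (0.02, 0), (0.01, 0), (0.01, 0)]
print(share_of_weapons_recovered(stops, 0.25))  # top quarter of stops finds 2 of 3 weapons
```

Because high-probability stops account for most weapons found, a small, well-chosen fraction of stops can recover a large share of them, which is the pattern the authors report.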

An obvious concern about these results is that perhaps stopping and frisking people for criminal possession of a weapon is just an excuse, but it’s nonetheless a useful excuse for reducing the rate of crime. The authors discuss the point this way:

A possible objection to our approach is that even for CPW [criminal possession of a weapon] stops, recovering weapons is not the only—or perhaps not even the primary—goal of the police. Officers, for example, may simply consider stops a way to advertise their presence in the neighborhood or a means to collect intelligence on criminal activity in the area, regardless of how many weapons are directly recovered. Stops conducted for these alternative motives could quite plausibly deter individuals from carrying weapons and might lead to information helpful in solving cases, both of which presumably would lower the incidence of violent crime over time. In the instances we consider, however, the explicitly stated reason for a stop is suspicion of criminal possession of a weapon, not one of the various other reasons that may or may not withstand legal or public scrutiny, and so it seems most natural to consider whether individuals were in fact likely to be carrying weapons. Moreover, as we have previously noted, simply because a strategy may be effective does not make it legal. … A related worry is that “criminal possession of a weapon” is a catchall category for a variety of criminal offenses, and so by focusing on whether a weapon was found, we underestimate the value of a stop. Addressing this issue, we observe that our results are qualitatively similar if we instead use arrests [for any reason] as the outcome variable, mitigating cause for concern.

The authors instead draw the conclusion that stop-and-frisk can be a useful tool for police work, but when it comes to criminal possession of a weapon, it’s a tool that’s being overused. They conclude:

By focusing on the relatively small number of high hit rate situations—situations that can be reliably identified via statistical analysis—one may be able to retain many of the benefits of stop-and-frisk for crime prevention while mitigating constitutional violations. This observation has the potential to not only improve New York City’s stop-and-frisk program, but could also aid similar policies throughout the country.

The Big Question: Has Robust Growth Deserted Us?

Aaron Steelman and John A. Weinberg provide a useful overview of the biggest long-term economic question facing the US economy–and arguably the economies of the other high-income countries of the world as well–in their essay “A ‘New Normal’? The Prospects for Long-term Growth in the United States.” The essay appears in the just-released 2015 Annual Report of the Federal Reserve Bank of Richmond.

The essay has a nice readable overview of how economists have thought about the fundamental determinants of growth, from the work of Robert Solow up through modern economists like Paul Romer, Charles Jones, and others. Here, I’ll focus instead on how some prominent thinkers have phrased the current challenges of long-run growth. Here’s the stage-setter:

“To measure improvement in average standards of living, growth of GDP per capita is the standard yardstick. The post-war average of 3.4 percent overall growth translated to an average growth rate per capita of about 2.1 percent. … Since 2010, the U.S. economy has grown at a rate of roughly 2.1 percent annually, which translates to an average growth rate per capita of about 1.3 percent, both well below the post-World War II rates prior to the Great Recession and, perhaps more notably, far below what has been seen in “catch-up” periods following previous significant downturns.”
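Since per-capita growth is approximately overall growth minus population growth, these pairs of figures imply the population growth rates behind them. A quick check of the arithmetic:

```python
# Per-capita growth is approximately overall GDP growth minus population
# growth, so each quoted pair of figures implies a population growth rate.
postwar_gdp_growth, postwar_per_capita = 0.034, 0.021
recent_gdp_growth, recent_per_capita = 0.021, 0.013

implied_pop_growth_postwar = postwar_gdp_growth - postwar_per_capita
implied_pop_growth_recent = recent_gdp_growth - recent_per_capita
print(f"implied population growth, post-war: {implied_pop_growth_postwar:.1%}")
print(f"implied population growth, since 2010: {implied_pop_growth_recent:.1%}")
```

On these figures, part of the slowdown in overall growth reflects slower population growth, but per-capita growth has clearly slowed as well.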

What’s the case for pessimism that the US will experience a sustained slowdown of long-run growth? The essay brings out the themes by discussing the work of two economists: Tyler Cowen and Robert Gordon.

“As Tyler Cowen, an economist at George Mason University, put it in his 2011 book The Great Stagnation, the United States has “built social and economic institutions on the expectation of a lot of low-hanging fruit, but that fruit is mostly gone” and has been since roughly the early 1970s. In particular, he identifies three types of increasingly scarce “fruit”: free land, technological breakthroughs, and smart but relatively uneducated kids. Regarding the first, until the beginning of the 20th century, free and fertile American land was plentiful and not only “did the United States reap a huge bounty from the free land (often stolen from Native Americans, one should not forget), but abundant resources helped the United States attract many of the brightest and most ambitious workers from Europe,” Cowen writes. “Taking in these workers, and letting them cultivate the land, was like plucking low-hanging fruit.” Second, Cowen also sees technological innovation, and especially breakthroughs, as slowing. “Life is better and we have more stuff, but the pace of change has slowed down compared to what people saw two or three generations ago.” Third, in 1900, a very small percentage of Americans graduated from high school, while estimates of high school completion today range from roughly 75 percent to 90 percent. …

“In addition to a slowing rate of innovation, [Robert] Gordon, as noted before, argues that the U.S. economy faces four big headwinds. First, there’s rising income inequality, which has reduced the share of economic gains going to the middle and working classes and with it their disposable income and purchasing power. Second, growth in educational attainment as measured by years of schooling completed has slowed and, among some parts of the population, decreased since 1970. In addition, the quality of primary and secondary education has become more stratified and the costs of higher education have increased. Such trends in education are themselves a contributor to the first headwind, growing income inequality. Third, the United States is experiencing significant demographic changes, most significantly many baby boomers are reaching traditional retirement age. That has reduced the number of hours worked per person. In addition, labor force participation among people who have not yet reached retirement age has dropped. Fourth, federal, state, and local governments face mounting debt, in large measure due to the aging of the population, as spending on “entitlement” programs such as Social Security and Medicare increases and pension obligations to public-sector employees grow. Gordon identifies two additional headwinds, which he thinks could be barriers to growth, though they are hard to quantify: “globalization,” which could add to growing income inequality, and global warming and other environmental issues, which could require significant resources to address.”

That’s a formidable list of concerns. But it’s worth remembering that the term “secular stagnation” was first used by economists back in the 1930s, and the main concerns at the time included a lack of investment because of–at that time–a lack of invention, a lack of new resources and land, and slow population growth. Nevertheless, relatively faster growth arrived in the decades that followed. It’s notoriously difficult to predict whether and when new technologies will arrive, or how they will be commercialized. The essay notes:

[E]conomic historian Joel Mokyr …  argues that there are many areas of science in which significant discoveries seem promising, among them molecular microbiology, astronomy, nanochemistry, and genetic engineering. And while it is true that there is no automatic mechanism that turns better science into improved technology, “there is one reason to believe that in the near future it will do so better and more efficiently than ever before. The reason is access.” Meaning, searching for vast amounts of information has become fast, easy, and nearly costless for researchers. Not only is the era of “Big Data” here but the ability to parse through the most arcane of data is no longer burdensome for people working on the frontiers of knowledge. On the question of whether all the low-hanging fruit has been picked, Mokyr argues that the analogy is flawed. As he puts it, science “builds taller and taller ladders, so we can reach the upper branches, and then the branches above them.” In other words, when a technological solution for a problem is found it often creates a new problem, which creates a new problem, and so on. “Each solution perturbs some other component in the system and sows the seed of more needs; the ‘demand’ for new technology is thus self-sustaining.”

Some of the issues that may be leading to slower growth are hard to move with policy tools, like lower birth rates and an aging population. But other factors affecting growth might be shifted. When I look at the state of US K-12 education, and higher education for that matter, it seems to me that the United States has substantial possibilities for gains in cognitive and noncognitive skills. The US could use a dramatic rise in research and development spending. Infrastructure investments could have substantial payoffs, although I’ve argued on this blog that while fixing roads and bridges is fine, US economic growth in the 21st century is more likely to rely on information-related and energy infrastructures, along with planes and trains and ships, rather than just pavement and asphalt. The US economy has seen a decline over several decades in the rate of new-business startups, which in turn hinders job creation.

The optimistic case for at least a modest resurgence of long-run growth often starts by pointing out that when a major financial crisis and a Great Recession hit at the same time, the recovery is likely to be slow, because so many firms and households all need to address the overload of debt they had previously built up. It then points to the possibilities of new technologies, intermixing and spreading in an interconnected global economy. And the optimistic view hopes for a supportive policy environment along the way. For example, this is more-or-less the position taken by Fed Chair Janet Yellen in a speech earlier this week:

There is some evidence that the deep recession had a long-lasting effect in depressing investment, research and development spending, and the start-up of new firms, and that these factors have, in turn, lowered productivity growth. With time, I expect this effect to ease in a stronger economy. I also see no obvious slowdown in the pace or the potential benefits of innovation in America, which likewise may bear fruit more readily in a stronger economy. In the meantime, it would be helpful to adopt public policies designed to boost productivity. Strengthening education and promoting innovation and investment, public and private, will support longer-term growth in productivity and greater gains in living standards.

One other angle on the question of the growth slowdown involves what is being measured: specifically, are many people experiencing gains in their standard of living that aren’t well-measured in the economic statistics? Steelman and Weinberg quote Angus Deaton on this point:

I…challenge the proposition that the information revolution and its associated devices do little for human well-being. Many have documented the importance of spending time and socializing with friends and family, but this is exactly the feature of everyday life that the new communication methods work to enhance. All of us can remain in touch with our children and friends throughout every day, videoconferencing is essentially free, and we can cultivate close relationships with people who live thousands of miles away. When my parents said good-bye to relatives and friends who left Scotland to look for better lives in Canada and Australia, they never expected to see or talk to them again, except perhaps for a brief and astronomically expensive phone call when someone died. Today, we often do not even know where people are physically located when we work with them, talk to them, or play with them. We can also enjoy the great human achievements of the past and the present, cheaply accessing literature, music, and movies at any time and in any place. That these joys are not captured in growth statistics tells us about the growth statistics, not about the technology. 

My guess is that economic statistics have often understated the actual gains to individuals in the past, as well. The widespread presence of television, telephone, radio, and even the humble photograph and book allowed people to be in touch with others and with what Deaton calls “great human achievements of the past and present” in new ways, too. But the new information technologies have certainly ramped up these possibilities to a whole new level.

A final broad issue underlying economic growth is whether we wish to be a society that expects change, embraces change, and is designed to facilitate the dislocations of persistent change–or not.

When Finance Becomes Self-Referential

The Bank for International Settlements has just published a group of working papers based on its annual conference held in June 2015. John Kay delivered the keynote address, “Finance is just another industry,” which appears as part of “Towards a ‘new normal’ in financial markets?”, a collection that also includes essays by Jaime Caruana and Paul Tucker (BIS Papers No 84, May 2016).

Kay’s keynote address is lively to read and full of vivid examples and metaphors. For me, a main theme is that what most of us think of as “finance” can be divided into two parts: the part that directly helps actual people and firms and governments operate in the real world, and the part where the financial sector becomes self-referential and starts to interact largely with itself. Of course, this distinction is more like a spectrum, where one category shades into the other, rather than a black-and-white binary distinction. But the distinction is useful nonetheless. Here are some thoughts from Kay:

On the contributions that finance makes to society.

Finance can contribute to society and the economy in four principal ways. First, the payments system is the means by which we receive wages and salaries, and buy the goods and services we need; the same payments system enables business to contribute to these purposes. Second, finance matches lenders with borrowers, helping to direct savings to their most effective uses. Third, finance enables us to manage our personal finances across our lifetimes and between generations. Fourth, finance helps both individuals and businesses to manage the risks inevitably associated with everyday life and economic activity. These four functions – the payments system, the matching of borrowers and lenders, the management of our household financial affairs, and the control of risk – are the services which finance does, or at least can, provide. The utility of financial innovation is measured by the degree to which it advances the goals of making payments, allocating capital, managing personal finances and handling risk. Most people who work in finance are concerned with the first two of these functions. They operate the payments system, they help households with their personal finances. They are not aspiring Masters of the Universe. Mostly, they earn modest salaries. Half of the employees of Barclays Bank earn less than £25,000 ($40,000) per year. But Barclays also employs 530 “code staff” – people with executive functions – who earn an average of £1.3m each, and there are 1443 who earn more than £500,000 ($800,000). It is likely that “the one per cent” in Barclays Bank earn a total approaching half of the total wage and salary bill of the bank. Most of these people are employed in wholesale rather than retail finance. Their activities relate mainly to the other objectives of the financial system – capital allocation and risk management.

Although one of the functions of the financial sector is that it "matches lenders with borrowers," Kay points out that a substantial part of investment--whether it's a physical capital investment or an investment in research and development and firm-specific skills for employees--is financed internally by firms out of their profits, not as a result of financial sector interactions.

ExxonMobil is both the most profitable company in the United States and the biggest private investor. Massive expenditure on exploration and development and on infrastructure is necessary every year to exploit new energy resources and bring oil products to market. In 2013, ExxonMobil invested $20 billion. That figure was in itself a significant fraction of total investment by US corporations. Exxon got all of that money from its own internal resources. In 2013, ExxonMobil spent $16 billion buying back its own shares, in addition to the $11 billion the company paid in dividends to shareholders. The company’s short- and long-term debt levels were virtually unchanged. It raised no net new capital at all. Nor was 2013 an exceptional year. Over the five years up to and including 2013, the activities of the corporation generated almost $250 billion in cash, around twice the amount it invested. ExxonMobil did not raise any new capital in these five years either. Instead the company spent around $100 billion buying back securities it had previously issued. Oil exploration, production and distribution are capital-intensive. Many modern companies need very little capital. The stock market capitalisation of Apple – the total market value of the company’s shares – is over $500 billion. Although the corporation has large cash balances – currently around $150 billion – it has few other tangible assets. Manufacturing is subcontracted. Apple is building a new headquarters building in Cupertino at an estimated cost of $5 billion which will be its principal physical asset. The corporation currently occupies a variety of properties in that town, some of them owned, others leased. The flagship UK store on London’s Regent Street is jointly owned by the Queen and the Norwegian sovereign wealth fund. Operating assets therefore represent only around 3% of the estimated value of Apple’s business. 

At some point, Kay argues, financial markets can become self-referential. For example,

We need a finance sector to manage our payments, finance our housing stock, restore our infrastructure, fund our retirement and support new business. But very little of the expertise that exists in the finance industry today relates to the facilitation of payments, the provision of housing, the management of large construction projects, the needs of the elderly or the nurturing of small businesses. The process of financial intermediation has become an end in itself. The expertise that is valued is understanding of the activities of other financial intermediaries. That expertise is devoted not to the creation of new assets, but to the rearrangement of those that already exist. High salaries and bonuses are awarded not for fine appreciation of the needs of users of financial services, but for outwitting competing market participants. In the most extreme manifestation of a sector which has lost sight of its purposes, some of the finest mathematical and scientific minds on the planet are employed to devise algorithms for computerised trading in securities which exploit the weaknesses of other algorithms for computerised trading in securities. …

Nothing illustrates the self-referential nature of the dialogue in modern financial markets more clearly than this constant repetition of the mantra of liquidity. End users of finance – households, non-financial businesses, governments – do have a requirement for liquidity, which is why they hold deposits and seek overdraft or credit card facilities and, as described above, why it is essential that the banking system is consistently able to meet their needs. But these end users – households, non-financial businesses, governments – have very modest requirements for liquidity from securities markets. Households do need to be able to realise their investments to deal with emergencies or to fund their retirement, businesses will sometimes need to make large, lumpy investments, governments must be able to refinance their maturing debt. But these needs could be met in almost all cases if markets opened once a week – perhaps once a year – for small volumes of trade. … The need for extreme liquidity, the capacity to trade in volume (or at least trade) every millisecond, is not a need transmitted to markets from the demands of the final users of these markets, but a need, or a perceived need, created by financial market participants themselves. People who applaud traders for providing liquidity to markets are often saying little more than that trading facilitates trading – an observation which is true, but of very little general interest.

The question becomes how to assure that the financial system is resilient and robust, so that when the masters of finance are chasing their own tails, the rest of the economy isn't upset. In any complex system, trying to assure that no component will ever fail is a foolish goal. Failures are going to happen; in the financial system, banks and other financial institutions will sometimes fail. The goal needs to be a system that is resilient when such failures occur. Here's Kay:

The organisational sociologist Charles Perrow has studied the robustness and resilience of engineering systems in different contexts, such as nuclear power stations and marine accidents. Robustness and resilience require that individual components of the system are designed to high standards. Demands for higher levels of capital and liquidity are intended to strengthen the component units of the financial system. But the levels of capital and liquidity envisaged are inadequate – laughably inadequate – relative to the scale of resources required to protect financial institutions against panics such as the global financial crisis. More significantly, resilience of individual components is not always necessary, and never sufficient, to achieve system stability. Failures in complex systems are inevitable, and no one can ever be confident of anticipating the full variety of interactions which will be involved. Engineers responsible for interactively complex systems have learnt that stability requires conscious and systematic simplification, modularity which enables failures to be contained, and redundancy which allows failed elements to be bypassed. None of these features – simplification, modularity, redundancy – were characteristic of the financial system as it had developed in 2008. On the contrary. Financialisation had greatly increased complexity, interaction and interdependence. Redundancy – as, for example, in holding capital above the regulatory minimum – was everywhere regarded as an indicator of inefficiency, not of resilience.

BIS has also put the other papers delivered at its annual conference last summer up online. In particular, the paper by Andrew Lo, "Moore's Law vs. Murphy's Law in the financial system: who's winning?" dovetails nicely with some of the themes raised by Kay, and digs into some of the same issues. From the abstract of Lo's paper:

"Breakthroughs in computing hardware, software, telecommunications and data analytics have transformed the financial industry, enabling a host of new products and services such as automated trading algorithms, crypto-currencies, mobile banking, crowdfunding and robo-advisors. However, the unintended consequences of technology-leveraged finance include firesales, flash crashes, botched initial public offerings, cybersecurity breaches, catastrophic algorithmic trading errors and a technological arms race that has created new winners, losers and systemic risk in the financial ecosystem. These challenges are an unavoidable aspect of the growing importance of finance in an increasingly digital society. Rather than fighting this trend or forswearing technology, the ultimate solution is to develop more robust technology capable of adapting to the foibles in human behaviour so users can employ these tools safely, effectively and effortlessly."

Here are the rest of the links:

"Mobile collateral versus immobile collateral," BIS Working Papers No 561
by Gary Gorton and Tyler Muir
Comments by Randall S Kroszner and Andrei Kirilenko

"Expectations and investment," BIS Working Papers No 562
by Nicola Gennaioli, Yueran Ma and Andrei Shleifer
Comments by Philipp Hildebrand

"Who supplies liquidity, how and when?" BIS Working Papers No 563
by Bruno Biais, Fany Declerck and Sophie Moinas
Comments by Arminio Fraga and Francesco Papadia

"Moore's Law vs. Murphy's Law in the financial system: who's winning?" BIS Working Papers No 564
by Andrew W Lo
Comments by Darrell Duffie and a written contribution by Benoît Coeuré

Update on Remittances

Remittances refer to money that emigrants send back to their country of origin, and in the context of global capital flows, they are a big deal. The report Migration and Remittances: Recent Developments and Outlook, published by the Global Knowledge Partnership on Migration and Development (KNOMAD) in April 2016, offers an overview of trends and patterns. (For the record, KNOMAD is funded by the governments of Germany, Sweden, and Switzerland, and administered by the World Bank.)

Consider four of the ways in which capital can move to the economies of developing countries: 1) foreign direct investment (that is, having an ownership share in the investment); 2) private debt and equity; 3) official development assistance; and 4) remittances. Back in the mid-1990s, these four were all very roughly the same size, or at least within shouting distance of each other. But while official development assistance has risen only a bit, the other three categories have all grown dramatically, as shown in the figure.

Remittances are about the same size as flows of private debt and equity. And while remittances haven't grown as fast as foreign direct investment, they have risen much more steadily, without the peaks and valleys. Remittances aren't driven by the fluctuations and trend-chasing that affect foreign direct investment and private investments in debt and equity. But remittances are affected by broad economic changes: for example, the fall in the price of oil means that remittances from Russia dropped, while the improvement in the US economy in the last few years has helped to raise remittances moving from the US to Latin America.

One reason for the rise in remittances is simply that the total number of people who have migrated to a different country has risen--although the rise in the total global number of migrants of about 50% in the last 20 years doesn't come close to explaining the rise in remittances during that time.

Another change is that it has become cheaper and easier over time for a worker to send funds back to his or her country of origin. The decline in the last few years is mild, but real. However, I suspect that if comparable statistics were available for the 1990s, they would show that sending funds home was considerably more complex and costly then.

Remittances don't get as much attention as they deserve. We're often focused on the international capital flows connected with governments, or with big global banks and businesses. But while remittances are very large overall at $433 billion in 2015, they mainly consist of relatively small-scale payments happening between family and community networks. In many cases, remittances provide the cash for ordinary families to start a small business or pay for extra education. The report points out that when a natural disaster strikes a country, remittances to that country rise substantially as people around the world help out.

For a more detailed overview of this topic, I recommend Dean Yang's article on "Migrant Remittances" in the Spring 2011 issue of the Journal of Economic Perspectives as a starting point. (Full disclosure: I've labored in the fields as Managing Editor of JEP since the first issue in 1987.)

The Collective Action Problem of Resistance to Antibiotics

"We estimate that by 2050, 10 million lives a year and a cumulative 100 trillion USD of economic output are at risk due to the rise of drug-resistant infections if we do not find proactive solutions now to slow down the rise of drug resistance. Even today, 700,000 people die of resistant infections every year. Antibiotics are a special category of antimicrobial drugs that underpin modern medicine as we know it: if they lose their effectiveness, key medical procedures (such as gut surgery, caesarean sections, joint replacements, and treatments that depress the immune system, such as chemotherapy for cancer) could become too dangerous to perform. Most of the direct and much of the indirect impact of AMR [anti-microbial resistance] will fall on low and middle-income countries. It does not have to be this way. … The economic impact is also already material. In the US alone, more than two million infections a year are caused by bacteria that are resistant to at least first-line antibiotic treatments, costing the US health system 20 billion USD in excess costs each year."

This is from the report "Tackling Drug-Resistant Infections Globally: Final Report and Recommendations," from the Review on Antimicrobial Resistance, which was set up by the UK government, funded by the Wellcome Trust and the UK Department of Health, and chaired by Jim O'Neill (who was chief economist at Goldman Sachs for many years and is known as the originator of the acronym "BRICs" to refer to the emerging economies of Brazil, Russia, India, and China). Background reports and supporting documentation are available here.

For economists, antibiotic resistance falls into the analytical category of collective action problems: situations where economic actors in pursuit of private gain have no incentive to take a social cost into account. Problems of air and water pollution can fall into this category. In the case of antibiotics, they clearly help many sick people, and can help livestock gain weight, too. But those using antibiotics for private gain have no incentive to take into account that when antibiotics are commonly used, resistance evolves in a way that can make them less effective, or just plain ineffective. The report (on p. 16) includes a discussion of the issue of antibiotic resistance in the terms economists prefer to use: externalities, imperfect information, and public goods.

The policies to address this issue are conceptually straightforward. Over the longer term, provide incentives for companies to do research and development that can lead to new antibiotics (as well as other methods of fighting bacterial infections). Given that many existing antibiotics are off-patent and available in cheap generic versions, and also given that doctors might prefer to hold fancy new antibiotics in reserve unless or until the current versions don't work, creating new antibiotics may not look like a very encouraging market to pursue without some additional policy steps. In the shorter term, the policies need to be about reducing the overly casual use of antibiotics. If antibiotics are used only when really needed, then the problem of antibiotic resistance can be mitigated. Here are a few of the steps along these lines that jumped out at me from the report.

1) In the past, many doctors have prescribed antibiotics on the "it can't hurt" philosophy, and while antibiotics are unlikely to hurt that particular patient, the broader social problem of antibiotic resistance can indeed hurt. Thus, one set of policies would encourage doctors to prescribe antibiotics only when really needed. "One study showed that in Belgium, campaigns to reduce antibiotic use during the winter flu season resulted in a 36 percent reduction in prescriptions. Over 16 years, the cumulative savings in drug costs alone amounted to around 130 Euros (150 USD) per Euro spent on the campaign." The key point here is that antibiotics only work against bacterial infections, and do nothing at all against viruses. The report points out that diarrhoeal illness kills about 1.1 million people per year in low and middle-income countries. However, about "70 percent of episodes of diarrhoeal illness are caused by viral, rather than bacterial infections, against which antibiotics are ineffective – and yet antibiotics will frequently be used as a treatment."

Perhaps the most important development here would be rapid diagnostic tools, so that doctors could tell more or less in real time--or perhaps within a few hours--whether an infection is bacterial and which specific bacterium is involved. This technology would mean that antibiotics could be used much less and targeted much better. As the report notes, that is not what happens now.

"When doctors and other medical professionals decide whether to prescribe an antibiotic, they usually use so-called ‘empirical’ diagnosis: they will use their expertise, intuition and professional judgement to ‘guess’ whether an infection is present and what is likely to be causing it, and thus the most appropriate treatment. In some instances, diagnostic tools are used later to confirm or change that prescription. This process has remained basically unchanged in decades: most of these tests are lab-based, and would look familiar to a doctor trained in the 1950s, using processes that originated in the 1860s. Bacteria must be cultured for 36 hours or more to confirm the type of infection and the drugs to which it is susceptible. An acutely ill patient cannot wait this long for treatment, and even when the health risks are not that high, most doctors’ surgeries and pharmacies are under time, patient and financial pressure, and must address patients’ needs much faster."

2) Take public health measures to avoid people getting sick in the first place, so that antibiotics are less needed for that reason. Especially in developing countries, major steps to reduce disease include better sanitation and clean water, along with vaccination campaigns. In developed countries, a main focus should be reducing infections that arise in health care settings: "Across developed countries, between seven and 10 percent of all hospital inpatients will contract some form of healthcare‑associated infection (HCAI), a figure that rises to one patient in three in intensive care units (ICUs). These levels of incidence are even higher in low and middle-income settings, where healthcare facilities can face extreme constraints, sometimes as fundamental as access to running water for cleaning and handwashing."

3) Dramatically reduce the use of antibiotics in agriculture, where they are often used not just to treat animals that are sick, but as a sort of all-purpose aid to keep animals from falling sick and to help them gain weight. These antibiotics often work their way into the environment--say, through disposal of animal waste products--and thus spur bacteria to become resistant. "The quantity of antibiotics used in livestock is vast, and often includes those medicines that are important for humans. In the US, for example, of the antibiotics defined as medically important for humans by the FDA, over 70 percent of the total volume used (by weight) are sold for use in animals. Many other countries are also likely to use more antibiotics in agriculture than in humans, but they do not even hold or publish the information."

Moving ahead with these kinds of policy steps should be an urgent priority. A family of bacteria resistant to the antibiotics usually saved for a last resort has recently been found in a US patient. (The scientific article on this discovery in the journal Antimicrobial Agents and Chemotherapy is available here.)

For two previous posts on antibiotic resistance, see:

Homage: Like many others, I suspect, I ran across this particular report on antibiotic resistance because of a cover story in the Economist magazine of May 21, 2016: the Economist leader is here; the more detailed article here.

The Economies of Africa: Will Bust Follow Boom?

The economies of sub-Saharan Africa face a big question. Growth of real GDP in the last 15 years has averaged about 5% per year, as compared to 2% per year back in the 1980s and 1990s. But was this rapid growth mainly a matter of high prices for oil and other commodities, combined with high levels of China-driven external investment? If so, then Africa's growth is likely to diminish sharply now that oil and commodity prices have fallen and China's growth has slowed. Or was Africa's rapid growth in the last 15 years built at least in part on sturdier and more lasting foundations? The June 2016 issue of Finance & Development, published by the International Monetary Fund, tackles this topic with a symposium of nine readable articles on "Africa: Growth's Ups and Downs." In addition, the African Economic Outlook 2016, an annual report produced by the African Development Bank, the OECD Development Centre and the United Nations Development Programme, provides an overview of the economic situation in Africa as well as a set of chapters on the theme of "Sustainable Cities and Structural Transformation."

The overall perspective seems to be that while growth rates across the countries of Africa seem certain to slow down, some of the rise in growth will persist--especially if various supportive public policy steps can be enacted. An article by Stephen Radelet in Finance & Development, "Africa's Rise–Interrupted?", provides an overview of this perspective.

In summing up the current situation, Radelet writes:

"At a deeper level, although high commodity prices helped many countries, the development gains of the past two decades—where they occurred—had their roots in more fundamental factors, including improved governance, better policy management, and a new generation of skilled leaders in government and business, which are likely to persist into the future. … Overall growth is likely to slow in the next few years. But in the long run, the outlook for continued broad development progress is still solid for many countries in the region, especially those that diversify their economies, increase competitiveness, and further strengthen institutions of governance. … The view that Africa’s surge happened only because of the commodity price boom is too simplistic. It overlooks the acceleration in growth that started in 1995, seven years before commodity prices rose; the impact of commodity prices, which varied widely across countries (and hurt oil importers); and changes in governance, leadership, and policy that were critical catalysts for change."

Here's a graphic showing some of the main changes across Africa in the last couple of decades.

Radelet emphasizes that the countries of Africa are diverse, and economic policies and development patterns across the countries will not be identical. But he offers five overall themes for continued economic progress in Africa with relatively broad applicability.

First up is adroit macroeconomic management. Widening trade deficits are putting pressure on foreign exchange reserves and currencies, tempting policymakers to try to artificially hold exchange rates stable. Parallel exchange rates have begun to emerge in several countries. But since commodity prices are expected to remain low, defending fixed exchange rates is likely to lead to even bigger and more difficult exchange rate adjustments down the line. As difficult as it may be, countries must allow their currencies to depreciate to encourage exports, discourage imports, and maintain reserves. At the same time, budget deficits are widening, and with borrowing options limited, closing the gaps requires difficult choices. …

Second, countries must move aggressively to diversify their economies away from dependence on commodity exports. Governments must establish more favorable environments for private investment in downstream agricultural processing, manufacturing, and services (such as data entry), which can help expand job creation, accelerate long-term growth, reduce poverty, and minimize vulnerability to price volatility. … The exact steps will differ by country, but they begin with increasing agricultural productivity, creating more effective extension services, building better farm-to-market roads, ensuring that price and tariff policies do not penalize farmers, and investing in new seed and fertilizer varieties. Investments in power, roads, and water will be critical. As in east Asia, governments should coordinate public infrastructure investment in corridors, parks, and zones near population centers to benefit firms through increased access to electricity, lower transportation costs, and a pool of nearby workers, which can significantly reduce production costs. … At the same time, the basic costs of doing business remain high in many countries. To help firms compete, governments must lower tariff rates, cut red tape, and eliminate unnecessary regulations that inhibit business growth. Now is the time to slash business costs and help firms compete domestically, regionally, and globally.

Third, Africa’s surge of progress cannot persist without strong education and health systems. The increases in school enrollment and completion rates, especially for girls, are good first steps. But school quality suffers from outdated curricula, inadequate facilities, weak teacher training, insufficient local control, teacher absenteeism, and poor teacher pay. … Similarly, health systems remain weak, underfunded, and overburdened …

Fourth, continued long-term progress requires building institutions of good governance and deepening democracy. The transformation during the past two decades away from authoritarian rule is remarkable, but it remains incomplete. Better checks and balances on power through more effective legislative and judicial branches, increased transparency and accountability, and strengthening the voice of the people are what it takes to sustain progress. …

Finally, the international community has an important role to play. Foreign aid has helped support the surge of progress, and continued assistance will help mitigate the impacts of the current slowdown. Larger and longer-term commitments are required, especially for better-governed countries that have shown a strong commitment to progress. To the extent possible, direct budget support will help ease adjustment difficulties for countries hit hardest by commodity price shocks. In addition, donor financing for infrastructure—preferably as grants or low-interest loans—will help build the foundation for long-term growth and prosperity. Meanwhile, this is not the time for rich countries to turn inward and erect trade barriers. Rather, wealthy nations should encourage further progress and economic diversification by reducing barriers to trade for products from African countries whose economies are least developed.

One possible reaction to a list like that one is "yikes." If countries of Africa need all of those things to go right, then optimism about Africa's economic future begins to look like foolhardiness. But the other possible reaction is that not everything needs to go right all the time for ongoing progress to happen.

The African Economic Outlook 2016 fleshes out many of these themes in more detail, and offers some of its own. One theme the report emphasizes is the centrality of urban areas to the development path in many African countries (citations omitted from quotes).

The African continent is urbanising fast. The share of urban residents has increased from 14% in 1950 to 40% today. By the mid-2030s, 50% of Africans are expected to become urban dwellers … However, urbanisation is a necessary but insufficient condition for structural transformation. Many countries that are more than 50% urbanised still have low income levels. Urbanisation per se does not bring economic growth, though concentrating economic resources in one place can bring benefits. Further, rapid urbanisation does not necessarily correlate with fast economic growth: Gabon has a high annual urbanisation rate at 1 percentage point despite a negative annual economic growth rate of -0.6% between 1980 and 2011.

In addition, the benefits of agglomeration greatly depend on the local context, including the provision of public goods. Public goods possess non-rivalry and non-excludable benefits. Lack of sufficient public goods or their unsustainable provision can impose huge costs on third parties who are not necessarily involved in economic transactions. Congestion, overcrowding, overloaded infrastructure, pressure on ecosystems, higher costs of living, and higher labour and property costs can offset the benefits of concentrating economic resources in one place. These negative externalities tend to increase as cities grow. This is especially true if urban development is haphazard and public investment does not maintain and expand essential infrastructure. Dysfunctional systems, gridlocks, power cuts and insecure water supplies increase business costs, reduce productivity and deter private investment. In OECD countries, cities beyond an estimated 7 million inhabitants tend to generate such diseconomies of agglomeration. Hence, the balance between agglomeration economies and diseconomies may have an important influence on whether city economies continue to grow, stagnate or begin to decline.

The report also comments on what it calls "three-sector" development theory, which is the notion that economies move from being predominantly agricultural, to growth in manufacturing, to growth in services. In the context of African nations, it's not clear how economies with large oil or mineral resources fit into this framework, and in a world economy with rapidly growing robotics capabilities, it's not clear that low-wage manufacturing can work as a development path across Africa the way it did in, say, many parts of Asia. Here's a quick discussion of sectors of growth across Africa:

An examination of the fastest-growing African countries over the past five years reveals very different sector patterns (Table 1.2). In Nigeria, structural changes seem to be in accordance with traditional three-sector theory, as shares of the primary sector declined while those of other sectors increased. The share of agriculture also declined in many other countries, but increased in Kenya and Tanzania. The share of extractive industries declined in some countries but increased in others as new production started and boosted growth (oil in Ghana and iron-ore mining in Sierra Leone). The share of manufacturing increased in only a few countries (Niger, Nigeria and Uganda), but remained broadly constant or even declined in many others. In contrast, the construction and service sectors were important drivers of growth in many countries. In short, African countries are achieving growth performance with quite different sectoral patterns. However, the simplistic three-sector theory can be misleading as productivity is not only raised by factor reallocation between sectors, but also through modernisation and reallocation within sectors, as well as via better linkages between sectors. In particular, higher productivity in agriculture can boost food processing and leather processing and manufacturing to the benefit of both sectors.

For me, the ongoing theme in all discussions of Africa's economic future is an oscillation between encouragement over the progress that has occurred and a disheartened recognition of how much remains to be done. For example, the report includes a figure showing that hotel rooms across the countries of sub-Saharan Africa have grown by two-thirds in the last five years.

Hotels are in some ways a proxy for a certain level of business development, mobility between cities, local income levels, and tourism potential, so this rise is promising. On the other side, the total for all of sub-Saharan Africa is roughly 50,000 hotel rooms, and for comparison, the city of Las Vegas alone claims to have almost 150,000 hotel/motel rooms.

For those who want more, here are links to the full list of articles about Africa in the June 2016 Finance & Development:

Allocation of Scarce Elevators

In a perfect world, an elevator would always be waiting for me, and it would always take me to my desired floor without stopping along the way. But economics is about scarce resources. What about the problem of scarce elevators?

Jesse Dunietz offers an accessible overview of how such decisions are made in "The Hidden Science of Elevators: How powerful algorithms decide when that elevator car is finally going to come pick you up," in Popular Mechanics (May 24, 2016). For those who want all the details, Gina Barney and Lutfi Al-Sharif have just published the second edition of the Elevator Traffic Handbook: Theory and Practice, which with its 400+ pages seems to be the definitive book on this subject (although when I checked, still zero reviews of the book on Amazon). Some of the tome can be sampled here via Google. For example, it notes at the start:

"The vertical transportation problem can be summarised as the requirement to move a specific number of passengers from their origin floors to their respective destination floors with the minimum time for passenger waiting and travelling, using the minimum number of lifts, core space, and cost, as well as using the smallest amount of energy."

This problem of allocating elevators is complex in detail: not just the basics like the number and size of elevators, the total number of passengers, and the height of the building, but also questions about the usual timing of peak passenger loads. The problem is also complex because different parties bear different costs: passengers prefer short wait and travel times, which are time costs imposed on them, while building owners prefer to spend less on elevators, a cost they pay directly. It turns out that many people would rather have a shorter waiting time for an elevator, even if it might mean a longer travel time once inside the elevator. But although the problem of allocating elevators may not have a single best answer, some answers are better than others.
Of course, in the early days of elevators, they often had an actual human operator. When automated elevators arrived, and up until about a half-century ago, many of them operated rather like a bus route, Dunietz explains in Popular Mechanics: that is, they went up and down between floors on a preset timetable. This meant that passengers just had to wait for the elevator to cycle around to their floor, and the elevator ran even when it was empty.
In the mid-1960s, the "elevator algorithm" was developed. Dunietz describes it with two rules:

  1. As long as there's someone inside or ahead of the elevator who wants to go in the current direction, keep heading in that direction.
  2. Once the elevator has exhausted the requests in its current direction, switch directions if there\’s a request in the other direction. Otherwise, stop and wait for a call.

Not only is this algorithm still pretty common for elevators, but it is also used to govern the motion of disk drives when facing read and write requests–and the algorithm has its own Wikipedia entry.
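The two rules translate directly into code. Here is a minimal Python sketch of the algorithm; the floor numbering, the one-floor-per-step movement, and the little simulation loop are my own illustrative assumptions, not anything from Dunietz's article:

```python
# A minimal sketch of the two-rule "elevator algorithm" (the same idea is
# called SCAN in disk scheduling). Floors, step size, and the simulation
# loop are illustrative assumptions.

def elevator_step(floor, direction, requests):
    """Advance one time step. `requests` is a set of floors with pending
    calls; `direction` is +1 (up), -1 (down), or 0 (idle)."""
    if floor in requests:
        requests.discard(floor)              # serve the current floor
    if direction == 0:
        if requests:                         # idle: head toward a request
            direction = 1 if max(requests) > floor else -1
        return floor, direction
    # Rule 1: keep going while a request lies ahead in the current direction.
    if any((f - floor) * direction > 0 for f in requests):
        return floor + direction, direction
    # Rule 2: otherwise reverse if a request waits in the other direction...
    if any((f - floor) * direction < 0 for f in requests):
        return floor - direction, -direction
    return floor, 0                          # ...or stop and wait for a call.

# Usage: start at floor 3 heading up, with calls pending at floors 5 and 1.
floor, direction, requests = 3, 1, {5, 1}
visited = []
while direction != 0 or requests:
    floor, direction = elevator_step(floor, direction, requests)
    visited.append(floor)
```

The elevator rides up past floor 4 to serve floor 5, then reverses and sweeps down to serve floor 1 before going idle, which is exactly the sweep pattern the two rules describe.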

However, if you think about how the elevator algorithm works in tall buildings, you realize that it will spend a lot of time in the middle floors, and the waits at the top and the bottom can be extreme. Moreover, if a building has a bunch of elevators all responding to the same signals, all the elevators tend to bunch up near the middle floors, even leapfrogging each other and trying to answer the same signals. So the algorithm was tweaked so that only one elevator would respond to any given signal. Buildings were sometimes divided, so that some elevators only ran to certain groups of floors. Also, when an elevator was not in use, it would automatically return to the lobby (or some other high-departure floor).

By the 1970s, it became possible to encode the rules for allocating elevators into software, which could be tweaked and adjusted. For example, it became possible to use "estimated time of arrival" calculations (for example, here), which figure out which car can respond to a call first. Such algorithms can also take energy use or length-of-journey or other factors into account.
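As a rough illustration of the estimated-time-of-arrival idea, here is a hedged sketch: each car's arrival time at the calling floor is estimated from its distance plus a penalty for each stop it has already committed to along the way, and the call goes to the fastest car. The timing constants and the stop-penalty rule are invented for the example, not taken from any real controller:

```python
# Illustrative constants: assumed travel time between adjacent floors and
# an assumed door-dwell penalty for each committed stop along the way.
SECONDS_PER_FLOOR = 2.0
SECONDS_PER_STOP = 10.0

def eta(car_floor, committed_stops, call_floor):
    """Crude estimate: travel time plus a penalty for each already-committed
    stop lying between the car and the calling floor."""
    distance = abs(call_floor - car_floor)
    stops_on_way = sum(
        1 for s in committed_stops
        if min(car_floor, call_floor) < s < max(car_floor, call_floor)
    )
    return distance * SECONDS_PER_FLOOR + stops_on_way * SECONDS_PER_STOP

def assign_call(cars, call_floor):
    """cars maps a car name to (current_floor, committed_stops); the call
    goes to whichever car has the lowest estimated arrival time."""
    return min(cars, key=lambda name: eta(*cars[name], call_floor))

# Usage: car A is closer to the call at floor 7, but its two committed
# stops make it slower than car B coming down from floor 12.
cars = {
    "A": (1, [3, 5]),    # at floor 1, committed to stop at floors 3 and 5
    "B": (12, []),       # at floor 12, no committed stops
}
best = assign_call(cars, 7)
```

The point of the example is that "nearest car" and "fastest car" can differ once committed stops are counted, which is exactly the refinement ETA dispatch adds over the plain elevator algorithm.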
Another big step forward in the last decade or so is "destination dispatch," in which, when you call the elevator, you also tell it which floor you are going to. The elevator system can then group together people heading for similar floors. An article by Melanie D.G. Kaplan on ZDNet.com back in 2012 talks about how this kind of system created huge gains for the Marriott Marquis in Times Square in New York City. Before this system, people could wait 20-30 minutes for an elevator to show up. After the system was installed, there can still be some minutes of waiting at peak times, but as one measure, the number of written complaints about elevator delays went from five per week (!) to zero.
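The grouping idea can be shown with a toy sketch: sort the requested destination floors and batch nearby ones into the same car, so each car makes fewer stops. The contiguous-range split below is an invented heuristic for illustration, not the actual algorithm running at the Marriott Marquis:

```python
# A toy sketch of destination-dispatch grouping. Passengers enter their
# target floor in the lobby; the system batches nearby destinations into
# the same car. The contiguous-range split is an illustrative heuristic.

def group_by_destination(requested_floors, num_cars):
    """Split the sorted requested floors into `num_cars` contiguous groups
    of roughly equal size; each group rides together in one car."""
    floors = sorted(requested_floors)
    size, extra = divmod(len(floors), num_cars)
    groups, start = [], 0
    for car in range(num_cars):
        end = start + size + (1 if car < extra else 0)
        groups.append(floors[start:end])
        start = end
    return groups

# Usage: nine passengers, three cars -> low, middle, and high floors
# travel together, so each car stops at only three floors.
groups = group_by_destination([8, 3, 15, 4, 9, 14, 2, 16, 10], 3)
```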

The latest thing, as one might expect, is "machine learning"–that is, define for the elevator system what "success" looks like, and then let the elevator system experiment and learn how to allocate elevators, not just at a given moment in time, but by remembering how elevator traffic evolves from day to day and adjusting for that as well. The definition of "success" may vary across buildings: for example, "success" in a system of hospital elevators might mean that urgent health situations get an immediate elevator response, even if waiting time for others is increased. The machine learning approach leads to academic papers like "The implementation of reinforcement learning algorithms on the elevator control system," and ongoing research published in places like the proceedings of the annual conferences of the International Society of Elevator Engineers, or publications like the IEEE Transactions on Automation Science and Engineering.
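To make the reinforcement-learning idea concrete, here is a toy sketch of tabular Q-learning steering a single elevator toward a pending call. The state space, rewards, and learning constants are all invented for the illustration; real elevator systems are vastly more complex:

```python
# Toy Q-learning: the controller learns, by trial and error, that moving
# toward the pending call maximizes reward. All constants are invented.
import random

FLOORS = 6                   # floors numbered 0..5
ACTIONS = (-1, 1)            # move down or up one floor
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = {}                       # Q[((elev_floor, call_floor), action)] -> value

def step(elev, call, action):
    """Move one floor (clamped to the building); +10 reward for reaching
    the call floor, -1 per move otherwise."""
    elev = max(0, min(FLOORS - 1, elev + action))
    return elev, (10.0 if elev == call else -1.0)

def choose(state):
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

random.seed(0)
for _ in range(2000):        # training episodes with random start/call
    elev = random.randrange(FLOORS)
    call = random.randrange(FLOORS)
    while elev != call:
        state = (elev, call)
        a = choose(state)
        new_elev, r = step(elev, call, a)
        best_next = max(Q.get(((new_elev, call), b), 0.0) for b in ACTIONS)
        old = Q.get((state, a), 0.0)
        Q[(state, a)] = old + ALPHA * (r + GAMMA * best_next - old)
        elev = new_elev

# After training, the greedy policy at floor 1 with a call at floor 4
# should be "go up".
policy_up = max(ACTIONS, key=lambda a: Q.get(((1, 4), a), 0.0))
```

Notice that nothing in the code says "go toward the call"; the controller discovers that policy because it is the one that maximizes the defined reward, which is the sense in which the building's definition of "success" shapes the learned behavior.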
From an economic point of view, it will be intriguing to see how the machine learning rules evolve. In particular, it will be interesting to see whether machine learning rules that address the various tradeoffs of wait time, travel time, handling peak loads, and energy cost can be formulated in terms of the marginal costs and benefits framework that economists prefer–and whether the rules for elevator traffic find a use in organizing other kinds of traffic, from cars to online data.

US Corporate Stock: The Transition in Who Owns It

It used to be that most US corporate stock was held by taxable US investors. Now, most corporate stock is owned by a mixture of tax-deferred retirement accounts and foreign investors. Steven M. Rosenthal and Lydia S. Austin describe the transition in "The Dwindling Taxable Share of U.S. Corporate Stock," which appeared in Tax Notes (May 16, 2016, pp. 923-934), and is available here at the website of the ever-useful Tax Policy Center.

The gray area in the figure below shows the share of total US corporate equity owned by taxable accounts. A half-century ago, in the late 1960s, more than 80% of all corporate stock was held in taxable accounts; now, it's around 25%. The blue area shows the share of US corporate stock held by retirement plans, which is now about 35% of the total. The area above the blue line at the top of the figure shows the share of US corporate stock owned by foreign investors, which has now risen to 25%.

A few quick thoughts here:

1) These kinds of statistics require doing some analysis and extrapolation from various Federal Reserve data sources. Those who want details on methods should head for the article. But the results here are reasonably consistent with previous analysis.

2) The figures here are all about ownership of US corporate stock; that is, they don't have anything to say about US ownership of foreign stock.

3) One dimension of the shift described here is that the ownership of US stock is moving from taxable to less-taxable forms. Stock returns accumulate untaxed in retirement accounts until the funds are actually withdrawn and spent, which can happen decades later and (because post-retirement income is lower) at a lower tax rate. Foreigners who own US stock pay very little in US income tax–instead, they are responsible for taxes back in their home country.

4) There is an ongoing dispute about how to tax corporations. Economists are fond of pointing out that a corporation is just an organization, so when it pays taxes the money must come from some actual person, and the usual belief is that it comes from investors in the firm. If this is true, then cutting corporate taxes a half-century ago would have tended to raise the returns for taxable investors. However, cutting corporate taxes now would tend to raise returns for untaxed or lightly-taxed retirement funds and foreign investors. The tradeoffs of raising or lowering corporate taxes have shifted.
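The tax-deferral point in (3) can be illustrated with back-of-the-envelope arithmetic: the same pre-tax return compounds faster when tax is deferred until withdrawal. The 7% return, 30-year horizon, and tax rates below are invented for the example:

```python
# Back-of-the-envelope comparison: returns taxed every year versus returns
# that compound untaxed and are taxed once at withdrawal. All rates and
# horizons are invented for illustration.

def taxable_growth(principal, r, tax_rate, years):
    """Returns are taxed annually, so the account compounds at r*(1-t)."""
    return principal * (1 + r * (1 - tax_rate)) ** years

def deferred_growth(principal, r, withdrawal_tax, years):
    """Returns compound untaxed; the accumulated gain is taxed once, at
    withdrawal (often at a lower post-retirement rate)."""
    final = principal * (1 + r) ** years
    return principal + (final - principal) * (1 - withdrawal_tax)

# $10,000 invested at 7% for 30 years: taxed at 30% each year versus
# deferred and taxed at 20% on withdrawal.
taxable = taxable_growth(10_000, 0.07, 0.30, 30)
deferred = deferred_growth(10_000, 0.07, 0.20, 30)
```

Under these assumptions the deferred account ends up well ahead, for two stacked reasons: the untaxed compounding itself, and the lower tax rate applied when the money is finally withdrawn.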

Lessons for the Euro from Early American History

The euro is still a very young currency. When watching the struggles of the European Union over the euro, it's worth remembering that it took the US dollar a long time to become a functional currency. Jeffry Frieden looks at "Lessons for the Euro from Early American Monetary and Financial Experience," in a contribution written for the Bruegel Essay and Lecture Series published in May 2016. Frieden's lecture on the paper can be watched here. Here's how Frieden starts:

"Europe’s central goal for several decades has been to create an economic union that can provide monetary and financial stability. This goal is often compared, both by those that aspire to an American-style fully federal system and by those who would like to stop short of that, to the long-standing monetary union of the United States. The United States, after all, created a common monetary policy, and a banking union with harmonised regulatory standards. It backs the monetary and banking union with a series of automatic fiscal stabilisers that help soften the potential problems inherent in inter-regional variation.

Easy celebration of the successful American union ignores the fact that it took an extremely long time to accomplish. In fact, the completion of the American monetary, fiscal, and financial union is relatively recent. Just how recent depends on what one counts as an economic and monetary union, and how one counts. Despite some early stops and starts, the United States did not have an effective national currency until 75 years after the Constitution was adopted, starting with the National Banking Acts of 1863 and 1864. And only after another fifty years did the country have a central bank. Financial regulations have been fragmented since the founding of the Republic; many were federalised in the 1930s, but many remain decentralised. And most of the fiscal federalist mechanisms touted as prerequisites for a successful monetary union date to the 1930s at the earliest, and in some cases to the 1960s. The creation and completion of the American monetary and financial union was a long, laborious and politically conflictual process."

Frieden focuses in particular on some seminal events from the establishment of the US dollar. For example, there's a discussion of "Assumption," the policy under which Alexander Hamilton had "the Federal government recognise the state debts and exchange them for Federal obligations, which would be serviced. This meant that the Federal government would assume the debts of the several states and pay them off at something approaching face value." But after the establishment of a federal market for debt, the US government in the 1840s decided that it would not assume the debts of bankrupt states. A variety of other episodes are put into a broader context. In terms of overall lessons from early US experience for Europe as it seeks to establish the euro, the essay suggests that while Europe has created the euro, existing European institutions are not yet strong enough to sustain it:

One of the problems that Europe has faced in the past decade is the relative weakness of European institutions. Americans and foreigners had little reason to trust the willingness or ability of the new United States government to honour its obligations. Likewise, many in Europe and elsewhere have doubts about the seriousness with which EU and euro-area commitments can be taken. Just as Hamilton and the Americans had to establish the authority and reliability of the central, Federal, government, the leaders of the European Union, and of its member states, have to establish the trustworthiness of the EU’s institutions. And the record of the past ten years points to an apparent inability of the region’s political leaders to arrive at a conclusive resolution of the debt crisis that has bedevilled Europe since 2008. …

The central authorities – the Federal government in the American case, the institutions of the euro area and the EU in the European case – have to establish their ability to address crucial monetary and financial issues in a way acceptable to all member states. This requires some measure of responsibility for the behaviour of the member states themselves, which the central authority must counter-balance against the moral hazard that it creates.  In the American case, the country dealt with these linked problems over a sixty-year period. Assumption established the seriousness of the central government, but also created moral hazard. The refusal to assume the debts of defaulting states in the 1840s established the credibility of the Federal government’s no-bailout commitment. Europe today faces both of these problems, and the attempt to resolve them simultaneously has so far failed. Proposals to restructure debts are rejected as creating too much moral hazard, but the inability to come up with a serious approach to unsustainable debts has sapped the EU of most of its political credibility. Both aspects of central policy are essential: the central authorities must instil faith in the credibility of their commitments, and do so without creating unacceptable levels of moral hazard.

This is not, of course, to suggest that the European Union should assume the debts of its member states. Europe’s national governments have far greater capacity, and far greater resources, than did the nascent American states. But the lack of credibility of Europe’s central institutions is troubling, and is reminiscent of the poor standing of the new United States before 1789.

The US monetary and financial architecture evolved over decades, but in a country that was somewhat tied together by a powerful origin story–and that nevertheless had to fight a Civil War to remain a single country. The European Union's monetary and financial organization is evolving, too, but I'm not confident that the pressures of a globalized 21st-century economy will give it decades to resolve the political conflicts, build the institutions, and create the credibility that the euro needs if it is to be part of broadly shared economic stability and growth in Europe.