Update on Remittances

Remittances refer to money that emigrants send back to their country of origin, and in the context of global capital flows, they are a big deal. The report Migration and Remittances: Recent Developments and Outlook, published by the Global Knowledge Partnership on Migration and Development (KNOMAD) in April 2016, offers an overview of trends and patterns. (For the record, KNOMAD is funded by the governments of Germany, Sweden, and Switzerland, and administered by the World Bank.)

Consider four of the ways in which capital can move to the economies of developing countries: 1) foreign direct investment (that is, having an ownership share in the investment); 2) private debt and equity; 3) official development assistance; and 4) remittances. Back in the mid-1990s, these four were all very roughly the same size, or at least within shouting distance of each other. But while official development assistance has risen a bit, the other three categories have all grown dramatically, as shown in the figure.

Remittances are about the same size as flows of private debt and equity. And while remittances haven't grown as fast as foreign direct investment, they have risen much more steadily, without the peaks and valleys. Remittances aren't driven by the fluctuations and trend-chasing that affect foreign direct investment and private investments in debt and equity. But remittances are affected by broad economic changes: for example, the fall in the price of oil means that remittances from Russia dropped, while the improvement in the US economy in the last few years has helped to raise remittances moving from the US to Latin America.

One reason for the rise in remittances is simply that the total number of people who have migrated to a different country has risen, although the rise in the total global number of migrants of about 50% in the last 20 years doesn't come close to explaining the rise in remittances during that time.

Another change is that it has become cheaper and easier over time for a worker to send funds back to his or her country of origin. The decline in costs in the last few years is mild, but real. And I suspect that if comparable statistics were available for the 1990s, they would show that the task of sending funds home was considerably more complex and costly then.

Remittances don't get as much attention as they deserve. We're often focused on the international capital flows connected with governments, or with big global banks and businesses. But while remittances are very large overall, at $433 billion in 2015, they mainly consist of relatively small-scale payments happening between family and community networks. In many cases, remittances provide the cash for ordinary families to start a small business or pay for extra education. The report points out that when a natural disaster strikes a country, remittances to that country rise substantially as people around the world help out.

For a more detailed overview of this topic, I recommend Dean Yang's article on "Migrant Remittances" in the Spring 2011 issue of the Journal of Economic Perspectives as a starting point. (Full disclosure: I've labored in the fields as Managing Editor of JEP since the first issue in 1987.)

The Collective Action Problem of Resistance to Antibiotics

"We estimate that by 2050, 10 million lives a year and a cumulative 100 trillion USD of economic output are at risk due to the rise of drug-resistant infections if we do not find proactive solutions now to slow down the rise of drug resistance. Even today, 700,000 people die of resistant infections every year. Antibiotics are a special category of antimicrobial drugs that underpin modern medicine as we know it: if they lose their effectiveness, key medical procedures (such as gut surgery, caesarean sections, joint replacements, and treatments that depress the immune system, such as chemotherapy for cancer) could become too dangerous to perform. Most of the direct and much of the indirect impact of AMR [anti-microbial resistance] will fall on low and middle-income countries. It does not have to be this way. … The economic impact is also already material. In the US alone, more than two million infections a year are caused by bacteria that are resistant to at least first-line antibiotic treatments, costing the US health system 20 billion USD in excess costs each year."

This is from the report "Tackling Drug-Resistant Infections Globally: Final Report and Recommendations," from the Review on Antimicrobial Resistance that was set up by the UK government, funded by the Wellcome Trust and the UK Department of Health, and chaired by Jim O'Neill (who was chief economist at Goldman Sachs for many years and is known as the originator of the acronym "BRICs" to refer to the emerging economies of Brazil, Russia, India, and China). Background reports and supporting documentation are available here.

For economists, antibiotic resistance falls into the analytical category of collective action problems, which are situations where economic actors in pursuit of private gain have no incentive to take a social cost into account. Problems of air and water pollution can fall into this category. In the case of antibiotics, they clearly help many sick people and can help livestock gain weight, too. But those using antibiotics for private gain have no incentive to take into account that when antibiotics are commonly used, resistance to them evolves in a way that can make them less effective, or just plain ineffective. The report (on p. 16) includes a discussion of the issue of antibiotic resistance in the terms economists prefer to use: externalities, imperfect information, and public goods.

The policies to address this issue are conceptually straightforward. Over the longer term, provide incentives for companies to do research and development that can lead to new antibiotics (as well as other methods of fighting bacterial infections). Given that many existing antibiotics are off-patent and available in cheap generic versions, and given that doctors might prefer to hold fancy new antibiotics in reserve unless or until the current versions don't work, trying to create new antibiotics may not look like a very encouraging market to pursue without some additional policy steps. In the shorter term, though, the policies need to be about reducing the overly casual use of antibiotics. If antibiotics are used only when really needed, then the problem of antibiotic resistance can be mitigated. Here are a few of the steps along these lines that jumped out at me from the report.

1) In the past, many doctors have prescribed antibiotics on the "it can't hurt" philosophy, and while antibiotics are unlikely to hurt that particular patient, the broader social problem of antibiotic resistance can indeed hurt. Thus, one set of policies would encourage doctors to prescribe antibiotics only when really needed. "One study showed that in Belgium, campaigns to reduce antibiotic use during the winter flu season resulted in a 36 percent reduction in prescriptions. Over 16 years, the cumulative savings in drug costs alone amounted to around 130 Euros (150 USD) per Euro spent on the campaign." The key point here is that antibiotics only work against bacterial infections, and do nothing at all against viruses. The report points out that diarrhoeal illness kills about 1.1 million people per year in low and middle-income countries. However, about "70 percent of episodes of diarrhoeal illness are caused by viral, rather than bacterial infections, against which antibiotics are ineffective – and yet antibiotics will frequently be used as a treatment."

Perhaps the most important development here would be rapid diagnostic tools, so that doctors could tell more or less in real time–or perhaps within a few hours–whether an infection is bacterial and which specific bacterium is involved. This technology would mean that antibiotics could be used much less and targeted much better. As the report notes, that is not what happens now.

"When doctors and other medical professionals decide whether to prescribe an antibiotic, they usually use so-called ‘empirical’ diagnosis: they will use their expertise, intuition and professional judgement to ‘guess’ whether an infection is present and what is likely to be causing it, and thus the most appropriate treatment. In some instances, diagnostic tools are used later to confirm or change that prescription. This process has remained basically unchanged in decades: most of these tests are lab-based, and would look familiar to a doctor trained in the 1950s, using processes that originated in the 1860s. Bacteria must be cultured for 36 hours or more to confirm the type of infection and the drugs to which it is susceptible. An acutely ill patient cannot wait this long for treatment, and even when the health risks are not that high, most doctors’ surgeries and pharmacies are under time, patient and financial pressure, and must address patients’ needs much faster."

2) Take public health measures to avoid people getting sick in the first place, so that antibiotics are less needed for that reason. Especially in developing countries, major steps to reduce disease include better sanitation and clean water, along with vaccination campaigns. In developed countries, a main focus should be to reduce infections that arise in health care settings: "Across developed countries, between seven and 10 percent of all hospital inpatients will contract some form of healthcare-associated infection (HCAI), a figure that rises to one patient in three in intensive care units (ICUs). These levels of incidence are even higher in low and middle-income settings, where healthcare facilities can face extreme constraints, sometimes as fundamental as access to running water for cleaning and handwashing."

3) Dramatically reduce the use of antibiotics in agriculture, where they are often used not just to treat animals that are sick, but as a sort of all-purpose aid to keep animals from falling sick and to help them gain weight. These antibiotics often work their way into the environment–say, through disposal of animal waste products–and thus spur bacteria to become resistant. "The quantity of antibiotics used in livestock is vast, and often includes those medicines that are important for humans. In the US, for example, of the antibiotics defined as medically important for humans by the FDA, over 70 percent of the total volume used (by weight) are sold for use in animals. Many other countries are also likely to use more antibiotics in agriculture than in humans, but they do not even hold or publish the information."

Moving ahead with these kinds of policy steps should be an urgent priority. A family of bacteria resistant to the antibiotics usually saved for a last resort has recently been found in a US patient. (The scientific article on this discovery in the journal Antimicrobial Agents and Chemotherapy is available here.)

For two previous posts on antibiotic resistance, see:

Homage: Like many others, I suspect, I ran across this particular report on antibiotic resistance because of a cover story in the Economist magazine of May 21, 2016: the Economist leader is here; the more detailed article here.

The Economies of Africa: Will Bust Follow Boom?

The economies of sub-Saharan Africa face a big question. Growth of real GDP in the last 15 years has averaged about 5% per year, as compared to 2% per year back in the 1980s and 1990s. But was this rapid growth mainly a matter of high prices for oil and other commodities, combined with high levels of China-driven external investment? If so, then Africa's growth is likely to diminish sharply now that oil and commodity prices have fallen and China's growth has slowed. Or was Africa's rapid growth in the last 15 years built at least in part on sturdier and more lasting foundations? The June 2016 issue of Finance & Development, published by the International Monetary Fund, tackles this topic with a symposium of nine readable articles on "Africa: Growth's Ups and Downs." In addition, the African Economic Outlook 2016, an annual report produced by the African Development Bank, the OECD Development Centre and the United Nations Development Programme, provides an overview of the economic situation in Africa as well as a set of chapters on the theme of "Sustainable Cities and Structural Transformation."

The overall perspective seems to be that while growth rates across the countries of Africa seem certain to slow down, some of the rise in growth will persist–especially if various supportive public policy steps can be enacted. An article by Stephen Radelet in Finance & Development, "Africa's Rise–Interrupted?", provides an overview of this perspective.

In summing up the current situation, Radelet writes:

"At a deeper level, although high commodity prices helped many countries, the development gains of the past two decades—where they occurred—had their roots in more fundamental factors, including improved governance, better policy management, and a new generation of skilled leaders in government and business, which are likely to persist into the future. … Overall growth is likely to slow in the next few years. But in the long run, the outlook for continued broad development progress is still solid for many countries in the region, especially those that diversify their economies, increase competitiveness, and further strengthen institutions of governance. … The view that Africa’s surge happened only because of the commodity price boom is too simplistic. It overlooks the acceleration in growth that started in 1995, seven years before commodity prices rose; the impact of commodity prices, which varied widely across countries (and hurt oil importers); and changes in governance, leadership, and policy that were critical catalysts for change."

Here's a graphic showing some of the main changes across Africa in the last couple of decades.

Radelet emphasizes that the countries of Africa are diverse, and economic policies and development patterns across the countries will not be identical. But he offers five overall themes for continued economic progress in Africa with relatively broad applicability.

First up is adroit macroeconomic management. Widening trade deficits are putting pressure on foreign exchange reserves and currencies, tempting policymakers to try to artificially hold exchange rates stable. Parallel exchange rates have begun to emerge in several countries. But since commodity prices are expected to remain low, defending fixed exchange rates is likely to lead to even bigger and more difficult exchange rate adjustments down the line. As difficult as it may be, countries must allow their currencies to depreciate to encourage exports, discourage imports, and maintain reserves. At the same time, budget deficits are widening, and with borrowing options limited, closing the gaps requires difficult choices. …

Second, countries must move aggressively to diversify their economies away from dependence on commodity exports. Governments must establish more favorable environments for private investment in downstream agricultural processing, manufacturing, and services (such as data entry), which can help expand job creation, accelerate long-term growth, reduce poverty, and minimize vulnerability to price volatility. … The exact steps will differ by country, but they begin with increasing agricultural productivity, creating more effective extension services, building better farm-to-market roads, ensuring that price and tariff policies do not penalize farmers, and investing in new seed and fertilizer varieties. Investments in power, roads, and water will be critical. As in east Asia, governments should coordinate public infrastructure investment in corridors, parks, and zones near population centers to benefit firms through increased access to electricity, lower transportation costs, and a pool of nearby workers, which can significantly reduce production costs. … At the same time, the basic costs of doing business remain high in many countries. To help firms compete, governments must lower tariff rates, cut red tape, and eliminate unnecessary regulations that inhibit business growth. Now is the time to slash business costs and help firms compete domestically, regionally, and globally.

Third, Africa’s surge of progress cannot persist without strong education and health systems. The increases in school enrollment and completion rates, especially for girls, are good first steps. But school quality suffers from outdated curricula, inadequate facilities, weak teacher training, insufficient local control, teacher absenteeism, and poor teacher pay. … Similarly, health systems remain weak, underfunded, and overburdened …

Fourth, continued long-term progress requires building institutions of good governance and deepening democracy. The transformation during the past two decades away from authoritarian rule is remarkable, but it remains incomplete. Better checks and balances on power through more effective legislative and judicial branches, increased transparency and accountability, and strengthening the voice of the people are what it takes to sustain progress. …

Finally, the international community has an important role to play. Foreign aid has helped support the surge of progress, and continued assistance will help mitigate the impacts of the current slowdown. Larger and longer-term commitments are required, especially for better-governed countries that have shown a strong commitment to progress. To the extent possible, direct budget support will help ease adjustment difficulties for countries hit hardest by commodity price shocks. In addition, donor financing for infrastructure—preferably as grants or low-interest loans—will help build the foundation for long-term growth and prosperity. Meanwhile, this is not the time for rich countries to turn inward and erect trade barriers. Rather, wealthy nations should encourage further progress and economic diversification by reducing barriers to trade for products from African countries whose economies are least developed.

One possible reaction to a list like that one is "yikes." If countries of Africa need all of those things to go right, then optimism about Africa's economic future begins to look like foolhardiness. But the other possible reaction is that not everything needs to go right all the time for ongoing progress to happen.

The African Economic Outlook 2016 fleshes out many of these themes in more detail, and offers some of its own. One theme the report emphasizes is the centrality of urban areas to the development path in many African countries (citations omitted from the quotation):

The African continent is urbanising fast. The share of urban residents has increased from 14% in 1950 to 40% today. By the mid-2030s, 50% of Africans are expected to become urban dwellers … However, urbanisation is a necessary but insufficient condition for structural transformation. Many countries that are more than 50% urbanised still have low-income levels. Urbanisation per se does not bring economic growth, though concentrating economic resources in one place can bring benefits. Further, rapid urbanisation does not necessarily correlate with fast economic growth: Gabon has a high annual urbanisation rate at 1 percentage point despite a negative annual economic growth rate of -0.6% between 1980 and 2011.

In addition, the benefits of agglomeration greatly depend on the local context, including the provision of public goods. Public goods possess non-rivalry and non-excludable benefits. Lack of sufficient public goods or their unsustainable provision can impose huge costs on third parties who are not necessarily involved in economic transactions. Congestion, overcrowding, overloaded infrastructure, pressure on ecosystems, higher costs of living, and higher labour and property costs can offset the benefits of concentrating economic resources in one place. These negative externalities tend to increase as cities grow. This is especially true if urban development is haphazard and public investment does not maintain and expand essential infrastructure. Dysfunctional systems, gridlocks, power cuts and insecure water supplies increase business costs, reduce productivity and deter private investment. In OECD countries, cities beyond an estimated 7 million inhabitants tend to generate such diseconomies of agglomeration. Hence, the balance between agglomeration economies and diseconomies may have an important influence on whether city economies continue to grow, stagnate or begin to decline.

The report also comments on what it calls "three-sector" development theory, which is the notion that economies move from being predominantly agricultural, to growth in manufacturing, to growth in services. In the context of African nations, it's not clear how economies with large oil or mineral resources fit into this framework, and in a world economy with rapidly growing robotics capabilities, it's not clear that low-wage manufacturing can work as a development path across Africa in the way that it did in, say, many parts of Asia. Here's a quick discussion of sectors of growth across Africa:

An examination of the fastest-growing African countries over the past five years reveals very different sector patterns (Table 1.2). In Nigeria, structural changes seem to be in accordance with traditional three-sector theory, as shares of the primary sector declined while those of other sectors increased. The share of agriculture also declined in many other countries, but increased in Kenya and Tanzania. The share of extractive industries declined in some countries but increased in others as new production started and boosted growth (oil in Ghana and iron-ore mining in Sierra Leone). The share of manufacturing increased in only a few countries (Niger, Nigeria and Uganda), but remained broadly constant or even declined in many others. In contrast, the construction and service sectors were important drivers of growth in many countries. In short, African countries are achieving growth performance with quite different sectoral patterns. However, the simplistic three-sector theory can be misleading as productivity is not only raised by factor reallocation between sectors, but also through modernisation and reallocation within sectors, as well as via better linkages between sectors. In particular, higher productivity in agriculture can boost food processing and leather processing and manufacturing to the benefit of both sectors.

For me, the ongoing theme in all discussions of Africa's economic future is an oscillation between encouragement over the progress that has occurred and a disheartened recognition of how much remains to be done. For example, the report includes a figure showing that hotel rooms across the countries of sub-Saharan Africa have grown by two-thirds in the last five years.

Hotels are in some ways a proxy for a certain level of business development, mobility between cities, local income levels, and tourism potential, so this rise is promising. On the other side, the total for all of sub-Saharan Africa is roughly 50,000 hotel rooms, and for comparison, the city of Las Vegas alone claims to have almost 150,000 hotel/motel rooms.

For those who want more, here are links to the full list of articles about Africa in the June 2016 Finance & Development:

Allocation of Scarce Elevators

In a perfect world, an elevator would always be waiting for me, and it would always take me to my desired floor without stopping along the way. But economics is about scarce resources. What about the problem of scarce elevators?

Jesse Dunietz offers an accessible overview of how such decisions are made in "The Hidden Science of Elevators: How powerful algorithms decide when that elevator car is finally going to come pick you up," which appeared in Popular Mechanics (May 24, 2016). For those who want all the details, Gina Barney and Lutfi Al-Sharif have just published the second edition of the Elevator Traffic Handbook: Theory and Practice, which with its 400+ pages seems to be the definitive book on this subject (although when I checked, there were still zero reviews of the book on Amazon). Some of the tome can be sampled here via Google. For example, it notes at the start:

"The vertical transportation problem can be summarised as the requirement to move a specific number of passengers from their origin floors to their respective destination floors with the minimum time for passenger waiting and travelling, using the minimum number of lifts, core space, and cost, as well as using the smallest amount of energy."

This problem of allocating elevators is complex in detail: not just the basics like the number and size of elevators, the total number of passengers, and the height of the building, but also questions of the usual timing of peak loads of passengers. Moreover, the problem is complex because passengers prefer short wait and travel times, which are costs of time imposed on them, while building owners prefer a smaller cost for elevators, which they pay. It turns out that many people would rather have a shorter waiting time for an elevator, even if it might mean a longer travel time once inside the elevator. But although the problem of allocating elevators may not have a single best answer, some answers are better than others.

Of course, in the early days of elevators, they often had an actual human operator. When automated elevators arrived, and up until about a half-century ago, many of them operated rather like a bus route, as Dunietz explains in Popular Mechanics: that is, they went up and down between floors on a preset timetable. This meant that passengers just had to wait for the elevator to cycle around to their floor, and the elevator ran even when it was empty.

In the mid-1960s, the "elevator algorithm" was developed. Dunietz describes it with two rules:

  1. As long as there's someone inside or ahead of the elevator who wants to go in the current direction, keep heading in that direction.
  2. Once the elevator has exhausted the requests in its current direction, switch directions if there's a request in the other direction. Otherwise, stop and wait for a call.

Not only is this algorithm still pretty common for elevators, but it is also used to govern the motion of disk drive heads when facing read and write requests–and the algorithm has its own Wikipedia entry.
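The two rules above amount to the classic SCAN scheduling discipline. Here is a minimal sketch in Python; the function name and the representation of pending requests as a set of floors are my own illustrative choices, not taken from Dunietz's article or any real controller:

```python
# A minimal sketch of the two-rule "elevator algorithm" (SCAN).
# direction is +1 (up), -1 (down), or 0 (idle); requests is the set of
# floors where someone is waiting or wants to get off.

def next_move(current_floor, direction, requests):
    """Return the elevator's next direction under the two rules."""
    if not requests:
        return 0  # no calls anywhere: stop and wait
    # Rule 1: keep going while some request lies ahead in this direction.
    ahead = [f for f in requests if (f - current_floor) * direction > 0]
    if direction != 0 and ahead:
        return direction
    # Rule 2: requests in this direction are exhausted; reverse if any
    # remain on the other side, otherwise stop.
    if any(f > current_floor for f in requests):
        return 1
    if any(f < current_floor for f in requests):
        return -1
    return 0  # the only request is the current floor itself

# Car at floor 5 heading up, with calls at floors 7 and 2:
print(next_move(5, +1, {7, 2}))  # → 1 (keeps heading up toward 7)
```

Real controllers layer much more on top, but applied step by step as the car moves, these two rules are the whole core loop.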

However, if you think about how the elevator algorithm works in tall buildings, you realize that it will spend a lot of time in the middle floors, and the waits at the top and the bottom can be extreme. Moreover, if a building has a bunch of elevators all responding to the same signals, all the elevators tend to bunch up near the middle floors, even leapfrogging each other and trying to answer the same signals. So the algorithm was tweaked so that only one elevator would respond to any given signal. Buildings were sometimes divided, so that some elevators only ran to certain groups of floors. Also, when an elevator was not in use, it would automatically return to the lobby (or some other high-departure floor).

By the 1970s, it became possible to encode the rules for allocating elevators into software, which can be tweaked and adjusted. For example, it became possible to use "estimated time of arrival" calculations (for example, here), which figure out which car can respond to a call first. Such algorithms can also take energy use or length-of-journey or other factors into account.

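The estimated-time-of-arrival idea can be illustrated with a toy calculation. The per-floor and per-stop constants below, and the crude timing model, are invented purely for illustration; real systems calibrate against the actual installation:

```python
# Toy "estimated time of arrival" dispatch: assign a hall call to
# whichever car can reach it soonest, given each car's current position
# and its already-committed stops. Constants are illustrative only.

SECONDS_PER_FLOOR = 2.0
SECONDS_PER_STOP = 10.0

def eta(car_floor, committed_stops, call_floor):
    """Crude ETA: travel time plus a fixed penalty for each committed
    stop lying between the car and the call floor."""
    travel = abs(call_floor - car_floor) * SECONDS_PER_FLOOR
    lo, hi = sorted((car_floor, call_floor))
    intervening = sum(1 for s in committed_stops if lo < s < hi)
    return travel + intervening * SECONDS_PER_STOP

def dispatch(cars, call_floor):
    """cars maps a car name to (current floor, set of committed stops);
    return the car with the lowest estimated arrival time."""
    return min(cars, key=lambda c: eta(cars[c][0], cars[c][1], call_floor))

cars = {"A": (1, {3}), "B": (9, set())}
print(dispatch(cars, 10))  # → B: closer, and no intervening stops
```

Swapping the cost function—adding a term for energy use, say, or journey length—changes the dispatch decision without touching the rest of the logic, which is exactly the flexibility software brought.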
Another big step forward in the last decade or so is "destination dispatch": when you call the elevator, you also tell it which floor you will be going to. The elevator system can then group together people heading for similar floors. An article by Melanie D.G. Kaplan on ZDNet.com back in 2012 talks about how this kind of system created huge gains for the Marriott Marquis in Times Square in New York City. Before the system, people could wait 20-30 minutes for an elevator to show up. After the system was installed, there can still be some minutes of waiting at peak times, but as one measure, the number of written complaints about elevator delays went from five per week (!) to zero.
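A deliberately naive sketch of the grouping idea behind destination dispatch (real systems solve a much richer assignment problem that also weighs car positions and waiting times; the simple batching below is just to show the principle):

```python
# Naive destination-dispatch sketch: sort registered passengers by
# destination floor, then batch them so each car serves a narrow band
# of floors and makes few stops. Names and structure are illustrative.

def group_by_destination(requests, car_capacity):
    """requests: list of (passenger, destination_floor) pairs.
    Returns batches of passengers with nearby destinations."""
    ordered = sorted(requests, key=lambda r: r[1])
    return [ordered[i:i + car_capacity]
            for i in range(0, len(ordered), car_capacity)]

lobby = [("p1", 12), ("p2", 3), ("p3", 11), ("p4", 4), ("p5", 12)]
for batch in group_by_destination(lobby, car_capacity=3):
    print([p for p, _ in batch])
# → ['p2', 'p4', 'p3']   (low-floor passengers share a car)
#   ['p1', 'p5']         (the floor-11/12 passengers go together)
```

The payoff is fewer stops per trip: each car's passengers get off within a narrow band of floors, which is what cut the waits at the Marriott Marquis.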

The latest thing, as one might expect, is "machine learning": that is, define for the elevator system what "success" looks like, and then let the elevator system experiment and learn how to allocate elevators, not just at a given moment in time, but by remembering how elevator traffic evolves from day to day and adjusting for that as well. The definition of "success" may vary across buildings: for example, "success" in a system of hospital elevators might mean that urgent health situations get an immediate elevator response, even if waiting time for others is increased. The machine learning approach leads to academic papers like "The implementation of reinforcement learning algorithms on the elevator control system," and ongoing research published in various places like the proceedings of the annual conferences of the International Society of Elevator Engineers, or publications like the IEEE Transactions on Automation Science and Engineering.

From an economic point of view, it will be intriguing to see how the machine learning rules evolve. In particular, it will be interesting to see if the machine learning rules that address the various tradeoffs of wait time, travel time, handling peak loads, and energy cost can be formulated in terms of the marginal costs and benefits framework that economists prefer–and whether the rules for elevator traffic find a use in organizing other kinds of traffic, from cars to online data.

US Corporate Stock: The Transition in Who Owns It

It used to be that most US corporate stock was held by taxable US investors. Now, most corporate stock is owned by a mixture of tax-deferred retirement accounts and foreign investors. Steven M. Rosenthal and Lydia S. Austin describe the transition in "The Dwindling Taxable Share of U.S. Corporate Stock," which appeared in Tax Notes (May 16, 2016, pp. 923-934), and is available here at the website of the ever-useful Tax Policy Center.

The gray area in the figure below shows the share of total US corporate equity owned by taxable accounts. A half-century ago, in the late 1960s, more than 80% of all corporate stock was held in taxable accounts; now, it's around 25%. The blue area shows the share of US corporate stock held by retirement plans, which is now about 35% of the total. The area above the blue line at the top of the figure shows the share of US corporate stock owned by foreign investors, which has now risen to 25%.

A few quick thoughts here:

1) These kinds of statistics require doing some analysis and extrapolation from various Federal Reserve data sources. Those who want details on methods should head for the article. But the results here are reasonably consistent with previous analysis.

2) The figures here are all about ownership of US corporate stock; that is, they don't have anything to say about US ownership of foreign stock.

3) One dimension of the shift described here is that the ownership of US stock is moving from taxable to less-taxable forms. Stock returns accumulate untaxed in retirement accounts until the funds are actually withdrawn and spent, which can happen decades later and (because post-retirement income is lower) at a lower tax rate. Foreigners who own US stock pay very little in US income tax–instead, they are responsible for taxes back in their home country.

4) There is an ongoing dispute about how to tax corporations. Economists are fond of pointing out that a corporation is just an organization, so when it pays taxes the money must come from some actual person, and the usual belief is that it comes from investors in the firm. If this is true, then cutting corporate taxes a half-century ago would have tended to raise the returns for taxable investors. However, cutting corporate taxes now would tend to raise returns for untaxed or lightly-taxed retirement funds and foreign investors. The tradeoffs of raising or lowering corporate taxes have shifted.

Lessons for the Euro from Early American History

The euro is still a very young currency. When watching the struggles of the European Union over the euro, it's worth remembering that it took the US dollar a long time to become a functional currency. Jeffry Frieden looks at "Lessons for the Euro from Early American Monetary and Financial Experience," in a contribution written for the Bruegel Essay and Lecture Series and published in May 2016. Frieden's lecture on the paper can be watched here. Here's how Frieden starts:

"Europe’s central goal for several decades has been to create an economic union that can provide monetary and financial stability. This goal is often compared, both by those that aspire to an American-style fully federal system and by those who would like to stop short of that, to the long-standing monetary union of the United States. The United States, after all, created a common monetary policy, and a banking union with harmonised regulatory standards. It backs the monetary and banking union with a series of automatic fiscal stabilisers that help soften the potential problems inherent in inter-regional variation.

Easy celebration of the successful American union ignores the fact that it took an extremely long time to accomplish. In fact, the completion of the American monetary, fiscal, and financial union is relatively recent. Just how recent depends on what one counts as an economic and monetary union, and how one counts. Despite some early stops and starts, the United States did not have an effective national currency until 75 years after the Constitution was adopted, starting with the National Banking Acts of 1863 and 1864. And only after another fifty years did the country have a central bank. Financial regulations have been fragmented since the founding of the Republic; many were federalised in the 1930s, but many remain decentralised. And most of the fiscal federalist mechanisms touted as prerequisites for a successful monetary union date to the 1930s at the earliest, and in some cases to the 1960s. The creation and completion of the American monetary and financial union was a long, laborious and politically conflictual process.

Frieden focuses in particular on some seminal events from the establishment of the US dollar. For example, there's a discussion of "Assumption," the policy under which Alexander Hamilton had "the Federal government recognise the state debts and exchange them for Federal obligations, which would be serviced. This meant that the Federal governments would assume the debts of the several states and pay them off at something approaching face value." But after the establishment of a federal market for debt, the US government in the 1840s decided that it would not assume the debt of bankrupt states. A variety of other episodes are put into a broader context. In terms of overall lessons from early US experience for Europe as it seeks to establish the euro, Frieden suggests that while Europe has created the euro, existing European institutions are not yet strong enough to sustain it:

One of the problems that Europe has faced in the past decade is the relative weakness of European institutions. Americans and foreigners had little reason to trust the willingness or ability of the new United States government to honour its obligations. Likewise, many in Europe and elsewhere have doubts about the seriousness with which EU and euro-area commitments can be taken. Just as Hamilton and the Americans had to establish the authority and reliability of the central, Federal, government, the leaders of the European Union, and of its member states, have to establish the trustworthiness of the EU’s institutions. And the record of the past ten years points to an apparent inability of the region’s political leaders to arrive at a conclusive resolution of the debt crisis that has bedevilled Europe since 2008. …

The central authorities – the Federal government in the American case, the institutions of the euro area and the EU in the European case – have to establish their ability to address crucial monetary and financial issues in a way acceptable to all member states. This requires some measure of responsibility for the behaviour of the member states themselves, which the central authority must counter-balance against the moral hazard that it creates.  In the American case, the country dealt with these linked problems over a sixty-year period. Assumption established the seriousness of the central government, but also created moral hazard. The refusal to assume the debts of defaulting states in the 1840s established the credibility of the Federal government’s no-bailout commitment. Europe today faces both of these problems, and the attempt to resolve them simultaneously has so far failed. Proposals to restructure debts are rejected as creating too much moral hazard, but the inability to come up with a serious approach to unsustainable debts has sapped the EU of most of its political credibility. Both aspects of central policy are essential: the central authorities must instil faith in the credibility of their commitments, and do so without creating unacceptable levels of moral hazard.

This is not, of course, to suggest that the European Union should assume the debts of its member states. Europe’s national governments have far greater capacity, and far greater resources, than did the nascent American states. But the lack of credibility of Europe’s central institutions is troubling, and is reminiscent of the poor standing of the new United States before 1789.

The US monetary and financial architecture evolved over decades, but in a country that was somewhat tied together with a powerful origin story–and that nevertheless had to fight a Civil War to remain a single country. The European Union's monetary and financial organization is evolving, too, but I'm not confident that the pressures of a globalized 21st-century economy will give Europe decades to resolve the political conflicts, build the institutions, and create the credibility that the euro needs if it is to be part of broadly shared economic stability and growth in Europe.

Interview with Matthew Gentzkow: Media, Brands, Persuasion

Douglas Clement has another of his thoughtful and revealing interviews with economists, this one with Matthew Gentzkow. It appeared online in The Region, a publication of the Federal Reserve Bank of Minneapolis, on May 23, 2016. For a readable overview of Gentzkow's work, a useful starting point is an essay by Andrei Shleifer titled "Matthew Gentzkow, Winner of the 2014 Clark Medal," published in the Winter 2015 issue of the Journal of Economic Perspectives. The Clark Medal, for those not familiar with it, is a prestigious award given each year by the American Economic Association "to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge." Here are some answers from Gentzkow in the interview with Clement that caught my eye.

It seems to me that many discussions of politics neglect the entertainment factor. Politics isn't just about 30-page position papers and carefully worded statements. For lots of citizens and voters–and yes, for lots of politicians, too–it's a fun activity for observers and participants. Thus, when you think about how the spread of television (or newer media) affects voting, it's not enough just to talk about how media affect the information available to voters. It also matters if the new media just give the voters an alternative and nonpolitical source of entertainment. Here's a comment from Gentzkow on his research in this area:

I started thinking about this huge, downward trend that we’ve seen since about the middle of the 20th century in voter turnout and political participation. It’s really around the time that TV was introduced that that trend in the time series changes sharply, so I thought TV could have played a role.

Now, a priori, you could easily imagine it going either way. There’s a lot of evidence before and since that in many contexts, giving people more information has a very robust positive effect on political participation and voting. So, if you think of TV as the new source of information, a new technology for delivering political information, you might expect the effect to be positive. And, indeed, many people at the time predicted that this would be a very good thing for political participation.

On the other hand, TV isn’t just political information; it’s also a lot of entertainment. And in that research, I found that what seemed to be true is that the more important effect of TV is to substitute for—crowd out—a lot of other media like newspapers and radio that on net had more political content. Although there was some political content on TV, it was much smaller, and particularly much smaller for local or state level politics, which obviously the national TV networks are not going to cover.

So, we see that when television is introduced, indeed, voter turnout starts to decline. We can use this variation across different places and see that that sharp drop in voter turnout coincides with the timing of when TV came in. That drop is especially big in local elections. A lot of new technologies … are pushing people toward paying less attention to local politics, local issues, local communities.

People in different geographic areas show, on average, different consumption patterns. For example, Coke is more popular in some places, and Pepsi in others. Or imagine that someone moves from an area with high average health care spending to one with low average spending. Gentzkow and co-authors looked at people who moved from one geographic area to another, and at how certain aspects of their consumption changed. Were people's preferences firmly established based on their previous location? Or did their preferences shift when they were in a new location? Here's how Gentzkow describes the differences between shifts in consumption related to brand preferences and shifts related to health care:

Well, imagine watching somebody move, first looking at how their brand preferences change; say they move from a Coke place to a Pepsi place and you see how their soft drink preferences change. Then imagine somebody moving from a place where there’s low spending on health care to a place with high spending, and you see how things change. In what way are those patterns different?

The first thing you can look at is how big the jump is when somebody moves. That’s sort of a direct measure of how important is the stuff you are carrying with you relative to the factors that are specific to the places. How important is your brand capital relative to the prices and the advertising? Or in a health care context, how important are the fixed characteristics of people that are different across places, relative to the doctors, the hospitals and the treatment styles across places. It turns out the jumps are actually very similar. In both cases, you close about half the gap between the place you start and the place you’re going, and so the share due to stuff people carry with them—their preference capital or their individual health—is about the same.

What’s very different and was a huge surprise to me, not what I would have guessed, is that with brands, you see a slow-but-steady convergence after people move; so, movers steadily buy more and more Pepsi the longer they live there. But in a health care context, we don’t see that at all; your health care consumption changes a discrete amount when you first move, but the trend is totally flat thereafter—it doesn’t converge at all.

Gentzkow's results on shifts in health care patterns may have some special applicability to thinking about how people react to finding themselves in a different and lower-spending health care system. Say that the change to this new system wasn't the result of a geographic shift–say, moving from a high-cost metro area where average spending on health care might be triple what it is in a low-cost area–but instead involved a change in policy. These results might imply that the policy reform would bring down health spending in a one-time jump, but then spending for the group that was used to being at a higher level would not continue to fall, as might have been predicted.
Finally, here's an observation in passing from Gentzkow about social media. Are the new media a source of concern because they are not interactive enough (say, as compared to personal communication) or because they are too interactive and therefore addicting (say, as compared to television)? Here's Gentzkow:

A lot of people are complaining about social media now. But think back to what they were saying back when kids were all watching TV: “It’s this passive thing where kids sit there and zone out, and they’re not thinking, they’re alone, they’re not communicating!” Now, suddenly, a thing that kids are spending lots of their time doing is interacting with other kids. They’re writing text messages and posts and creating pictures and editing them on Instagram. It’s certainly not passive; it’s certainly not solitary. It has its own risks perhaps, but not the risks that worried people about TV. I think there’s a tendency, no matter what the new technology is, to wring our hands about its terrible implications. Kind of amazing how people have turned on a dime from worrying about one thing to worrying about its exact opposite.

The Tradeoffs of Parking Spots

Sometimes it seems as if every proposal for a new residential or commercial building in an urban or suburban area comes neatly packaged with a dispute over parking. Will the new development provide a minimum number of parking spaces? Will it be harder for those already in the area to find parking? How should the flow of drivers in and out of the parking area be arranged? Of course, all of these questions presume that cars and drivers need and deserve to be placed front and center in development decisions.

Donald Shoup, an urban economist who focuses on parking issues, discusses this focus in "Cutting the Cost of Parking Requirements," an essay in the Spring 2016 issue of Access, a journal published by a research center on surface transportation issues run by a number of University of California schools. Shoup starts this way:

At the dawn of the automobile age, suppose Henry Ford and John D. Rockefeller had hired you to devise policies to increase the demand for cars and gasoline. What planning regulations would make a car the obvious choice for most travel? First, segregate land uses (housing here, jobs there, shopping somewhere else) to increase travel demand. Second, limit density at every site to spread the city, further increasing travel demand. Third, require ample off-street parking everywhere, making cars the default way to travel.

American cities have unwisely embraced each of these car-friendly policies, luring people into cars for 87 percent of their daily trips. Zoning ordinances that segregate land uses, limit density, and require lots of parking create drivable cities but prevent walkable neighborhoods. Urban historians often say that cars have changed cities, but planning policies have also changed cities to favor cars over other forms of transportation.

Minimum parking requirements create especially severe problems. In The High Cost of Free Parking, I argued that parking requirements subsidize cars, increase traffic congestion and carbon emissions, pollute the air and water, encourage sprawl, raise housing costs, degrade urban design, reduce walkability, damage the economy, and exclude poor people. To my knowledge, no city planner has argued that parking requirements do not have these harmful effects. Instead, a flood of recent research has shown they do have these effects. We are poisoning our cities with too much parking. …

Parking requirements reduce the cost of owning a car but raise the cost of everything else. Recently, I estimated that the parking spaces required for shopping centers in Los Angeles increase the cost of building a shopping center by 67 percent if the parking is in an aboveground structure and by 93 percent if the parking is underground.

Developers would provide some parking even if cities did not require it, but parking requirements would be superfluous if they did not increase the parking supply. This increased cost is then passed on to all shoppers. For example, parking requirements raise the price of food at a grocery store for everyone, regardless of how they travel. People who are too poor to own a car pay more for their groceries to ensure that richer people can park free when they drive to the store. …

A single parking space, however, can cost far more to build than the net worth of many American households. In recent research, I estimated that the average construction cost (excluding land cost) for parking structures in 12 American cities in 2012 was $24,000 per space for aboveground parking, and $34,000 per space for underground parking.
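Shoup's per-space figures make it easy to see how a zoning minimum compounds into a large fixed cost. Here is a minimal sketch using the per-space costs quoted above; the building size and the requirement of 4 spaces per 1,000 square feet are hypothetical illustration, not figures from the essay.

```python
# Per-space construction costs (2012 USD, excluding land), from Shoup's estimates.
COST_PER_SPACE = {"aboveground": 24_000, "underground": 34_000}

def required_parking_cost(floor_area_sqft, spaces_per_1000_sqft, structure):
    """Construction cost of the parking a zoning minimum would require."""
    spaces = floor_area_sqft / 1000 * spaces_per_1000_sqft
    return spaces * COST_PER_SPACE[structure]

# A hypothetical 100,000 sq ft shopping center required to provide
# 4 spaces per 1,000 sq ft needs 400 spaces: $9.6M aboveground,
# $13.6M underground, before a single square foot of retail is built.
above = required_parking_cost(100_000, 4, "aboveground")   # 9,600,000
below = required_parking_cost(100_000, 4, "underground")   # 13,600,000
```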

Shoup discusses California legislation that seeks to put a cap on minimum parking requirements. You can imagine how welcome this idea is. Another one of Shoup's parking projects is discussed by Helen Fessenden in "Getting Unstuck," which asks: "Can smarter pricing provide a way out of clogged highways, packed parking, and overburdened mass transit?" Fessenden's article appears in the Fourth Quarter 2015 issue of Econ Focus, which is published by the Federal Reserve Bank of Richmond. On the subject of parking, she writes:

Economist Don Shoup at the University of California, Los Angeles has spent decades researching the inefficiencies of the parking market — including the high cost of minimum parking requirements — but he is probably best known for his work on street parking. In 2011, San Francisco applied his ideas in a pilot project to set up "performance pricing" zones in its crowded downtown, and similar projects are now underway in numerous other cities — including, later this spring, in D.C. …

"I had always thought parking was an unusual case because meter prices deviated so much from the market prices," says Shoup. "The government was practically giving away valuable land for free. Why not set the price for on-street parking according to demand, and then use the money for public services?"

Taking a cue from this argument, San Francisco converted its fixed-price system for on-street parking in certain zones into "performance parking," in which rates varied by the time of day according to demand. In its initial run, the project, dubbed SFpark, equipped its meters with sensors and divided the day into three different price periods, with the option to adjust the rate in 25-cent increments, with a maximum price of $6 an hour. The sensors then gathered data on the occupancy rates on each block, which the city analyzed to see whether and how those rates should be adjusted. Its goal was to set prices to achieve target occupancy — in this case, between 60 percent and 80 percent — at all times. There was no formal model to predict pricing; instead, the city adjusted prices every few months in response to the observed occupancy to find the optimal rates.
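The adjustment rule described above amounts to a simple feedback loop, which can be sketched as follows. This is an illustration, not SFpark's actual algorithm: the 60-80 percent occupancy band, 25-cent step, and $6 cap come from the passage, while the floor rate and the one-step-per-review logic are my assumptions.

```python
# Sketch of SFpark-style "performance pricing": nudge the hourly meter rate
# toward a target occupancy band, one 25-cent step per review period.
STEP, MAX_RATE, MIN_RATE = 0.25, 6.00, 0.25  # floor rate is an assumption

def adjust_rate(rate, occupancy, lo=0.60, hi=0.80):
    """Raise the rate when a block is too full, cut it when too empty."""
    if occupancy > hi:
        rate += STEP
    elif occupancy < lo:
        rate -= STEP
    return min(MAX_RATE, max(MIN_RATE, rate))

# A crowded block (90% occupied) gets pricier; an underused one gets cheaper;
# a block inside the target band is left alone.
adjust_rate(2.00, 0.90)  # -> 2.25
adjust_rate(2.00, 0.40)  # -> 1.75
adjust_rate(2.00, 0.70)  # -> 2.00 (no change)
```

Repeating this every few months is what lets observed occupancy, rather than a formal demand model, discover the market-clearing rate for each block.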

The results: In the first two years of the project, the time it took to find a spot fell by 43 percent in the pilot areas, compared with a 13 percent fall on the control blocks. Pilot areas also saw less "circling," as vehicle miles traveled dropped by 30 percent, compared with 6 percent on the control blocks. Perhaps most surprising was that the experiment didn't wind up costing drivers more, on net, because demand was more efficiently dispersed. Parking rates went up 31 percent of the time, dropped in another 30 percent of cases, and stayed flat for the remaining 39 percent. The overall average rate actually dropped by 4 percent.

A summary of the 2014 evaluation report for the SFpark pilot study is available here.

For many of us, parking spots are just a taken-for-granted part of the scenery. Shoup makes you see parking in a different way. Space is scarce in urban areas, and in many parts of suburban areas, too. Parking uses space. Next time you are circling a block looking for parking, or navigating a city street made narrower because cars are parked on both sides, or walking down a sidewalk corridor between buildings on one side and parked cars on the other, or wending your way in and out of a parking ramp, it's worth recognizing the tradeoffs of requiring and underpricing parking spaces.

Telemedicine

The American College of Physicians has officially endorsed "telemedicine," which refers to using technology to connect a health care provider and a patient who aren't in the same place. An official statement of the ACP policy recommendations and a background position paper, written by Hilary Daniel and Lois Snyder Sulmasy, appear in the Annals of Internal Medicine (November 17, 2015, volume 163, number 10). The same issue includes an editorial on "The Hidden Economics of Telemedicine," by David Asch, emphasizing that some of the most important costs and benefits of telemedicine are not about delivering the same care in an alternative way. For starters, here are some comments from the background paper (with footnotes and references omitted for readability):

Telemedicine can be an efficient, cost-effective alternative to traditional health care delivery that increases the patient's overall quality of life and satisfaction with their health care. Data estimates on the growth of telemedicine suggest a considerable increase in use over the next decade, increasing from approximately 350 000 to 7 million by 2018. Research analysis also shows that the global telemedicine market is expected to grow at an annual rate of 18.5% between 2012 and 2018. … [B]y the end of 2014, an estimated 100 million e-visits across the world will result in as much as $5 billion in savings for the health care system. As many as three quarters of those visits could be from North American patients. …

Telemedicine has been used for over a decade by Veterans Affairs; in fiscal year 2013, more than 600 000 veterans received nearly 1.8 million episodes of remote care from 150 VHA medical centers and 750 outpatient clinics. … The VHA's Care Coordination/Home Telehealth program, with the purpose of coordinating care of veteran patients with chronic conditions, grew 1500% over 4 years and saw a 25% reduction in the number of bed days, a 19% reduction in numbers of hospital readmissions, and a patient mean satisfaction score of 86% …

The Mayo Clinic telestroke program uses a “hub-and-spoke” system that allows stroke patients to remain in their home communities, considered a “spoke” site, while a team of physicians, neurologists, and health professionals consult from a larger medical center that serves as the “hub” site. A study on this program found that a patient treated in a telestroke network, consisting of 1 hub hospital and 7 spoke hospitals, reduced costs by $1436 and gained 0.02 years of quality-adjusted life-years over a lifetime compared with a patient receiving care at a rural community hospital … 

The Antenatal and Neonatal Guidelines, Education and Learning System program at the University of Arkansas for Medical Sciences used telemedicine technologies to provide rural women with high-risk pregnancies access to physicians and subspecialists at the University of Arkansas. In addition, the program operated a call center 24 hours a day to answer questions or help coordinate care for these women and created evidence-based guidelines on common issues that arise during high-risk pregnancies. The program is widely considered to be successful and has reduced infant mortality rates in the state. …
An analysis of cost savings during a telehealth project at the University of Arkansas for Medical Sciences between 1998 and 2002 suggested that 94% of participants would have to travel more than 70 miles for medical care. …  Beyond the rural setting, telemedicine may aid in facilitating care for underserved patients in both rural and urban settings. Two thirds of the patients who participated in the Extension for Community Healthcare Outcomes program were part of minority groups, suggesting that telemedicine could be beneficial in helping underserved patients connect with subspecialists they would not have had access to before, either through direct connections or training for primary care physicians in their communities, regardless of geographic location.

Most of this seems reasonable enough, except for that pesky estimate up in the first paragraph that the global savings from telemedicine will amount to $5 billion per year. The US health care system alone has average spending of more than $8 billion per day, every day of the year. Thus, this vision of telemedicine is that it will mostly just rearrange existing care–reach out to bring some additional people into the system, help reduce health care expenditures on certain conditions with better follow-up–but not be a truly disruptive force.
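The scale comparison is worth making explicit. A back-of-envelope calculation, assuming roughly $3 trillion in annual US health spending (my approximation for the mid-2010s, consistent with the $8-billion-per-day figure), shows the projected global savings amount to less than a single day of US spending:

```python
# Rough scale check: $5 billion in global telemedicine savings vs.
# US health spending. The $3 trillion annual figure is an assumption.
annual_us_spending = 3.0e12
per_day = annual_us_spending / 365      # roughly $8.2 billion per day
days_equivalent = 5e9 / per_day         # about 0.6 of one day of US spending
```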

In his editorial essay in the same issue, David Asch points out: "If there is something fundamentally different about telemedicine, it is that many of the costs it increases or decreases have been off the books." He offers a number of examples:

"Some patients who would have visited the physician face to face instead have a telemedicine "visit." They potentially gain a lot. There are no travel costs or parking fees. They might have to wait, but presumably they wait at home or at work where they can do something else (like many of us do when placed on hold). There is no waiting at all in asynchronous settings (the photograph of your rash is sent to your dermatologist, but you do not need a response right away). The costs avoided do not appear on the balance sheets of insurance companies or providers … However, the costs avoided are meaningful even if they are not counted in official ways. There are the patients who would have forgone care entirely because the alternative was not a face-to-face visit but no visit. There are no neurologists who treat movement disorders in your region. The emergency department in your area could not possibly have a stroke specialist available at all times. … We leave patients out when we ask how telemedicine visits compare with face-to-face visits: all of the patients who, without telemedicine, get no visit at all.

Savings for physicians, hospitals, and other providers are potentially enormous. Clinician-patient time in telemedicine is almost certainly shorter, requiring less of the chitchat that is hard to avoid in face-to-face interactions. There is no check-in at the desk. There is no need to devote space to waiting rooms (in some facilities, waiting rooms occupy nearly one half of usable space). No one needs to clean a room; heat it; or, in the long run, build it. That is the real opportunity of telemedicine. …

On the other hand, payers worry that if they reimburse for telemedicine, then every skin blemish that can be photographed risks turning from something that patients used to ignore into a payable insurance claim. Indeed, it is almost certainly true that if you make it easy to access care by telemedicine, telemedicine will promote too much care. However, the same concern could be reframed this way: An advantage of requiring face-to-face visits is that their inconvenience limits their use. Do we really want to ration care by inconvenience, or do we want to find ways to deliver valuable care as conveniently and inexpensively as possible?

I find myself wondering about ways in which telemedicine will be more disruptive. For example, consider the combination of telemedicine with technologies that enable remote monitoring of blood pressure, or blood sugar, or whether medications are being taken on schedule. Or consider telemedicine not just as a method of communicating with members of the American College of Physicians, but also as a way of communicating with nursing professionals, those who know about providing at-home care, various kinds of physical and mental therapists, along with social workers and others. There will be a wave of jobs in being the "telemedicine gatekeeper" who can answer the first wave of questions that most people ask, and then have access to resources for follow-up concerns. My guess is that these kinds of changes will be considerably more disruptive to traditional medical practice than a worldwide cost savings of $5 billion would seem to imply.

Homage: I ran across a mention of these reports at the always-interesting Marginal Revolution website.

Rising Tuition Discount Rates at Private Colleges

Colleges and universities announce a certain price for tuition, but based on financial aid calculations, they often charge a lot less. The difference is the "institutional tuition discount rate." The National Association of College and University Business Officers (NACUBO) has just released a report with the average discount rate for 2015-16, based on a survey of 401 private nonprofit colleges (that is, not including branches of state university systems and not including for-profit colleges and universities), along with how that rate has been evolving over time.
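As a rough sketch, the discount rate can be computed as institutional grant aid divided by gross tuition revenue (my reading of the standard NACUBO definition); the numbers below are hypothetical, not figures from the survey.

```python
# Hypothetical illustration of the institutional tuition discount rate:
# institutional grant dollars as a share of gross tuition revenue.
def discount_rate(sticker_price, avg_institutional_grant, students):
    gross_tuition = sticker_price * students
    grant_aid = avg_institutional_grant * students
    return grant_aid / gross_tuition

# A college listing $40,000 tuition but granting $19,000 per student
# on average is discounting 47.5% of its sticker-price revenue.
rate = discount_rate(40_000, 19_000, 2_000)  # -> 0.475
```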

The two lines in the figure imply that the level of financial help a student receives as a freshman, when making a choice between colleges, is going to be more than the financial help received in later years. Beware! More broadly, a strategy of charging ever-more to parents who can afford it, while offering ever-larger discounts to those who can't, does not seem like a sustainable long-run approach.