Some Economics of Refugees

Refugee policy is defined differently from immigration policy. With immigration policy, a nation decides what number and kinds of immigration (family-based, skill-based) would benefit itself. But refugee policy, at least under the standard definition from the 1951 Refugee Convention, concerns anyone who, “owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country…”

In theory, refugee policy is based on the need of the refugees themselves, not on some judgment about whether letting them in will benefit the receiving society. But the receiving society does retain the power to decide the extent to which people have a “well-founded fear of being persecuted” in the sense that would make them refugees, or whether they are trying to use refugee status to do an end-run around immigration limits. Evaluating that distinction will often involve some degree of subjectivity and politics.

For an overview of what we know about the magnitudes, drivers, and legalities of refugee flows and assimilation, I recommend the two-paper symposium in the Winter 2020 issue of the Journal of Economic Perspectives. (Full disclosure: I work as Managing Editor of JEP.)

As Hatton notes: “The United Nations High Commissioner for Refugees (UNHCR) estimates the total number of refugees worldwide at the end of 2018 at 20.1 million. This is less than one-third of the total of 70.8 million ‘forcibly displaced persons,’ which also includes those displaced within their home country (41.3 million) and Palestinians (5.5 million) who come under a separate mandate (UNHCR 2019, 2). In 2018, refugees were 7.6 percent of the stock of all international migrants (defined as those living outside their country of birth). … As of 2018, two-thirds of refugees are from just five countries: Syria, Afghanistan, South Sudan, Myanmar, and Somalia. Of the total, 85 percent of refugees are located in developing countries, often just across the border from the origin country, and about 30 percent of these languish in organized refugee camps.”
Here’s a figure from Hatton showing the number of asylum claims by refugees over time. The figure makes obvious why concerns over refugee policy have been high in the European Union, and also in the US. About one-third of asylum applicants are granted refugee status.
As one might expect, the drivers of refugee patterns are less about economic differences, and more about political terror in sending countries and proximity and access to another country. Hatton writes: 

Several studies have assessed the push and pull forces behind asylum applications to industrialized countries by analyzing panel data on the number of applicants by origin, by destination, and over time. The most important origin-country variables are political terror and lack of civil liberties; civil war matters less, perhaps because war per se does not necessarily confer refugee status (Hatton 2009, 2017a). There is weaker evidence that declines in origin-country income per capita leads to more asylum applications, which offers modest support to the view that economic migration is part of the story. Proximity and access are important in determining the volume of asylum applications. Countries that are small but nearby can generate large flows—as with a quarter of a million Cubans moving to the United States in the 1970s and 400,000 Serbians and Montenegrins moving to the European Union in 1995−2004—provided that the door is left ajar. But the growth of transit routes and migrant networks have fueled the upward trend of applications from more distant origins. For example, travel in caravans through Mexico combined with violence and drought at home, a growing diaspora, and mixed messages about future US policy all combined to boost migration from Central America (Capps et al. 2019).

Brell, Dustmann and Preston present evidence on assimilation of refugees, which is often rather different from the pattern of assimilation by immigrants. For example, one striking pattern is that refugees are often slower to find employment than immigrants. In Germany, about 10% of refugees have found employment two years after arriving, while about 60% of immigrants have found employment within two years. This shouldn’t be a surprise: remember, immigrants come because they are seeking something, while refugees are escaping something. The gap between employment of refugees and immigrants does tend to close over time. The outlier here is the United States, where employment rates for refugees and immigrants are much the same, right from the start. As the authors write: “It is not entirely clear why the US experience appears so different … possible explanations could relate to the nature of the US labor market or to the nature of the settlement process in the United States, but require further investigation.”

There is also usually a wage gap between refugees and other immigrants, which also exists in the US.
“For instance, while average wages of refugees who had been in the United States for two years amounted to 40 percent of native wages and 49 percent of other immigrants’ average wages, after 10 years, average wages had improved to 55 percent of natives and 70 percent of other immigrants in the same position.”

What helps refugees to assimilate faster? Brell, Dustmann and Preston offer some non-obvious insights. One is that many refugees have experienced substantial trauma, and their fear of being persecuted is based on recent experience. Thus, there is some evidence that paying attention to their mental and physical health needs soon after arriving can help assimilation start on a better path. Speeding up the asylum process itself, so that people do not languish for several years without being able to start their new adjustment, can help. Another issue is that politicians sometimes divide up refugees among many locations. However, social networks within a group can offer an important method of learning about jobs and opportunities and, more generally, how to function in a new society, so settling refugees in decent-sized groups from the same origin can help assimilation.

One interesting US pattern involves acquisition of language skills: “[R]efugees arrive with lower levels of language proficiency than other migrants—at the time of migration, only about 44 percent of refugees speak English ‘well’ or better, compared with 64 percent of other immigrants. However, while other immigrants do not tend to see particularly strong gains in English speaking skills over time, refugees rapidly improve and even overtake other migrants’ speaking abilities around ten years after arriving in the United States.” One hypothesis is that immigrants may be living within an extended culture from their country of origin, and perhaps travelling back and forth now and then. But refugees are often more isolated, and they aren’t going back, so their incentives to learn English are different.

(In the discreet shade of these parentheses, I’ll just note at the end of this post that it’s annoying to me when public attention focuses so heavily on refugees who are seeking to enter the US and Europe, while largely ignoring refugees elsewhere or the group of displaced persons more broadly. At the peak a few years ago, the number of those making asylum claims in high-income countries was 1.5 million, a small share of the 70.8 million “forcibly displaced persons.” Concern over the living conditions of the forcibly displaced population should not kick in only after they reach the border of a high-income country.)

The US Rental Housing Market

The US rental housing market is in the middle of some major shifts, as outlined by the Joint Center for Housing Studies of Harvard University in its report “America’s Rental Housing 2020” (January 2020). Here are some of the changes.

The “rentership rate”–the share of households renting–rose sharply from about 2004 to about 2016, before leveling out in the last few years.

From 2000 to 2010, most of the growth in the housing rental market was coming from those with relatively lower incomes. But in the last decade, most of the growth in the rental housing market is coming from those with relatively higher incomes. “But at 22 percent in 2019, rentership rates among households earning $75,000 or more are at their highest levels on record. Even accounting for overall income growth, rentership rates for households in the top decile jumped from 8.0 percent in 2005 to 15.1 percent in 2018 as their numbers more than doubled.”

Rent is a big burden for many. The report looks at renters who are “cost burdened,” referring to those who pay more than 30% of their income in rent. “Thanks to strong growth in the number of high-income renters, the share of renters with cost burdens fell more noticeably from a peak of 50.7 percent in 2011 to 47.4 percent in 2017, followed by a modest 0.1 percentage point increase in 2018. … Meanwhile, 10.9 million renters—or one in four—spent more than half their incomes on housing in 2018.” Another big shift is the rise in “cost-burdened renters” in middle-income groups (say, $30,000-$75,000 in annual income), especially in “larger, high-cost metropolitan areas.”
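The “cost burdened” definition is simple arithmetic. As a minimal sketch (the function names are my own, not from the report):

```python
def is_cost_burdened(monthly_rent, annual_income, threshold=0.30):
    """True if annual rent exceeds `threshold` (default 30%) of annual income."""
    return 12 * monthly_rent > threshold * annual_income

def is_severely_burdened(monthly_rent, annual_income):
    """True if rent takes more than half of income, like the 10.9 million
    renters the report counts as spending over 50% on housing."""
    return is_cost_burdened(monthly_rent, annual_income, threshold=0.50)

print(is_cost_burdened(1200, 45000))   # 14,400 > 13,500, so True
print(is_severely_burdened(1200, 45000))
```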

Vacancy rates for rentals are down, and are especially low for lower-cost, lower-quality rentals.

Meanwhile, rents are consistently rising faster than inflation.

The value of apartment properties has risen quickly, too.

Some background factors are also shifting. In the market for rental properties, the stock of rentals has been rising in two areas over the last 15-20 years: single-family homes, and multi-family buildings with 20 or more units. These changes represent a shift in the rental housing market away from individual landlords and toward corporate ownership of rentals. In the area of single-family homes, for example, a number of institutional investors bought houses as rental properties in the aftermath of the drop in housing prices around 2010. The report notes:

Ownership of rental housing shifted noticeably between 2001 and 2015, with institutional owners such as LLCs, LLPs, and REITs accounting for a growing share of the stock. Meanwhile, individual ownership fell across rental properties of all sizes, but especially among buildings with 5–24 units. Indeed, the share of mid-sized apartment properties owned by individuals dropped from nearly two-thirds in 2001 to about two-fifths in 2015. Given that units in these structures are generally older and have relatively low rents, institutional investors may consider them prime candidates for purchase and upgrading. These changes in ownership have thus helped to keep rents on the climb.

Another shift is that many renters seem happier being renters, and less likely to view a rental as a short-term stop on the path to homeownership. Renters are staying in place longer, too. The report notes:

Changes in attitudes toward homeownership may lead some households to continue to rent later in life. The latest Freddie Mac Survey of Homeowners and Renters reports that the share of genX renters (aged 39–54 in 2019) with no interest in ever owning homes rose from 10 percent in March 2017 to 17 percent in April 2019. … Fully 75 percent of renters overall, and 72 percent of genX renters, stated that renting best fits their current lifestyle. …

[M]any renters are staying in the same rental units for longer periods. Between 2008 and 2018, the share of renters that had lived in their units for at least two years increased from 36 percent to 41 percent among those under age 35, and from 62 percent to 68 percent among those aged 35–64. Similarly, the National Apartment Association reported a turnover rate of just 46.8 percent in 2018— the lowest rate of move-outs since the survey began in 2000.

The US rate of homeownership has often been in the range of 63-65%, going above that range during the housing boom around 2006, back down after that, and then rebounding a bit in the last few years. Looking at long-run trends of aging, marriage/parenthood, and income, the US Department of Housing and Urban Development organized a pro-and-con symposium a few years ago on the question of whether the US homeownership rate will have fallen to less than 50% by 2050. Homeownership rates for young adults and for blacks are especially low. The US rate of homeownership was about average by international standards 20-25 years ago, but is now below the average.

With regard to the broader social issue of rental prices being so high for so many people, the economic answer is straightforward. For those with very low incomes, help them afford the rent. But for the market as a whole, the way to get lower prices is to raise supply. For example, it’s an interesting question why the individual landlord has been in such decline, and the extent to which this drop has been due to additional administrative, regulatory, and zoning costs imposed at the state and local level. It seems to me possible that we are in the middle of a social shift in which many households at a variety of income levels put less emphasis on homeownership–which in turn means greater public attention to conditions of supply and demand in housing rental markets.

The Herfindahl-Hirschman Index: Story, Primer, Alternatives

It seems clear that the concept of what is now called the Herfindahl-Hirschman Index originated in 1945 with Albert O. Hirschman, who may be best remembered today for his 1970 book Exit, Voice, and Loyalty, discussing the options available to a dissatisfied group member. However, the concept was then attributed to Orris Herfindahl, who wrote five years later in 1950, and further confusion arose when it was sometimes referred to as a Gini index. Here’s a primer and the story.

The HHI, as it is often abbreviated, is a way of measuring industry concentration that is taught in every intro econ textbook. Assume that you have an industry where one big company has 50% of sales, three companies have 10% each, and 20 companies have 1% each. How can a researcher sum up the degree of concentration in this industry in a single number? 
One common approach is to use a “concentration ratio.” Pick a number of firms, like the top 4 or the top 8 in an industry, and add up their market shares. Thus, the 4-firm concentration ratio in this example would be 80% and the 8-firm concentration ratio would be 84%.
But this concentration ratio approach has an obvious problem. An industry where the four top firms each had 20% of the market would have the same 4-firm concentration ratio of 80% as the example above. An industry where the top eight firms each had 10.5% of the market would have the same 8-firm concentration ratio of 84%. It would seem odd to say that these counterexamples, with a number of firms of roughly equivalent size, have the same concentration as the original example, where the largest firm has a full half of the market. 
Thus, the HHI uses a different calculation. First you square the market shares of the existing firms; then you add them up. Thus, in the original example the HHI would be (50)² + 3(10)² + 20(1)² = 2,820. The maximum value for an HHI is 10,000, for a single firm with 100% of the market. An industry with a large number of very small firms that each have less than 1% of the market could have an HHI lower than 100. (In some cases, the HHI is described on a scale from 0 to 1, instead of 0 to 10,000, which are the numbers you get if the market shares are expressed as decimals before being squared–thus, a 50% market share squared would be .25, not 2,500.)
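Both calculations are easy to check in a few lines. Here is a minimal sketch (the helper-function names are my own, not from any standard library):

```python
# Market shares (in percent) for the worked example: one firm with 50%,
# three firms with 10% each, and twenty firms with 1% each.
shares = [50] + [10] * 3 + [1] * 20

def concentration_ratio(shares, k):
    """Sum of the k largest market shares (percent)."""
    return sum(sorted(shares, reverse=True)[:k])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared shares (0-to-10,000 scale)."""
    return sum(s ** 2 for s in shares)

print(concentration_ratio(shares, 4))  # 80
print(concentration_ratio(shares, 8))  # 84
print(hhi(shares))                     # 2820
# On the 0-to-1 scale, divide the shares by 100 before squaring:
print(hhi([s / 100 for s in shares]))  # approximately 0.282
```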

The idea of measuring industry concentration in this way originated with Albert O. Hirschman in his 1945 book National Power and the Structure of Foreign Trade. As he points out, there were already ways of measuring concentration, with the Lorenz curve and the Gini coefficient for measuring inequality of income especially well-known. But as Hirschman points out (p. 158):

In various instances, however, the number of elements in a series the concentration of which is being measured is an important consideration. This is so whenever concentration means \”control by the few,\” i.e., particularly in connection with market phenomena. Control of an industry by few producers can be brought about by an inequality of distribution of the individual output shares when there are many producers or by the fact that only few producers exist. One of the well-known conditions of perfect competition is that no individual seller should command an important share of the total market supply; this condition implies the presence of both relative equality of distribution and of large numbers. 

To put this point a little differently, imagine a market made up entirely of equal-sized producers–maybe a small number like two or three or four, or a large number like 100 or 1,000. A measure of equality like the measures used for income would point out that all firms are of equal size. In contrast, a measure of concentration would emphasize that the number of firms matters, and that four firms means more competition than two, and 100 firms means more competition than four. By squaring the market shares, Hirschman’s measure gave greater weight to larger firms, thus emphasizing the idea that when it comes to concentration of an industry, large firms matter more.
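One way to see the contrast: with n equal-sized firms, the HHI works out to exactly 10,000/n on the percentage scale, so the index falls as the number of firms grows even though “equality” among firms is perfect throughout. A quick sketch:

```python
def hhi(shares):
    # Herfindahl-Hirschman Index on the 0-to-10,000 scale:
    # sum of squared percentage market shares
    return sum(s ** 2 for s in shares)

for n in (2, 4, 100):
    equal_shares = [100 / n] * n    # n identical firms, shares summing to 100%
    print(n, hhi(equal_shares))     # always equals 10,000 / n
```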

In these ways, Hirschman’s proposed measure of industry concentration was fundamentally different from the common measures of income equality. In fact, it was such a good idea that five years later, it was reinvented by Orris C. Herfindahl in his 1950 PhD dissertation, Concentration in the U.S. Steel Industry. Herfindahl mentions Hirschman’s earlier work in a footnote.

There were surface differences between the Hirschman and Herfindahl measures. Hirschman’s study looked at concentrations of exports and imports of countries, both according to sources and destinations of international trade, while Herfindahl applied the measure to the US steel industry. In addition, Herfindahl used (essentially) the measure described briefly above, while Hirschman took the square root of that measure.

But that’s not how it evolved. Gideon Rosenbluth wrote a chapter called “Measures of Concentration,” which appeared in a 1955 NBER conference volume called Business Concentration and Price Policy (pp. 57-95). Rosenbluth wrote in 1955:

But summary measures can be devised to measure concentration, just as they have been developed for other characteristics of size distributions. An ingenious measure of this type has been employed by O. C. Herfindahl in an investigation of concentration in the steel industry. It consists of the sum of squares of firm sizes, all measured as percentages of total industry size. This index is equal to the reciprocal of the number of firms if all firms are of the same size, and reaches its maximum value of unity when there is only one firm in the industry.

But a few years later, in a 1961 essay, “Remarks,” in Die Konzentration in der Wirtschaft, Schriften des Vereins für Sozialpolitik (New Series, Vol. 22, pp. 391-92), Rosenbluth wrote:

The first point I want to make causes me some embarrassment. There is a good deal of discussion in the background material about “Herfindahl’s Index.” Actually, it is a mistake to ascribe this index to Herfindahl, and I believe my paper on measures of concentration, published in 1955, is the source of this mistake. I discovered later that the man who first proposed this index was Albert O. Hirschman in his book “National Power and the Structure of Foreign Trade,” published by the University of California Press in 1945. Hirschman actually proposed the square root of what I call Herfindahl’s Index, since this gives a more even distribution of values.

Hirschman made an attempt to lay out this chronology in a short note appearing in the American Economic Review in 1964 (54: 5, September, p. 761). He points out that in a number of recent papers, the index was being referred to as a “Gini index,” although he had made some effort back in 1945, along with Herfindahl and Rosenbluth in later work, to be clear that it was not an index of equality. Hirschman writes: “Upon devising the index I went carefully through the relevant literature because I strongly suspected that so simple a measure might already have occurred to someone. But no prior inventor was to be found.” He also points out that Rosenbluth had originally attributed the index to Herfindahl. Hirschman concludes on a wry note: “The net result is that my index is named either after Gini who did not invent it at all or after Herfindahl who reinvented it. Well, it’s a cruel world.”

In the world of economics, this problem of attribution is sometimes called Stigler\’s law: \”No scientific discovery is named after its original discoverer.\” Of course, Steve Stigler was quick to point out in his 1980 article that he didn\’t discover his own law, either! In this case, it does not seem to me a grievous miscarriage of justice to have the names of both Hirschman and Herfindahl on the index, although Hirschman should probably come first.

Those who have read this far are probably the kind of people who would be interested in knowing that justification and analysis of concentration indexes is an ongoing task. For getting up to speed, a useful starting point is the NBER working paper by Paolo M. Adajar, Ernst R. Berndt, and Rena M. Conti, “The Surprising Hybrid Pedigree of Measures of Diversity and Economic Concentration” (November 2019, #26512).

The characterization of industry structure and industry concentration has long been a task facing empirical economic researchers, for it is widely believed that market structure, market behavior and various market performance outcomes are important interrelated phenomena. Although a number of alternative measures of market concentration are commonly used, such as the k‐firm concentration measure and the Herfindahl‐Hirschman index (HHI), their foundations in economic theory and statistics are limited and have not been developed extensively, leaving their unqualified use as measures of market power potentially vulnerable to the criticism of “measurement without theory”.

For example, perhaps it makes sense at some intuitive level to give greater weight to the market share of large firms when measuring concentration. But why square the market shares? Why not adjust them in some other way? Indeed, there is a set of alternative concentration measures using different weights going back to the work of Gideon Rosenbluth, and known as Rosenbluth/Hall‐Tideman (RHT) metrics. Adajar, Berndt, and Conti offer an analytical basis for the idea that squaring the market shares makes sense, based on a conceptually similar diversity measure from ecology. They write:

In this paper, we have traced the pedigree of the much‐used Herfindahl‐Hirschman (HHI) economic concentration index to the Simpson Index of diversity originally developed in ecology, where an identical calculation to the HHI is interpreted as the probability of two organisms randomly selected from a sample habitat belonging to the same species (analogous in economics to the probability a pair of randomly and independently selected products are being marketed by the same manufacturer). This probabilistic foundation of the HHI to some extent shields it from the allegation that the sum of squared shares calculation is arbitrary and unscientific, even as its links to market power and antitrust competition analysis remain ambiguous. 
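The Simpson-index reading can be checked numerically: the sum of squared shares (on the 0-to-1 scale) should match the frequency with which two independently drawn products come from the same firm. A minimal simulation sketch (not from the paper, using the worked example from earlier in this post):

```python
import random

# Market shares on the 0-to-1 scale: one firm with 50%,
# three firms with 10% each, twenty firms with 1% each.
shares = [0.5] + [0.1] * 3 + [0.01] * 20

hhi = sum(s ** 2 for s in shares)  # direct sum of squared shares

# Simpson-index interpretation: pick two products independently, each with
# probability equal to its firm's market share, and count how often both
# come from the same firm.
random.seed(0)
trials = 100_000
same = 0
for _ in range(trials):
    a, b = random.choices(range(len(shares)), weights=shares, k=2)
    same += (a == b)

print(hhi, same / trials)  # the simulated frequency should be close to the HHI
```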

For those wanting to dig deeper into alternative indexes of concentration, Adajar, Berndt, and Conti write:

We have also considered alternative proposed measures of concentrations, some of them mathematical generalizations of the HHI, others such as entropy originating from information theory in engineering and physics, another set that is developed axiomatically, and still others incorporating related concepts such as inequality and absolute population size. We have considered computational and interpretability aspects of the various concentration measures, and noted the extent to which they incorporate considerations not only of relative inequality such as the Gini coefficient and Lorenz curve, but also of absolute population size. 

Other things equal, markets with a large number of competitors suggest barriers to entry are limited, and therefore such markets could plausibly be expected to be competitive, other things equal. Therefore, to economists concentration metrics incorporating both variability/relative inequality and absolute population size considerations are preferable, for if one believes that economic performance outcomes depend not only on relative sizes but also on the  absolute number of competitors in a market, then one prefers a concentration measure that incorporates both features. The existing economic literature comparing the various concentration metrics on a priori statistical and axiomatic criteria appears to view the HHI and the closely related Rosenbluth/Hall‐Tideman (RHT) metrics most favorably. Choice between these two measures on a priori grounds is indeterminate, since the choice involves selection of weights and is therefore similar to choice among alternative index number formula in economic index number theory.

Writing the Intro to Your Economics Research Paper

If you do academic research, whether in economics or other fields, you need to give an honest answer to a basic question: “Do you want readers for your research?” If the answer is “no,” then read no further. If the answer is “yes,” then you should probably be thinking and working considerably more on the introduction to your paper. Barney Kilgore, a famous editor of the Wall Street Journal back in the 1950s and 1960s, posted a motto in his office: “The easiest thing in the world for a reader to do is to stop reading.” If the intro doesn’t make readers want to proceed, they will often take the easy course and turn to something else.
Several writers of economics blogs have emphasized this theme recently. 
At the Center for Global Development blog, David Evans wrote “How to Write the Introduction of Your Development Economics Paper” (February 10, 2020). Evans writes:

You win or lose your readers with the introduction of your economics paper. Your title and your abstract should convince people to read your introduction. Research shows that economics papers with more readable introductions get cited more. The introduction is your opportunity to lay out your research question, your empirical strategy, your findings, and why it matters. Succinctly. …

Invest in your introduction. One reason that so many introductions in top journals have a similar pattern is that it’s clear: you tell the reader why the issue you studied is important, you tell them what you did, you tell them what you learned, and you tell them how it builds on what we already knew. You might tell them how it relates to policy or what the limitations of your work are. Interested readers can dive into the details of the paper, but good introductions give casual readers a clear sense of what they’ll get out of your paper. Your introduction is your kingdom. Rule it well.

Evans looks at 15 recent economic development papers published in prominent journals and discusses the ways in which their introductions have a common pattern:
  1. Motivate with a puzzle or a problem (1–2 paragraphs)
  2. Clearly state your research question (1 paragraph)
  3. Empirical approach (1 paragraph)
  4. Detailed results (3–4 paragraphs)
  5. Value-added relative to related literature (1–3 paragraphs)
  6. Optional paragraphs: robustness checks, policy relevance, limitations
  7. Roadmap (1 paragraph)

Evans also points to a couple of other recent discussions of introductions in economic research. For example, Keith Head presents his own view of “The Introduction Formula,” which starts like this:

1. Hook: Attract the reader’s interest by telling them that this paper relates to something interesting. What makes a topic interesting? Some combination of the following attributes makes Y something worth looking at.

  • Y matters: When Y rises or falls, people are hurt or helped.
  • Y is puzzling: it defies easy explanation.
  • Y is controversial: some argue one thing while others say another.
  • Y is big (like the service sector) or common (like traffic jams).

Things to avoid:

  • The bait and switch: promising an interesting topic but delivering something else, in particular, something boring.
  • “all my friends are doing it” : presenting no other motivation for a topic than that other people have written papers on it.

2) Question: Tell the reader what this paper actually does. Think of this as the point in a trial where having detailed the crime, you now identify a perpetrator and promise to provide a persuasive case. The reader should have an idea of a clean research question that will have a more or less satisfactory answer by the end of the paper. Examples follow below. The question may take two paragraphs. At the end of the first (2nd paragraph of the paper) or possibly beginning of the second (3rd paragraph overall) you should have the “This paper addresses the question” sentence.

Claudia Sahm at the Macromom blog spent last fall reading job market papers, and gives vent to her reactions in “We need to talk MORE …” (September 19, 2019).

This post is for job market candidates. You need to spend more time editing your abstract and introduction. It will be worth more than your fourth robustness check. Promise. … Sadly, it is clear that economics departments and dissertation committees are NOT teaching their doctoral students how to communicate their research. … EVERY job market paper I read lacked a well-structured, well-written introduction and abstract. Many of these papers are from top schools and from native English speakers.

Sahm offers an intro structure as well, closely related to the others. She begins this way:

Structure of Introduction (in order):

THIS IS A VERY IMPORTANT PART OF YOUR PAPER

1) Motivation (1 paragraph)

  • Must be about the economics.
  • NEVER start with literature or new technique (unless econometrics).
  • Be specific and motivate YOUR research question.

2) Research question (1 paragraph)

  • Lead with YOUR question.
  • THEN set YOUR question within most relevant literature.
  • My favorite is an actual question: “My paper answers the question …”
  • Popular and acceptable: “My paper [studies/quantifies/evaluates/etc] …”

3) Main contribution (2-3 paragraphs, one for each contribution)

  • YOUR main contribution:
    • MUST be about new economic knowledge.
    • Lead with YOUR work, then how it extends the literature.
  • New model, new data, new method, etc.:
    • Can be second or third contribution.
    • Tools are important, not most important.
  • Each paragraph begins with a sentence stating one of YOUR contributions.
  • THEN follow with three or four sentences setting YOUR contribution in literature.
  • Most important should be first (preferred) or last (sometimes most logical).
  • YOUR contributions are very important. Make them clear, compelling, and correct.

These posts caught my eye in part because they are a theme I have also tried to emphasize when talking about writing. A substantial part of my value-added as Managing Editor of the Journal of Economic Perspectives is sharpening up the introductions of papers. Most of the time, all the ingredients for a strong introduction are already there. But it’s not unusual for an excellent lead-in or “hook” to be buried several pages into the paper, or even at the start of the conclusion, rather than right up front. It’s not unusual to have intros that are either so long that only the author’s parents will persevere to the end, or so short that the reader might just as well flip to a random page in the middle of the essay and start there.

Here\’s a quote from an essay of my own, \”From the Desk of the Managing Editor,\” written on the occasion of the 100th issue of the Journal of Economic Perspectives back in Spring 2012. I wrote:   

Invest more time in the stepping-stones of exposition: introductions, opening paragraphs of sections, and conclusions. Introductions of papers are worth four times as much effort as they usually receive. The opening paragraph of each main section of a paper is worth three times as much effort as it usually receives. Conclusions are worth twice as much effort as they usually receive. This recommendation emphatically does not call for long introductions with a blow-by-blow overview of each subsection of the paper to come. It doesn’t mean repeating the same topic sentences over and over again, in introduction and section headings and conclusion. It means making a genuine effort to attract the attention of the reader and let the reader know what is at stake up front, to signpost the argument as it develops, and to tell the reader the state of the argument at the end.

Telephone Switchboard Operators: Rise and Fall

In 1950, there were 342,000 telephone switchboard operators working for the Bell Telephone System and independent phone companies, as well as another 1 million or so telephone switchboard operators who worked at private locations like office buildings, factories, hotels, and apartment buildings. Almost all of these switchboard operators were female. To put it another way, about one out of every 13 working women in 1950 was a telephone operator. But by 1984, national employment as an operator in the telecommunications industry was down to 40,000, and now it's less than 2,000 (according to the Bureau of Labor Statistics).
David A. Price sketches the history of this rise and fall in "Goodbye, Operator," appearing in Econ Focus (Federal Reserve Bank of Richmond, Fourth Quarter 2019, pp. 18-20). The story provokes some thoughts about the interaction of workers with new and evolving technologies.
For more than a half-century from the late 19th century up to 1950, technology was creating jobs as telephone operators. From the phone company point of view, customers needed personal assistance and support if they were to incorporate this new technology into their lives. The workers with what we would now call the "soft skills" to provide this interface between technology and customers were reasonably well-rewarded. Price writes:

In the early decades of the industry, telephone companies regarded their business less as a utility and more as a personal service. The telephone operator was central to this idea, acting as an early version of an intelligent assistant with voice recognition capabilities. She got to know her 50 to 100 assigned customers by name and knew their needs. If a party didn't answer, she would try to find him or her around town. If that didn't succeed, she took a message and called the party again later to pass the message along. She made wake-up calls and gave the time, weather, and sports scores. During crimes in progress or medical emergencies, a subscriber needed only to pick up the handset and the operator would summon the police or doctors. …

While operators were not highly paid, the need to attract and retain capable women from the middle classes led telephone companies to be benevolent employers by the standards of the day — and in some respects, of any day. Around the turn of the century, the companies catered to their operators with libraries, athletic clubs, free lunches, and disability plans. Operators took their breaks in tastefully appointed, parlor-like break rooms, some with armchairs, couches, magazines, and newspapers. At some exchanges, the companies provided the operators with a community garden in which they could grow flowers or vegetables. In large cities, company-owned dormitories were offered to night-shift operators.

But even as the number of telephone operator jobs was growing rapidly, the job of being a telephone operator evolved dramatically. By 1950, the hyper-personal touch seems to have greatly diminished, and the key telephone operator skill was being able to handle "the board": plugging and unplugging several hundred connections per hour.

Looking back, the slow diffusion of automatic telephone switching technology seems a little puzzling. It's a standard story that the switchboard operators were replaced by automation. But why weren't they replaced by automation much earlier? Part of the answer seems to be that digital technology differs in some fundamental ways from the earlier methods of automation: the automated telephone-switching systems of the first half of the 20th century did not actually display economies of scale. Price writes:

With the electromechanical systems of the day, each additional customer was more, not less, expensive. Economies of scale weren't in the picture. To oversimplify somewhat, a network with eight customers needed eight times eight, or 64, interconnections; a network with nine needed 81. "You were actually getting increasing unit costs as the scope of the network increased," says Mueller. "You didn't get entirely out of the telephone scaling problem until digital switching in the 1960s."
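
The arithmetic in the quote can be sketched in a few lines. This is a minimal illustration of the quote's deliberate oversimplification (a fully meshed network actually needs n(n-1)/2 links, but the diseconomies-of-scale point is the same):

```python
def interconnections(customers: int) -> int:
    # The quote's oversimplified rule: n customers need n * n interconnections.
    return customers * customers

for n in (8, 9, 100):
    total = interconnections(n)
    # Cost per customer rises with network size: diseconomies of scale.
    print(f"{n:>3} customers -> {total:>5} interconnections ({total // n} per customer)")
```

Each added customer raises the per-customer cost, which is the opposite of the economies of scale that digital switching later delivered.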

This pattern of technology led to a situation where small-scale independent phone companies were more likely to use automated switching in the early part of the 20th century, while the giant Bell company continued to rely heavily on combinations of automatic switching with oversight from human switchboard operators–especially for long-distance calls.

More broadly, diffusion of technology is important in many contexts. Some well-known historical examples of important technologies that diffused slowly, over decades, include tractors and electricity. In the modern economy, a prominent pattern across many industries is that a few leading "superstar" firms are jumping farther ahead in terms of productivity, and their example of how to achieve such productivity gains is apparently not diffusing as quickly to other firms. There's an old economic lesson here, which is that for purposes of economic growth, just inventing a new technology is not enough: instead, many participants in the economy need to find ways to change their behavior in both simple and more fundamental ways to take full advantage of that technology.
Back in 1964, even knowledgeable industry observers thought that the decline in telephone operators from about 1950 to 1960 was a one-time and temporary shift. Elizabeth Faulkner Baker wrote in her 1964 book, Technology and Women's Work:

In sum, it is possible that the decline in the relative importance of telephone operators may be nearing an end. It seems that in the foreseeable future no machines will be devised that can completely handle person-to-person calls, credit-card calls, emergency calls, information calls, transient calls, messenger calls, marine and mobile calls, civilian defense calls, conference calls, and coin-box long-distance calls. Indeed, although an executive vice-president of the American Telephone and Telegraph Company has said that the number of dial telephones will reach almost 100 percent in the next few years and that there will be an increasing amount of customer dialing of long-distance calls: "Yet we will still need about the same number of operators we need now, perhaps more."

Again, the underlying notion was that the job of being a telephone operator would evolve, but that there would still be a need for people who could make the use of telecommunications technology easier for customers. When it comes to the specific job of telephone operator, this prediction was clearly off-base. (Although as a college student in the late 1970s and early 1980s, I remember the days when if you really needed to call home, you could just grab a public phone, dial zero for "operator," and be answered by a person, to whom you would recite your home phone number and request a collect call.) But when thinking more broadly about the interaction between workers and technology, the central question remains as to what areas now and in the future will continue to benefit from human support at the interface between new technologies and ultimate users.

Income-Contingent Student Loan Repayment

The US approach to student loans changed fundamentally a decade ago, in 2010. The Congressional Budget Office describes the shift in "Income-Driven Repayment Plans for Student Loans: Budgetary Costs and Policy Options" (February 2020).

Between 1965 and 2010, most federal student loans were issued by private lending institutions and guaranteed by the government, and most student loan borrowers made fixed monthly payments over a set period—typically 10 years. Since 2010, however, all federal student loans have been issued directly by the federal government, and borrowers have begun repaying a large and growing fraction of those loans through income-driven repayment plans.

Under the most popular income-driven plans, borrowers’ payments are 10 or 15 percent of their discretionary income, which is typically defined as income above 150 percent of the federal poverty guideline. Furthermore, most plans cap monthly payments at the amount a borrower would have paid under a 10-year fixed-payment plan. … Borrowers who have not paid off their loans by the end of the repayment period—typically 20 or 25 years—have the outstanding balance forgiven. (Qualifying borrowers may receive forgiveness in as little as 10 years under the Public Service Loan Forgiveness, or PSLF, program.)
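
The repayment rules in the passage above can be sketched as a small calculation. The 10 percent rate, the 150-percent-of-poverty-guideline definition of discretionary income, and the 10-year fixed-payment cap come from the CBO description; the specific dollar amounts (income, poverty guideline, balance, interest rate) are hypothetical inputs chosen only for illustration:

```python
def fixed_10yr_payment(balance, annual_rate, years=10):
    # Standard amortized monthly payment on a fixed-rate, fixed-term loan.
    r = annual_rate / 12
    n = years * 12
    return balance * r / (1 - (1 + r) ** -n)

def income_driven_payment(income, poverty_guideline, balance, annual_rate, share=0.10):
    # Income-driven monthly payment as the CBO describes it: `share` of
    # discretionary income (income above 150% of the poverty guideline),
    # capped at the 10-year fixed-payment amount.
    discretionary = max(0.0, income - 1.5 * poverty_guideline)
    uncapped = share * discretionary / 12
    cap = fixed_10yr_payment(balance, annual_rate)
    return min(uncapped, cap)

# Hypothetical borrower: $40,000 income, $12,490 poverty guideline,
# $30,000 balance at 5% interest.
print(income_driven_payment(40_000, 12_490, 30_000, 0.05))  # roughly $177/month
```

At a high enough income, the uncapped amount exceeds the 10-year fixed payment and the cap binds, so the borrower simply pays the fixed-plan amount.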

There\’s a strong positive case for income-contingent loans. Because they spread out the payments over time and link them to income, the annual burden of those payments is less likely to overwhelm borrowers. Thus, students from families with limited financial resources may be more willing to use such loans to attend college, and income-contingent loans are less likely to  lead to default. 

But there are tradeoffs, too. Many students presumably have some information, based on their abilities and career plans, about whether their future career is likely to be higher- or lower-paid–or whether they may be planning to leave the labor force for certain periods of time (perhaps to become a parent). With an income-contingent loan, a lower-paid career means lower annual payments and the possibility that a lot of the debt will be forgiven by 25 years after graduation, when many former students will presumably be in their late 40s and still have a number of prime earning years remaining in their careers. Similarly, students who are borrowing especially large sums of money are more likely to benefit from having any remaining debts forgiven after 25 years. Thus, CBO writes:

Among borrowers who had taken out direct loans for undergraduate study, the share enrolled in income-driven plans grew from 11 to 24 percent. Among those who had taken out direct loans for graduate study (and for undergraduate study as well, in many cases), the share grew from 6 to 39 percent. The volume of loans in income-driven plans has grown even faster than the number of borrowers because borrowers with larger loan balances are more likely to select such plans. In particular, graduate borrowers have much larger loan balances, on average, and are more likely to enroll in income-driven plans than undergraduate borrowers. CBO estimates that about 45 percent of the volume of direct loans was being repaid through income-driven plans in 2017, up from about 12 percent in 2010.

There are a variety of ways to calculate the extent to which student loans are subsidized by the government. One approach used by the CBO takes into account what a private lender would have charged for making a loan with a similar risk of default. By this estimate, a government student loan made with an average fixed-payment plan receives a government subsidy of 9.1%, while student loans made with an income-contingent plan receive a government subsidy of 43.1%.
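
One way to make the fair-value idea concrete is a small sketch: price the fixed stream of payments on a government loan at the higher interest rate a private lender would demand, and call the shortfall (as a share of principal) the subsidy. This is only a stylized version of the approach the CBO describes, and every number below (principal, rates) is hypothetical:

```python
def monthly_payment(principal, annual_rate, years=10):
    # Standard amortized payment on a fixed-rate, fixed-term loan.
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

def fair_value_subsidy(principal, govt_rate, market_rate, years=10):
    # Subsidy = shortfall between the principal and the present value, at the
    # private lender's rate, of the payments set at the government's rate.
    pay = monthly_payment(principal, govt_rate, years)
    r, n = market_rate / 12, years * 12
    pv = pay * (1 - (1 + r) ** -n) / r  # PV of payments at the market rate
    return 1 - pv / principal

# Hypothetical: $10,000 loan at a 5% government rate, priced at a 7% market rate.
print(f"{fair_value_subsidy(10_000, 0.05, 0.07):.1%}")
```

When the government rate equals the market rate, the subsidy is zero; the wider the gap, the larger the implied subsidy.
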
I\’m fine with some level of public subsidy to higher education, both by states and by the federal government,  but it\’s always useful to consider whether how the subsidy is administered and whether the form of the subsidy is encouraging certain behaviors more than others. The CBO discusses various possible student loan reforms: for example, removing the annual caps on repayment for income-contingent plans (so the borrower would just repay 10-15% of discretionary income without cap on the level of payment), or redefining what \”discretionary income\” means, requiring all student loans to be income-contingent, or require all student loans to be fixed payment. 
The CBO offers a discussion of some related but slightly different models of income-contingent repayment in Australia and the United Kingdom (footnotes omitted): 

Australia and the United Kingdom have income-driven repayment plans for student loans that are similar to those in the United States. However, unlike borrowers in the United States, borrowers in those countries do not have a choice of repayment plans: All are required to enroll in income-driven plans, which are administered in coordination with the national tax authorities. That design keeps borrowers with low earnings or large balances from enrolling in income-driven plans at greater rates than other borrowers who would receive less benefit.

Australia was among the first countries to adopt an income-driven student loan repayment system, in 1989. Borrowers pay a percentage of their annual income above a threshold. For example, borrowers who began repaying their loans in the 2018–2019 academic year paid between 2 and 8 percent of income over 51,957 Australian dollars (roughly $38,864 in 2018 U.S. dollars). The repayment rate is based on a progressive formula, such that borrowers pay a larger portion of their income as their earnings increase. Payments are collected by the Australian Tax Office, and borrowers can elect to have their student loan payments withheld from their wages like income taxes. Unlike in the United States, unpaid balances are not forgiven.

The United Kingdom adopted an income-dependent repayment policy for all student loan borrowers in 1998. As in the Australian and U.S. systems, borrowers pay a percentage of their income above a threshold. Among those who began repaying their loans in the 2018–2019 academic year, undergraduate borrowers owed 9 percent of their income over £25,000 (roughly $33,250 in 2018 U.S. dollars), and graduate borrowers owed 6 percent of their income over £21,000 (roughly $28,000 in 2018 U.S. dollars). Loan balances are forgiven after a period that depends on borrowers’ age or when their last loan was issued—once the borrower is 65 years old, after 25 years, or, for more recent loans, after 30 years. Forgiven balances are not treated as taxable income. As in Australia, payments are collected by the national tax authority—Her Majesty’s Revenue and Customs.
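
The UK rule just described reduces to a one-line calculation: a flat percentage of income above a threshold, with no cap. Here is a minimal sketch using the 2018-2019 undergraduate parameters quoted above (9 percent of income over £25,000); the example incomes are hypothetical:

```python
def uk_style_repayment(income, threshold=25_000, rate=0.09):
    # Annual repayment: a flat share of income above the threshold, no cap.
    return rate * max(0.0, income - threshold)

print(uk_style_repayment(35_000))  # a borrower earning £35,000 repays £900 a year
print(uk_style_repayment(20_000))  # below the threshold, nothing is owed
```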

In Australia and the United Kingdom, the student loan repayments are done through the tax code: "In the United States, by contrast, student loan payments are collected by private servicers without assistance from the Internal Revenue Service."

The Transformation of Federal Spending

When the president submits a proposed budget each year, I don't pay much attention to the details. It will inevitably be declared "dead on arrival" by the opposition party, and what actually happens will be hashed out over the next year. However, I do often spend some time turning the pages of some supplementary budget documents, including the Historical Tables and the Analytical Perspectives volumes. For example, Table 6.1 of the Historical Tables is called "Composition of Outlays: 1940-2025." Here's some data from that table, showing a dramatic but not-much-discussed transformation of federal spending in the last 60 years.

A few things to notice:

1) The share of federal spending going to "Defense" has plummeted in the last 60 years, falling from more than one-half of total federal spending in 1960 to less than one-sixth of federal spending at present.

2) The share of federal spending going to "Payments to individuals" has shot up over the last 60 years, from 26% of all federal spending back in 1960 to over 70% at present. The rise in federal spending on Social Security and on health care programs plays a huge role here.

3) The last two columns of the figure show the breakdown in federal "Payments for individuals," that is, how much went directly to individuals and how much went via grants to state and local governments. Back in 1960, roughly 1 out of every 35 federal dollars spent went to state and local governments for payments to individuals; now, about 1 out of every 8 federal dollars goes to this purpose.

4) Back in 1960, if you add defense and payments to individuals, you get 78.5% of all federal spending. In 2020, if you add defense and payments to individuals, you get 85.4% of all federal spending. In other words, the share of federal outlays on everything else is getting squashed.

In some ways, this table represents what used to be called the "peace dividend"–that is, a movement of government spending from defense to non-military uses.

This fundamental transformation of federal spending toward payments to individuals just sort of happened, but it seems to me to represent a shift in how most Americans now see the central purpose of the federal government. One occasionally hears the battle cry, "If we can put an astronaut on the moon, then …" But landing a person on the moon is an old-fashioned activity from the earlier spending patterns of the federal government, because it's not about payments to individuals. If you wonder why it seems as if there are so few federal financial resources for areas like R&D, science, roads and bridges, water supply and sewage, cybersecurity, diplomacy, education at all levels, environmental monitoring, and other areas, one relevant answer is that by far the primary purpose of the federal government has become to make payments to individuals.


Winter 2020 Journal of Economic Perspectives Available Online

I am now in my 34th year as Managing Editor of the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue back to the first issue. You can download it in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Winter 2020 issue, which in the Taylor household is known as issue #131. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.

___________________
Symposium on Economics of India

"Dynamism with Incommensurate Development: The Distinctive Indian Model," by Rohit Lamba and Arvind Subramanian
India's sequencing of economic and political development has been unusual. In contrast to the West and more recently East Asia, democratization has preceded economic growth. Notwithstanding its unique path, India has grown substantially over the last four decades, pulling hundreds of millions out of poverty. The pace, durability, and stability of economic growth has been matched by few countries in the post-war period. This dynamism, though, has not been matched by development in several dimensions: a structural transformation that has skipped high-productivity manufacturing despite surplus labor, an increased spatial divergence in income despite integration in internal markets, limited convergence in education and other social metrics across castes but divergence across religions, a deep societal preference for sons that is associated with poor outcomes for women and high levels of stunting amongst children, and an environmental degradation that is severe for its level of income. The paper speculates on two immediate challenges: reviving dynamism when human capital development remains weak and the financial system is impaired and accelerating development when state capacity remains limited.
Full-Text Access | Supplementary Materials

"Why Does the Indian State Both Fail and Succeed?" by Devesh Kapur
The Indian state's performance spans the spectrum from woefully inadequate, especially in core public goods provision, to surprisingly impressive in successfully managing complex tasks and on a massive scale. It has delivered better on macroeconomic rather than microeconomic outcomes, where delivery is episodic with inbuilt exit than where delivery and accountability are quotidian and more reliant on state capacity at local levels, and on those goods and services where societal norms on hierarchy and status matter less than where they are resilient. The paper highlights three reasons for these outcomes: under-resourced local governments, the long-term effects of India's "precocious" democracy, and the persistence of social cleavage. However, claims that India's state is bloated in size and submerged in patronage have weak basis. The paper concludes by highlighting a reversal of past trends in that state capacity is improving at the micro level even as India's macro performance has become more worrisome.
Full-Text Access | Supplementary Materials

"The Great Indian Demonetization," by Amartya Lahiri
On November 8, 2016, India demonetized 86 percent of its currency in circulation. The stated objectives of the move were to seize undeclared income, to destroy counterfeit currency, to speed up formalization of the economy, and to increase the tax base. I find that the evidence over the subsequent three years suggests that the move had limited success in achieving its stated objectives. Disaggregated data suggests that demonetization did have appreciable costs in terms of lost jobs and output. However, the output costs appear to have been temporary.
Full-Text Access | Supplementary Materials

Symposium on Assimilation of Refugees

"Asylum Migration to the Developed World: Persecution, Incentives, and Policy," by Timothy J. Hatton
The European migration crisis of 2015-2016 and the migrants from Central America gathering on the US border since 2017 have created headlines and presented challenges for Western governments. In this paper, I examine the trends in, and determinants of, the number of asylum seekers applying for refugee status in the developed world. This must be understood against the background of an international policy regime that evolved in response to refugee crises and geo-political imperatives. While policy has drawn a sharp distinction between refugees and other immigrants, that difference has become increasingly blurred among asylum migrants. In this light, I examine the interplay between migration pressures, public opinion, and asylum policies in recent decades.
Full-Text Access | Supplementary Materials

"The Labor Market Integration of Refugee Migrants in High-Income Countries," by Courtney Brell, Christian Dustmann and Ian Preston
We provide an overview of the integration of refugees into the labor markets of a number of high-income countries. Discussing the ways in which refugees and economic migrants are differently selected and so might be expected to perform differently in a host country's labor market, we examine employment and wages for these groups over time after arrival. There is significant heterogeneity between host countries, but in general, refugees experience persistently worse outcomes than other migrants. While the gaps between the groups can be seen to decrease on a timescale of a decade or two, this is more pronounced in employment rates than it is in wages. We also discuss how refugees are distinct in terms of other factors affecting integration, including health, language skills, and social networks. We provide a discussion of insights for public policy in receiving countries, concluding that supporting refugees in early labor market attachment is crucial.
Full-Text Access | Supplementary Materials

Symposium on Electricity in Developing Countries

"Does Household Electrification Supercharge Economic Development?" by Kenneth Lee, Edward Miguel and Catherine Wolfram
In recent years, electrification has reemerged as a key priority in low-income countries, with a particular focus on electrifying households. Yet the microeconomic literature examining the impacts of electrifying households on economic development has produced a set of conflicting results. Does household electrification lead to measurable gains in living standards or not? Focusing on grid electrification, we discuss how the divergent conclusions across the literature can be explained by differences in methods, interventions, potential for spillovers, and populations. We then use experimental data from Lee, Miguel, and Wolfram (2019)—a field experiment that connected randomly selected households to the grid in rural Kenya—to show that impacts can vary even across individuals in neighboring villages. Specifically, we show that households that were willing to pay more for a grid electrification may gain more from electrification compared to households that would only connect for free. We conclude that access to household electrification alone is not enough to drive meaningful gains in development outcomes. Instead, future initiatives may work better if paired with complementary inputs that allow people to do more with power.
Full-Text Access | Supplementary Materials

"The Consequences of Treating Electricity as a Right," by Robin Burgess, Michael Greenstone, Nicholas Ryan and Anant Sudarshan
This paper seeks to explain why billions of people in developing countries either have no access to electricity or lack a reliable supply. We present evidence that these shortfalls are a consequence of electricity being treated as a right and that this sets off a vicious four-step circle. In step 1, because a social norm has developed that all deserve power independent of payment, subsidies, theft, and nonpayment are widely tolerated. In step 2, electricity distribution companies lose money with each unit of electricity sold and in total lose large sums of money. In step 3, government-owned distribution companies ration supply to limit losses by restricting access and hours of supply. In step 4, power supply is no longer governed by market forces and the link between payment and supply is severed, thus reducing customers' incentives to pay. The equilibrium outcome is uneven and sporadic access that undermines growth.
Full-Text Access | Supplementary Materials

Articles

"Solo Self-Employment and Alternative Work Arrangements: A Cross-Country Perspective on the Changing Composition of Jobs," by Tito Boeri, Giulia Giupponi, Alan B. Krueger and Stephen Machin
The nature of self-employment is changing in most OECD countries. Solo self-employment is increasing relative to self-employment with dependent employees, often being associated with the development of gig economy work and alternative work arrangements. We still know little about this changing composition of jobs. Drawing on ad-hoc surveys run in the UK, US, and Italy, we document that solo self-employment is substantively different from self-employment with employees, being an intermediate status between employment and unemployment, and for some, becoming a new frontier of underemployment. Its spread originates a strong demand for social insurance which rarely meets an adequate supply given the informational asymmetries of these jobs. Enforcing minimum wage legislation on these jobs and reconsidering the preferential tax treatment offered to self-employment could discourage abuse of these positions to hide de facto dependent employment jobs. Improved measures of labor slack should be developed to acknowledge that, over and above unemployment, some of the solo self-employment and alternative work arrangements present in today's labor market are placing downward pressure on wages.
Full-Text Access | Supplementary Materials

"The Economics of Maps," by Abhishek Nagaraj and Scott Stern
For centuries, maps have codified the extent of human geographic knowledge and shaped discovery and economic decision-making. Economists across many fields, including urban economics, public finance, political economy, and economic geography, have long employed maps, yet have largely abstracted away from exploring the economic determinants and consequences of maps as a subject of independent study. In this essay, we first review and unify recent literature in a variety of different fields that highlights the economic and social consequences of maps, along with an overview of the modern geospatial industry. We then outline our economic framework in which a given map is the result of economic choices around map data and designs, resulting in variations in private and social returns to mapmaking. We highlight five important economic and institutional factors shaping mapmakers' data and design choices. Our essay ends by proposing that economists pay more attention to the endogeneity of mapmaking and the resulting consequences for economic and social welfare.
Full-Text Access | Supplementary Materials

"Emi Nakamura: 2019 John Bates Clark Medalist," by Janice Eberly and Michael Woodford
Emi Nakamura is the 2019 recipient of the John Bates Clark Medal from the American Economic Association. Emi is an empirical macroeconomist whose work has studied the nature of price-setting and the effects of monetary and fiscal policies, among other issues, and has been notable for using less aggregated data, while addressing central questions about the macroeconomy. We describe Emi's key research contributions, with particular emphasis on those identified by the Honors and Awards Committee of the American Economic Association in her Clark Medal citation, as well as her broader contributions to the field of economics.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials

South Africa: Mired in Stagnation

South Africa is a political miracle: a country that managed to negotiate its way peacefully to the end of apartheid rule through a democratic election in 1994. After that, South Africa's economy mostly grew until the Great Recession, but it has now been largely stagnant for a decade. Here's a figure from the IMF showing per capita GDP in South Africa since 1993. (Values are index numbers set relative to a value of 100 for 2014.)
[Chart: South Africa, per capita GDP since 1993 (index, 2014 = 100)]
Other IMF stats show that overall unemployment is 25% and youth unemployment exceeds 50%. The combination of high-income cities like Johannesburg and Pretoria with low-income urban slums and rural areas has made South Africa one of the countries with the highest levels of income inequality.

The IMF just completed an evaluation of South Africa's economic situation a few weeks ago. Also, in August 2019, South Africa's Treasury department published a list of suggested reforms. What are some themes that emerge about what has gone wrong and what needs to be done?

1) Many of South Africa's state-owned enterprises (SOEs) seem to be in disastrous shape, and the biggest disaster is ESKOM, the electricity public utility. The IMF writes:

Most SOEs face elevated costs arising from bloated wage bills and costly procurement. Cost increases have outstripped tariff increases and cuts in capital expenditure, and debt service burden has risen, keeping SOEs net cash flows negative. Eskom is by far the largest SOE and its position is particularly critical, with an operational balance insufficient to service its high debt—around 10 percent of GDP. … Corruption, delays in debt-financed investments, and expensive procurement have generated cost-overruns and left Eskom reliant on outdated plants vulnerable to breakdowns (the average age of the fleet is 37 years). 

Other especially problematic state-owned enterprises are South African Airways and the passenger railway company PRASA.
2) In substantial part because of subsidies to the state-owned enterprises, South Africa's government is already running large fiscal deficits, which of course makes it difficult to focus resources on social spending. The IMF writes:

In the early and mid-2000s, annual output growth averaged about 4 percent, fiscal deficits turned to small surpluses, and public debt declined to 27 percent of GDP. By contrast, starting in the late-2000s, private investment’s contribution to growth fell considerably, and total factor productivity (TFP) growth became negative, dampening growth to slightly above 1 percent. Following the countercyclical easing at the time of the global financial crisis, fiscal deficits have remained wide at around 4½ percent of GDP, more than doubling public debt to close to 60 percent of GDP. 

IMF projections are for the annual deficits to widen further, in substantial part because of promised subsidies to the state-owned enterprises, but also because of interest payments on past borrowing, much of which goes to international investors outside South Africa.
3) Product markets in South Africa strongly favor large incumbent firms, and choke off new competitors. The IMF again: 

Several economic sectors, including manufacturing and banking, are dominated by a handful of big players with significant market power. High concentration has inhibited the emergence of smaller firms, which are powerful job creators in other EMs [emerging markets]. SMEs [small and medium enterprises] have shrunk in importance relative to large firms in the past decade. Staff analysis suggests that rising input costs and markups are associated with declining economic growth. This is clearly the case of large SOEs that pass-on high costs to businesses, thus sustaining elevated price levels and reducing the economy's competitiveness. Firms subject to restrictive procurement and labor regulations also suffer from high costs and low productivity. A distributional analysis suggests that the poor are more affected as they face both fewer employment opportunities and higher prices.

One striking comparison looks at "mark-ups" across countries: that is, how far above the marginal cost of production are the prices that firms charge? Here's a study that looks at the change in mark-ups over time, compared to the rise in marginal costs.
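The mark-up in such comparisons is simple arithmetic: price expressed as a multiple of marginal cost. Here is a minimal sketch; the numbers are invented for illustration, not drawn from the study discussed above.

```python
# Illustrative mark-up calculation; the figures below are invented,
# not taken from the cross-country mark-up study mentioned above.

def markup_ratio(price, marginal_cost):
    """Price as a multiple of marginal cost (1.0 = no mark-up)."""
    return price / marginal_cost

# A firm charging 130 for a good with marginal cost 100
# has a mark-up ratio of 1.3, i.e. prices 30% above cost.
print(markup_ratio(130, 100))  # → 1.3
```

Studies of this kind typically track how such ratios change over time across an economy's firms; a rising average ratio with flat marginal costs signals growing market power.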

Here's another figure looking at concentration in the retail industry, which is often an industry that can be friendly to new entrants. South African retail is far more concentrated than in the comparison countries.

4) South Africa is experiencing a labor market mismatch, where much of the job growth is in higher-skilled jobs and much of the unemployment is among lower-skilled workers. Moreover, requiring that state-owned enterprises pay high wages means that these firms try not to hire lower-skilled workers. As the IMF writes:

South Africa has a higher level of unemployment and lower labor force participation than both regional and emerging economies. With skill mismatches and economic growth tilted toward the most sophisticated sectors (finance, information technology, and specialized business services), the bulk of job creation benefits high-skilled workers as opposed to low-skilled workers and labor-intensive industries including agriculture, tourism, and manufacturing. Further, labor cost increases exceed productivity improvements—largely a reflection of the centralized wage bargaining that transmits labor cost increases to the rest of the economy—systematically keeping demand for labor (including for new entrants) significantly below employment needs. Firm closures further worsen the dynamics. … Regulatory constraints that inhibit firms’ ability to hire on a need basis limit employment opportunity, particularly for the inexperienced and the youth. To justify payment of centrally bargained wage levels, firms prefer to hire skilled and experienced workers, who represent a small percentage of the population.

This is why one of the main recommendations from the report of South Africa's Treasury focuses on "prioritizing labour-intensive growth in sectors such as agriculture and services, including tourism."

5) In the long run, a key element for South Africa will be its education system and other methods of getting future employees the skills they need. South Africa's education system is not performing well. From the IMF, here's a figure showing spending on education on the horizontal axis and performance on the international PISA tests on the vertical axis. South Africa's performance lags far behind other countries with a similar level of spending.

The South African Treasury, before starting its discussion of reforming state-owned enterprises and all the rest, first emphasizes the importance of education in its report:

However, any attempt to raise South Africa’s potential growth rate must include progress on the fundamental building blocks of long-run sustainable growth. First, there must be an emphasis on improving educational outcomes throughout the educational life-cycle … The South African education system, which other countries have used to promote equality of opportunity, perpetuates inherited socio-economic disadvantage: if your parents are poor, the chances of your being poor are about 90 per cent (Finn et al. 2016). The lack of a transformative education system is a key factor in this persistence. Our educational outcomes are poor, even when compared to other less well-resourced countries in the region. This is a major driver of intergenerational inequality and inhibits the inclusivity of growth and global competitiveness. Since the highest return to human capital investments are associated with the earliest interventions, an educational life-cycle approach must include a strong emphasis on early childhood development, which has demonstrated the ability to: (i) improve long-term health outcomes (Campbell et al. 2014); (ii) boost earnings by as much as 25 per cent (Gertler et al. 2014); and (iii) generate a rate of return on investment of 7 to 10 per cent through better outcomes in education, health, and productivity (Heckman et al. 2010). Evidence of inadequate teacher content knowledge (see Venkat and Spaull 2015) and significant reading deficits in primary schools (see Spaull and Kotze 2015) points to the need for a comprehensive reading plan for primary school learners drawing on successful experiences such as the provision of reader anthologies.

Second, we need to continue to implement youth employment interventions, including training opportunities that remove barriers to entering the labour market and apprenticeships based on close cooperation between technical, vocational, and other training institutions and the private sector to ensure that training needs are demand-driven (Bhorat et al. 2014). Investing in the capabilities and educational and health outcomes of young people is unlikely to yield a dividend unless the youth are absorbed by labour markets (Mlatsheni 2014). 

6) A separate report from the IMF also added a discussion of the problem of crime in South Africa. For illustration, here's the homicide rate in South Africa, which has dropped a bit since 2000 but remains very high.

Surveys of businesses in South Africa identify the crime rate as one of the biggest problems.

International comparisons of businesses also suggest that crime is a particular problem in South Africa.

Overall, the in-depth discussion of policy steps by South Africa's Treasury sums up this way:

These growth reforms are organized according to the following themes: (i) modernizing network industries; (ii) lowering barriers to entry and addressing distorted patterns of ownership through increased competition and small business growth; (iii) prioritizing labour-intensive growth in sectors such as agriculture and services, including tourism; (iv) implementing focused and flexible industrial and trade policy; and (v) promoting export competitiveness and harnessing regional growth opportunities. We estimate the economy-wide impact of the proposed interventions over time based on when they can realistically be implemented, and find they can raise potential growth by 2–3 percentage points and create over one million job opportunities.

There used to be a hope that South Africa's economy could provide both an example and an engine for lifting standards of living across sub-Saharan Africa. Back around 1995, for example, South Africa had about 7% of the total population of sub-Saharan Africa, but its GDP was about one-third of total GDP for the region. By 2018, however, South Africa had about 5% of the population of sub-Saharan Africa and 21% of the region's total GDP. The blunt truth seems to be that South Africa's government has not delivered in the last decade on many important outcomes: not in education and training, not in running state-owned enterprises, not in providing a climate for new businesses to start, not in reducing inequality, not in getting crime under control, and not in keeping government debt manageable.
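A back-of-the-envelope calculation with these rounded shares makes the relative decline concrete. This is only a rough sketch using the approximate percentages quoted above, not precise national accounts data.

```python
# Rough check of South Africa's declining weight in sub-Saharan
# Africa, using only the approximate shares quoted in the text.

def relative_gdp_per_capita(gdp_share, population_share):
    """GDP per capita implied relative to the regional average."""
    return gdp_share / population_share

# ~1995: about 7% of the region's people, about one-third of its GDP.
print(round(relative_gdp_per_capita(0.33, 0.07), 2))  # → 4.71
# 2018: about 5% of the population, 21% of regional GDP.
print(round(relative_gdp_per_capita(0.21, 0.05), 2))  # → 4.2
```

South Africa's implied per capita income remains several times the regional average, but both its GDP share and that relative ratio have slipped, consistent with the stagnation described above.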

Global R&D: The Stagnant US Position

Research and development isn't enough by itself. New discoveries need to be brought into the economy in the form of new companies, new products, and new jobs. But it matters. A long-standing concern among economists is that a market-oriented economy may tend to underinvest in R&D, because even with intellectual property like patents and trade secret law, an innovator captures on average only a modest share of the social benefits from R&D. Thus, a variety of estimates suggest that the social return from more R&D spending is 60%, or that the US should be aiming over time to double its R&D spending.

In a global context, US efforts to invest in R&D look stagnant. Here are some figures from a January 2020 report of the National Science Foundation and the National Science Board, called "The State of U.S. Science & Engineering 2020."

This figure shows total domestic spending on R&D (government, private-sector, nonprofits). The US leads the way. The purple line is China, which surpassed Japan about a decade ago and Europe about five years ago.
If you look at the growth rate of R&D from 2000-2017, you can see that China is the most obvious area catching up to the US, but certainly not the only one.
As a result of these ongoing shifts, the US used to be the preeminent region for R&D spending, but the primary geographical home of most global R&D is now the East and South Asia region.
One issue is that the US spends about 2.5% of GDP on R&D in most years, give or take a few tenths of a percent. Germany, Japan, and South Korea spend more. China spends a lower share of GDP on R&D, but the share has been rising and of course China\’s GDP has also been growing quite rapidly in recent decades. 


In the US, government spending on R&D has been pretty flat for the last decade or so; instead, it has been business spending on R&D leading the way. Business involvement in R&D spending is clearly a good thing, because it suggests that businesses are seeing ways to bring new discoveries into their day-to-day operations. However, there are also concerns that when it comes to research and development, business can be heavier on the "D" and lighter on the "R." The giant corporate laboratories of the past like AT&T's Bell Labs, Xerox's Palo Alto Research Center, IBM's Watson Labs, and DuPont's Purity Hall have diminished in scope or closed altogether. Relatively few modern companies finance research in basic science, or in long-horizon, high-risk projects that may turn out to be central to whole new industries.


When confronted with these kinds of issues, a standard US response is to raise suspicions that the quality of R&D being done in China or across other countries of east and south Asia may not be very high. It's of course hard to measure the quality of research, but one method is to look at whether research articles are heavily cited by follow-up research. The NSF report explains:

The impact of an economy’s S&E [science & engineering] research can be compared through the representation of its articles among the world’s top 1% of cited articles, normalized to account for the size of each country’s pool of S&E publications. This normalized value is referred to as an index and is similar to a standardized score. For example, if a country’s global share of top articles is the same as its global share of all publication output, the index is 1.0. The U.S. index was 1.9 in 2016, meaning that its share of the top 1% of cited articles was about twice the size of its share of total S&E articles (Figure 22). Between 2000 and 2016, the EU index of highly cited articles grew from 1.0 to 1.3 while China’s index more than doubled, from 0.4 to 1.1, indicating rising impact from both areas.

In short, this metric suggests that US research efforts are more likely to be in the top 1% of the research literature. It also suggests that the gap is closing.
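The NSF's normalized index is simply a ratio of shares, as the quoted description explains. Here is a minimal sketch of the computation; the article counts are invented for illustration, while the definition follows the report's wording.

```python
# Sketch of the NSF's normalized citation index described above:
# (country's share of the world's top-1%-cited articles) divided by
# (country's share of all S&E articles). The counts are invented.

def citation_index(top_cited, total, world_top_cited, world_total):
    """An index of 1.0 means the top-cited share matches the overall share."""
    return (top_cited / world_top_cited) / (total / world_total)

# A country with 10% of the world's top-cited articles but only
# 5% of all articles gets an index of 2.0, close to the US
# figure of 1.9 reported for 2016.
print(citation_index(10_000, 50_000, 100_000, 1_000_000))  # → 2.0
```

Normalizing by total publication output is what lets the index compare a huge producer like China with much smaller research systems on equal footing.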

I often see proposals for the US to focus on building its transportation infrastructure, like roads, bridges, railroads, and airports. One can certainly make a reasonable case for such investments. But I also suspect that transportation spending is not going to be the main driver for leading global economies for the remaining four-fifths of the 21st century. A serious national conversation on how best to expand US R&D spending substantially is overdue.