Interview with Elhanan Helpman

Douglas Clement has a characteristically excellent "Interview with Elhanan Helpman" in the December 2012 issue of The Region, published by the Federal Reserve Bank of Minneapolis. The main focuses of the interview are "new growth theory, new trade theory and trade (and policy) related to market structure." Here's Helpman:

On the origins of "new trade theory"

"When I was a student, the type of trade theory that was taught in colleges was essentially based on Ricardo’s 1817 insight, Heckscher’s 1919 insights and then Ohlin’s work, especially as formulated by [Paul] Samuelson later on. This view of trade emphasized sectoral trade flows. So, one country exports electronics and imports food, and another country exports chemicals and imports cars. This was the view of trade. The whole research program was focused on how to identify features of economies that would allow you to predict sectoral trade flows. In those years, there was actually relatively little emphasis on Ricardian forces, which deal with relative productivity differences across sectors, across countries, and there was much more emphasis on differences across countries in factor composition. …

Two interesting developments in the 1970s triggered the new trade theory. One was the book by Herb Grubel and Peter Lloyd in which they collected a lot of detailed data and documented that a lot of trade is not across sectors, but rather within sectors. Moreover, that in many countries, this is the great majority of trade. So, if you take the trade flows and decompose them into, say, the fraction that is exchanging [within sectors] cars for cars, or electronics for electronics, versus [across sectors] electronics for cars, then you find that in many countries, 70 percent—sometimes more and sometimes less—would have been what we call intra-industry trade, rather than across industries….

The other observation that also started to surface at the time was that when you looked at trade flows across countries, the majority of trade was across the industrialized countries. And these are countries with similar factor compositions. There were obviously differences, but they were much smaller than the differences in factor composition between the industrialized and the less-developed countries. Nevertheless, the amount of trade between developed and developing countries was much smaller than among the developed countries.

This raised an obvious question. If you take a view of the world that trade is driven by [factor composition] differences across countries, why then do we have so much trade across countries that look pretty similar? …

Then, on the theoretical front, monopolistic competition was introduced forcefully by both Michael Spence in his work, which was primarily about industrial organization, and [Avinash] Dixit and [Joseph] Stiglitz in their famous 1977 paper. These studies pointed out a way to think about monopolistic competition in general equilibrium. And trade is all—or, at least then, was all—about general equilibrium.

So combining these new analytical tools with the empirical observations enabled scholars to approach these empirical puzzles with new tools. And this is how the new trade theory developed."
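The intra-industry trade share Helpman describes is conventionally measured with the Grubel-Lloyd index, from the Grubel and Lloyd book mentioned above. A minimal sketch, with invented trade flows:

```python
def grubel_lloyd(exports: float, imports: float) -> float:
    """Grubel-Lloyd index for one industry: the share of that industry's
    trade that is two-way (intra-industry). 1.0 means perfectly balanced
    two-way trade; 0.0 means purely one-way trade."""
    total = exports + imports
    if total == 0:
        return 0.0
    return 1.0 - abs(exports - imports) / total

# Invented sectoral flows (exports, imports): cars trade is nearly
# balanced, so almost all of it is intra-industry; electronics is
# mostly one-way.
flows = {"cars": (50.0, 45.0), "electronics": (30.0, 5.0)}
for sector, (x, m) in flows.items():
    print(sector, round(grubel_lloyd(x, m), 3))
```

A country-level figure like the "70 percent" Helpman cites is a trade-weighted average of such industry indexes.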

On trade and inequality: an inverted U-shape?

"Most of the work on trade and inequality in the neoclassical tradition was focused on inequality across different inputs. So, for example, skilled workers versus unskilled workers, or capital versus labor, and the like. There was a lot of interest in this issue with the rise in the college wage premium in the United States, which people then found happened also in other countries, including less-developed countries. …The other interesting thing that happened was that labor economists who worked on these issues also identified another source of inequality. They called it “residual” wage inequality, which is to say, if you look at wage structures and clean up wage differences across people for differences in their observed characteristics, such as education and experience, there is a residual wage difference, and wages are still quite unequal across people. In fact, it’s a big component of wage inequality.

Our aim in this research project, which has lasted now for a number of years, was to try to see the extent to which one can explain this inequality in residual wages by trade. It wasn’t an easy task, obviously, but the key theoretical insight came from the observation that once you have heterogeneity in firm productivities within industries, you might be able to translate this also into inequality in wages that different firms pay. …We tried to combine these insights, labor market frictions on the one hand and trade and firm heterogeneity on the other …We managed eventually, after significant effort, to build a model that has this feature but also maintains all the features that have been observed in the data sets previously. It was really interesting that the prediction of this model was that if you start from a very closed economy and you reduce trade frictions, then initially inequality is going to rise. However, once the economy is open enough, in a well-defined way, then additional reductions in trade friction reduce the inequality. Now, it is not clear that this is a general phenomenon, but our analytical model generated it. … [I]t’s an inverted U shape …"

On how the gains from research and development spill across national borders

"We computed productivity growth in a variety of OECD [Organisation for Economic Co-operation and Development] countries in this particular paper. We constructed R&D capital stocks for countries … Then we estimated the impact of the R&D capital stocks of various countries on their trade partners’ productivity levels. And we found substantial spillovers across countries. Importantly, in those data, these spillovers were related to the trade relations between the countries. And we showed that you gain more from the country that does more R&D if you trade with this country more. This produced a direct link between R&D investment in different countries and how trading partners benefit from it. …

 The developing countries don’t do much R&D. The overwhelming majority of R&D is done in industrialized countries, and this was certainly true in the data set we used at the time. So we asked the following question: If you look at developing countries, they trade with industrialized countries. Do they gain from R&D spillovers in the industrialized countries, and how does that gain depend on their trade structure with these industrialized countries? We showed empirically that the less-developed countries also benefited from R&D spillovers. And the more they trade with industrialized countries that engage heavily in R&D, the more they gain. …

One of the important findings—which analytically is almost obvious, but many people miss it—is that, if you have a process that raises productivity, such as R&D investment, then this also induces capital accumulation. So then, the contribution of R&D to growth comes not only from the direct productivity improvement, but also through the induced accumulation of capital. When you simulate the full-fledged model with these features, you get a very clear decomposition. You can see how much is attributable to each one.

With this, we could handle a relatively large number of countries in all different regions of the world, and [run some] interesting simulations. We could ask, for example, if all the industrialized countries raise their investment in R&D by an additional half percent of gross domestic product, who is going to benefit from it? Well, you find that the industrialized countries benefit from it a lot, but the less-developed countries benefit from it also a lot. It was still the case that the industrialized countries would benefit more, so in some way it broadened the gap between the industrialized and the less-developed countries. Nevertheless, all of them moved up significantly."
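The spillover estimates Helpman describes rest on building a "foreign R&D capital stock" for each country. A common construction in that literature is an import-share-weighted sum of trade partners' domestic R&D stocks; here is a sketch with invented country names and figures (the weighting in the actual papers differs in details):

```python
# Import-share-weighted foreign R&D capital stock, in the spirit of the
# spillover regressions described above. All names and numbers invented.
domestic_rd_stock = {"A": 500.0, "B": 300.0}  # partners' own R&D stocks

# A developing country "C" imports 40 from A and 10 from B, so its
# import shares are 0.8 and 0.2.
imports_of_c = {"A": 40.0, "B": 10.0}

def foreign_rd_stock(import_flows, rd_stocks):
    """Weight each partner's R&D stock by its share in total imports."""
    total = sum(import_flows.values())
    return sum((v / total) * rd_stocks[p] for p, v in import_flows.items())

print(foreign_rd_stock(imports_of_c, domestic_rd_stock))  # 0.8*500 + 0.2*300 = 460.0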

Are Loss Leaders an Anticompetitive Free Lunch?

All experienced shoppers understand the concept of a "loss leader." A store offers an exceptionally low price on a particular item that is likely to be popular–indeed, the price is so low that on that item, the seller may lose money. But by advertising this "loss leader" item, the store hopes to bring in consumers who will then purchase other items as well that are not marked down. Of course, from the consumer point of view, the challenge is whether, in buying the loss leader item and making other purchases at that store, you actually end up with a better deal than if you bought the loss leader and then went to another store–or perhaps even did all your shopping in one stop at a different store.

The terminology of loss leaders is apparently nearly a century old. The earliest usage given in the Oxford English Dictionary is from a 1922 book called Chain Stores, by W.S. Hayward and P. White, who wrote: "Many chains have a fixed policy of featuring each week a so-called ‘loss leader’. That is, some well known article, the price of which is usually standard and known to the majority of purchasers, is put on sale at actual cost to the chain or even at a slight loss..on the theory..that people will be attracted to this bargain and buy other goods as well. Loss leaders are often termed ‘weekly specials’."

But every economist knows at least one example of the classic loss leader from the late 19th century, the "free lunch." At that time, a number of bars and saloons would advertise a "free lunch," but customers were effectively required to purchase beer or some other drink. If you tried to eat the free lunch without purchasing a drink, you would likely be thrown out. Thus, the origins of the TANSTAAFL abbreviation: "There ain't no such thing as a free lunch."

What I had not known is that there is a serious argument in the industrial organization literature over whether loss leaders should be treated by antitrust authorities as an anticompetitive practice. In 2002, for example, Germany's highest court upheld a decision by Germany's Federal Cartel Office that Wal-Mart was required to stop selling basic food items like milk and sugar below cost as a way of attracting customers. Ireland and France have also been known for their fairly strict laws prohibiting resale below cost.

The theory of "Loss Leading as an Exploitative Practice" is laid out by Zhijun Chen and Patrick Rey in the December 2012 issue of the American Economic Review. (The AER is not freely available online, but many academics will have access to this somewhat technical article through library subscriptions.) Their approach has two kinds of sellers: large firms that sell a wide range of products, and smaller firms that sell a more limited range of products. It also has some buyers who have a high time cost of shopping–and thus prefer to shop at one or a few locations–along with other buyers who have a lower time cost of shopping and thus are more willing to shop at many locations. They then build up a mathematical model in which large firms use loss leaders as a way of sorting consumers and attracting those who are likely to do all their shopping in one place. Once they have those consumers inside the store, they can then charge higher prices for other items. Thus, the result of allowing "loss leaders" in this model is that a number of consumers end up paying more, and large stores with a wide product range may tend to drive smaller stores out of the market.

Of course, the fact that it is possible to put together a certain theoretical model with this outcome doesn't prove that it is the only possible outcome, or that it's the outcome that should be of greatest practical concern. Back in 2007, the OECD Journal of Competition Law and Policy hosted a symposium on "Resale Below Cost Laws and Regulations." The articles can be read freely on-line with a slightly clunky browser here, or again, many academics will have on-line access through library subscriptions. The general tone of these articles is that loss leaders should not be viewed as an anticompetitive practice. In no particular order, and in my own words, here are some of the points that are made:

  • In general, business practices that reduce prices of at least some items to consumers should be presumptively supported by antitrust authorities, unless there is a very strong case against them. Regulators should spend little time second-guessing prices that are too low, and more time looking at prices that are too high or practices that are clearly unfair to consumers.
  • There are a number of reasons why loss leaders might tend to encourage competition. Offering a loss leader can encourage consumers to overcome their inertia of buying the same things at the same prices and try out a new product or a new store. Sometimes producers may want to reward customers who are especially loyal or buy in especially large volumes. It may be far more effective for a store to advertise low prices on a few items than to advertise that "everything in the store is on average 2% cheaper than the competition." Loss leaders may be linked to other things the seller desires, like becoming a provider of credit to the buyer or raising the chance of getting detailed feedback from the buyer. Loss leaders may be especially useful to new entrants in markets, seeking to gain a foothold.
  • Evidence from Ireland suggests that the grocery products where loss leaders are prohibited tend to have higher or rising prices, compared with other products. Since the products where loss leaders are prohibited tend to be more essential products, the prohibition on selling such items below cost tends to weigh most heavily on those with lower income levels.
  • In general, there has been a trend in retailing toward big-volume, low-price retailers. But this trend doesn't seem to have been any slower in places where limitations on resale-below-cost were in place. And if the policy goal is to help small retailers, there are likely to be better-targeted and less costly approaches than preventing loss leaders. Indeed, small firms may in some cases wish to entice buyers by offering loss leaders themselves.
  • It's not clear how to apply prohibitions against loss leaders to vertically integrated firms, since they have considerable ability to use accounting rules to reduce the "cost" of production–and then to resell at whatever price they wish. Even in a firm that is not vertically integrated, invoices for what is purchased can often include various discounts and allowances, and it is an administrative challenge for rules preventing resale-below-cost to take these into account. If a firm buys products at a high wholesale price, and then the market price drops, presumably a resale-below-cost rule would prevent the firm from selling its products at all.
  • It's worth noting that laws preventing loss leaders are not the same as laws that block "predatory pricing," where the idea is to drive a competitor out of business with very low prices and then to charge higher prices on everything. This intertemporal scenario is quite different from what happens in a prohibition of loss leaders.
  • Allowing loss leaders doesn't mean allowing deceptive claims, where for example there is a very low price advertised but the item is immediately out of stock, or of unexpectedly low quality.

Market competition happens along many dimensions at once. I lack confidence that government regulators with a mandate to block overly low prices will end up acting in a way that benefits consumers.

Size of Global Capital Markets

While browsing through the Statistical Appendix to the October 2012 Global Financial Stability Report from the IMF (and yes, it's the sort of thing I do), I ran across these numbers on the size of the global financial sector. In particular, the sum of the value of global stocks, bonds, and bank assets is 366% of the size of global GDP.

Of course, there is some mixing of apples and oranges here: bank assets, debt, and equities may overlap in various ways. But contemplate the sheer size of the $255 trillion total!
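Those two figures can be checked against each other with simple arithmetic, using only the rounded totals quoted above:

```python
total_assets_tn = 255.0  # global stocks + bonds + bank assets, $ trillions
share_of_gdp = 3.66      # the 366 percent figure, expressed as a ratio

# If $255 trillion is 366 percent of world GDP, world GDP is about $70 trillion.
implied_world_gdp_tn = total_assets_tn / share_of_gdp
print(round(implied_world_gdp_tn, 1))  # roughly 69.7 ($ trillions)
```

That implied world GDP of roughly $70 trillion is in line with standard estimates for 2011-2012, so the two numbers hang together.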

Given this size, and given the financial convulsions that have rocked the world economy in the last few years, it seems time to remember an old argument and put it to rest: the argument over whether finance should be considered part of economics. For me, the most memorable statement of that dispute occurred when Harry Markowitz, who would later win the Nobel Prize in economics for his role in developing portfolio theory, was defending his doctoral dissertation on this work back in 1955.

Markowitz had taken a job at RAND, and so was flying back to the University of Chicago to defend his dissertation. He often told this story in interviews; here's a version from a May 2010 interview.

"I remember landing at Midway Airport thinking, 'Well, I know this field cold. Not even Milton Friedman will give me a hard time.' And, five minutes into the session, he says, 'Harry, I read your dissertation. I don't see any problems with the math, but this is not a dissertation in economics. We can't give you a Ph.D. in economics for a dissertation that isn't about economics.' And for most of the rest of the hour and a half, he was explaining why I wasn't going to get a Ph.D. At one point, he said, 'Harry, you have a problem. It's not economics. It's not mathematics. It's not business administration.' And the head of my committee, Jacob Marschak, shook his head, and said, 'It's not literature.'

"So we went on with that for a while and then they sent me out in the hall. About five minutes later Marschak came out and said, 'Congratulations, Dr. Markowitz.' So, Friedman was pulling my leg. At the time, my palms were sweating, but as it turned out, he was pulling my leg …"

It's not clear to me that Friedman was just goofing on Markowitz. Yes, Friedman wasn't willing to block this dissertation. However, in a later interview, Friedman did not recall the episode but said: "What he [Markowitz] did was a mathematical exercise, not an exercise in economics."

But Markowitz had the last word after winning the Nobel prize. In his acceptance lecture back in 1990, he ended by telling a version of this story, and then said: "As to the merits of his [Milton Friedman's] arguments, at this point I am quite willing to concede: at the time I defended my dissertation, portfolio theory was not part of Economics. But now it is."

The old view of economics and finance was that, except perhaps for a few exceptions like major bubbles, the real economy was the dog and the financial economy was the tail–and the tail couldn't wag the dog. But the problems of the modern world economy are incomprehensible without taking finance into account.

Classroom Evaluation of K-12 Teachers

Proposals for evaluating the classroom performance of K-12 teachers are typically based on hopes and fears, not on actual evidence. Those who support such evaluations hope to improve the quality of teaching by linking evaluations to teacher pay and jobs. The teacher unions who typically oppose such evaluations fear that they will be used arbitrarily, punitively, even whimsically–in some way that will make teaching an even harder job.

The dispute seems intractable. But in the December 2012 issue of the American Economic Review, Eric S. Taylor (no relation!) and John H. Tyler offer actual real-world evidence on "The Effect of Evaluation on Teacher Performance." (The AER is not freely available on-line, but many in academia will have access through library subscriptions.)

Taylor and Tyler have evidence on a sample of a little more than 100 mid-career math teachers in the Cincinnati Public Schools in fourth through eighth grade. These teachers were hired between 1993–1994 and 1999–2000. Then in 2000, a district planning process called for these teachers to be evaluated in a year-long classroom observation–based program, which then occurred some time between 2003–2004 and 2009–2010. The order in which teachers were chosen for evaluation, and the year in which the evaluation occurred, were for practical purposes random. The evaluation involved observation of actual classroom teaching. But the researchers were also able to collect evidence on math test scores for students. Although these scores were not part of the teacher evaluation, the researchers could then look to see whether the teacher evaluation process affected student scores. (Indeed, one of the reasons for looking at math teachers was that scores on a math test provide a fairly good measure of student performance, compared with other subjects.) Again, these were mid-career teachers who typically had not been evaluated in any systematic way for years.

Here's how the evaluation process worked: "During the TES [Teacher Evaluation System] evaluation year, teachers are typically observed in the classroom and scored four times: three times by an assigned peer evaluator—high-performing, experienced teachers who are external to the school—and once by the principal or another school administrator. Teachers are informed of the week during which the first observation will occur, with all other observations being unannounced. The evaluation measures dozens of specific skills and practices covering classroom management, instruction, content knowledge, and planning, among other topics. Evaluators use a scoring rubric, based on Charlotte Danielson’s Enhancing Professional Practice: A Framework for Teaching (1996), which describes performance of each skill and practice at four levels: “Distinguished,” “Proficient,” “Basic,” and “Unsatisfactory.” …After each classroom observation, peer evaluators and administrators provide written feedback to the teacher, and meet with the teacher at least once to discuss the results."

A common pattern in these kinds of subjective evaluations is that the evaluators are often pretty tough in grading and commenting on lots of specific skills and practices, but they still tend to give a high overall grade. That pattern occurred here as well. The authors write: "More than 90 percent of teachers receive final overall TES scores in the “Distinguished” or “Proficient” categories. Leniency is much less frequent in the individual rubric items and individual observations …"

In theory, teachers who were fairly new to the district could lose their jobs if their evaluation scores were low enough, and those who scored very high could get a raise. But because almost everyone ended up with fairly high overall scores, the practical effects of this evaluation on pay and jobs were pretty minimal.

Nevertheless, student performance not only went up during the year that the evaluation happened, but student performance stayed higher for teachers who had been evaluated in previous years. "The estimates presented here—greater teacher productivity as measured by student achievement gains in years following TES evaluation—strongly suggest that teachers develop skill or otherwise change their behavior in a lasting manner as a result of undergoing subjective performance evaluation in the TES system. Imagine two students taught by the same teacher in different years who both begin the year at the fiftieth percentile of math achievement. The student taught after the teacher went through comprehensive TES evaluation would score about 4.5 percentile points higher at the end of the year than the student taught before the teacher went through the evaluation. … Indeed, our estimates indicate that postevaluation improvements in performance were largest for teachers whose performance was weakest prior to evaluation, suggesting that teacher evaluation may be an effective professional development tool."

By the standards typically prevailing in K-12 education, the idea that teachers should experience an actual classroom evaluation consisting of four visits in a year, maybe once a decade or so, would have to be considered highly interventionist–which is ludicrous. Too many teachers perceive their classroom as a private zone where they should not and perhaps cannot be judged. But teaching is a profession, and the job performance of professionals should be evaluated by other professionals. The Cincinnati evidence strongly suggests that detailed, low-stakes, occasional evaluation by other experienced teachers can improve the quality of teaching over time. Maybe if some of the school reformers backed away from trying to attach potentially large consequences to such evaluations in terms of pay and jobs, at least a few teachers' unions would be willing to support this step toward a higher quality of teaching.

Note: Some readers might also be interested in this earlier post from October 3, 2011, "Low-Cost Education Reforms: Later Starts, K-8, and Focusing Teachers."

Will the U.S. Dollar Lose its Preeminence?

I get asked once a month or so whether the U.S. dollar is likely to lose its global preeminence. John Williamson has a nice discussion of this topic in "The Dollar and US Power," which is available at the website of the Peterson Institute for International Economics.

Williamson first points out that the dollar is indeed the preeminent global currency (citations omitted): "The US dollar is absolutely dominant as the intervention currency: Most countries intervene in nothing except dollars. It is the major unit in which about 60 percent of official foreign exchange reserves are held … It was estimated in the past that close to a half of all international trade was invoiced in dollars (Hartmann 1998), as opposed to under 12 percent of world trade that involved the United States in 2011. So far as foreign exchange trading is concerned, most takes place against the dollar, resulting in a share of foreign exchange trading of about 85 percent … For the moment, the dollar is quite unrivalled."

How does the preeminence of the U.S. dollar benefit the U.S. economy? Williamson points out the classic tradeoff. On one side, the advantages of "seigniorage"; on the other side, an inability to control one's own exchange rate. Here's Williamson on seigniorage:

"The standard economic analysis holds that the United States gains by international use of the dollar because of the collection of seigniorage. Historically the term seigniorage meant the ability of the sovereign to make a profit when it minted metal into money. In our context the term is used to signify the ability to make a profit from international holding of the currency. There are generally reckoned to be two sources of profit from foreign holdings of the dollar. One arises from holdings of dollar bills (in practice, $100 bills) by foreigners (in practice, mainly drug dealers): In effect, the US gains an interest-free loan to the extent that foreigners hold dollar bills. The other arises from the fact that many foreigners wish to hold dollar assets. The preferred form of assets are US Treasury bills, and therefore the interest rate on US Treasury bills is somewhat lower than it otherwise would be; and the saving is regarded as a part of seigniorage."

However, the gains from the interest-free loan implicit in foreign holdings of U.S. currency, along with the slightly lower interest rate the U.S. Treasury pays on its borrowing, are not large. The tradeoff is that when everyone else is using your currency, the exchange rate value of that currency will be largely determined in global markets.
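To see why these gains are modest, here is a rough illustration of the two seigniorage channels from the quote. Every number below is hypothetical, chosen only to convey orders of magnitude:

```python
# Back-of-envelope seigniorage: the two channels Williamson describes.
# All figures below are hypothetical, for illustration of scale only.

foreign_currency_holdings = 500e9  # $100 bills held abroad, in dollars
interest_rate = 0.03               # forgone interest = the interest-free loan

foreign_treasury_holdings = 5e12   # foreign-held Treasury securities, dollars
yield_discount = 0.002             # 20 basis points lower yield from foreign demand

currency_gain = foreign_currency_holdings * interest_rate   # about $15 billion/year
treasury_gain = foreign_treasury_holdings * yield_discount  # about $10 billion/year
print(f"${(currency_gain + treasury_gain) / 1e9:.0f} billion per year")
```

Even with generous assumptions, the total is a few tens of billions of dollars a year, small relative to a roughly $15 trillion U.S. economy.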

Williamson also tackles the question of whether the preeminence of the U.S. dollar gives the U.S. government additional power in the practical world of power politics. He writes: "I have the impression that the additional national power which stems from commanding an international currency tends to be exaggerated by strategic thinkers. One needs to designate the specific mechanisms which would be involved rather than assuming the result."

The one possible exception, he argues, is that a U.S. dollar standard might make it easier for the U.S. government to enforce financial sanctions on unfriendly governments. "It is difficult to see how US power in many dimensions is enhanced by virtue of the widespread private international use of the dollar. For example, the US ability to wage war in Iraq and Afghanistan was in no way dependent upon private international use of the dollar. … There seems to be one large exception: the ability of a country to enforce a financial blockade, such as that currently directed against specified Iranian entities. … The United States can order its own companies not to do business with Iran, but this power is present in any sovereign government and is in no way dependent on the role of the dollar. But because third countries generally pay Iran in dollars, the United States government does have additional leverage. Any payment in dollars ultimately involves a transfer on the books of the Federal Reserve banks … The Fed can require that any institution for which it does business has to certify that it either has no prohibited connection with Iran or is in receipt of a waiver. They can similarly require that an institution that contracts with the Fed impose similar requirements on the institutions on behalf of which they are acting. (Of course, the Fed does not inspect each transaction, but depends upon financial institutions to do the screening, with stiff penalties possible if prohibited transactions slip through. A recent example occurred when Standard Chartered Bank was accused by the New York state Department of Financial Services of having hidden some $250 billion of financial transactions with Iran.) Thus the United States has the ability to stop transactions in terms of dollars. Insofar as foreign institutions insist on paying out of their dollar holdings, and/or Iran insists on receiving dollars, Iran is going to be vulnerable to US pressure."

In a global economy where the total size of China's economy will probably exceed that of the U.S. within the next few years, can the U.S. dollar stay on top? As Williamson points out, the key issue here is not the size of a nation's domestic economy. Rather, the fact that the U.S. dollar is already extensively used for international transactions gives it a kind of momentum, making it likely that it will continue being used for this purpose for at least a few decades into the future. Williamson writes:

"Those who wish to transact in this [global] market are not greatly interested in the fact that the good citizens of Idaho overwhelmingly use the dollar, but they are vitally interested in the fact that the dollar is already used extensively in London, Frankfurt, Dubai, Singapore, Hong Kong, and wherever else international trades are executed. This factor gives a great deal of inertia to the international role of currencies. Because of inertia, I see the dollar having a great advantage over any other national currency for the next quarter of a century. (However, I would hesitate to forecast for as long as 50 years.)"

Similarly to Williamson, I don't see the global preeminence of the U.S. dollar as a large-scale advantage for the U.S. economy, although it should make it at least a little easier for U.S. banks and firms to operate in world markets. One can draw up schemes and scenarios in which the U.S. dollar is replaced by some mix of the euro, China's renminbi yuan, Japan's yen, India's rupee, and perhaps a few others. But such a change would require an enormously high level of international financial cooperation, and thus seems highly unlikely. By default, the U.S. dollar seems likely to remain the preeminent global currency for some decades to come.

Annual Report from the Conversable Economist: 2012

At the beginning of each year, it seems useful to reflect on what I\’m doing with this blog. Last year\’s report is here.

For me, there are two main reasons for sustaining the Conversable Economist blog: one personal, one social. The personal reason is that writing these posts helps organize and motivate my own reading and thinking. It encourages me to spend a little extra time tracking down a report or reading through a working paper. When I need to track down a figure or a table or a quotation that I just know I saw someplace, I use the \”search\” command on the blog to find it. Thus, the blog extends the capacity of my own memory and improves my ability to access past information.

My social motivations for the blog seem less obvious to at least some readers, who occasionally send me notes suggesting more opinions of my own and more day-to-day commentary on the opinions of others. As I see it, the world and the web are overflowing with opinions, and pure opinion is a devalued currency. Instead, my approach is that every post should give you some facts or background or analysis that you might not otherwise have seen. I am ever-mindful of the advice from the classic work on expository prose, The Elements of Style, by William Strunk and E.B. White (Third Edition, 1979, Section V, Rule 17):

\”Unless there is a good reason for its being there, do not inject opinion into a piece of writing.  We all have opinions about almost everything, and the temptation to toss them in is great. To air one’s views gratuitously,  however, is to imply that the demand for them is brisk, which may not be the case, and which, in any event, may not be relevant to the discussion. Opinions scattered indiscriminately about leave the mark of egotism on a work.\”

I am fully aware that expressing concern about \”the mark of egotism\” while writing for social media in the 21st century marks me as a person out of step with my time.

But of course, I\’m making no particular effort to hide my opinions, either. They are manifest in my choices about what to read, what topics to blog about, what figures or tables to reproduce, and in comments I often include in the posts. I\’m just not expressing my opinions in the sweeping generalizations and  \”you\’re an idiot\” vernacular of the web.  As I explain on my FAQs page, the \”Conversable Economist\” name for this blog is drawn from an essay by David Hume, who lamented the \”separation of the learned from the conversable world.\” Hume wrote: “I cannot but consider myself as a kind of resident or ambassador from the dominions of learning to those of conversation, and shall think it my constant duty to promote a good correspondence betwixt these two states, which have so great a dependence on each other.”

When I think about the style of exposition appropriate to acting as an ambassador from academic economics to the conversable world, I\’m reminded of a comment from another classic work on expository prose, H. W. Fowler\’s  A Dictionary of Modern English Usage (1926). (And yes, when you work as an editor, you have a shelf full of such books.) Under the heading on “French Words,” Fowler offers advice about how to use specialized knowledge or vocabulary with readers who aren\’t necessarily familiar with the terms. Although he is writing about the use of French terms in English composition, the lesson applies to the use of economics terms in English composition, too.  Fowler wrote:

\”Display of superior knowledge is as great a vulgarity as display of superior wealth — greater, indeed, inasmuch as knowledge should tend more definitely than wealth towards discretion & good manners. … To use French words that your reader or hearer does not know or does not fully understand, to pronounce them as if you were one of the select few to whom French is second nature when he is not one of those few (& it is ten thousand to one that neither you nor he will be so), is inconsiderate & rude.\”

And yes, I am fully aware that expressing concerns about how excessive use of terminology is a \”vulgarity,\” about how \”knowledge should tend more definitely than wealth towards discretion & good manners\” and about a distaste for being \”inconsiderate & rude,\” while at the same time writing for social media in the 21st century, marks me yet again as a person out of step with my time.

Because one of my purposes in continuing this blog is social, I do care about readership. During the last few months of 2012, about 1000 people were signed up to receive each blog post by RSS or email. In addition, the blog is receiving an average of about 2500 pageviews per day. Pageviews are somewhat seasonal–for example, they dropped off in July and August when much of academia is on summer break–but overall the number has been rising through 2012. In November, my desire to find ways to connect to more readers overcame my innate aversion to Twitter, so now it is possible to receive a tweet each time I put up another post at the blog.

If you are someone who enjoys the blog, I encourage you to check in regularly, sign up, and recommend it to others. As always, I\’m glad to hear from readers of the blog with either general or specific feedback, at conversableeconomist@gmail.com.

Greatest Hits of 2012

The Conversable Economist–that\’s me–is taking the rest of the year off. For your delectation, here are 16 of the most-viewed posts that appeared in 2012, at least one from each month, listed here in reverse chronological order. Of course, I encourage you to spend your holidays surfing the archives as well.

\”Paper Towels vs. Air Dryers.\”  (December 10, 2012)

Somewhat to my surprise, this post was by far the most popular of 2012. My idea was to provide an example of the structured analysis of a tradeoff that might be especially useful to classroom teachers and of mild interest to others. However, the post also clearly touched a broader audience and generated a wave of heartfelt reactions from people who just plain love their paper towels.

\”China\’s Economic Growth: A Different Storyline.\” (November 19, 2012)

The standard story of China\'s economic growth is that low wages in China combined with an undervalued exchange rate to create huge trade surpluses that drove economic growth. This post pokes some holes in that story. China\'s very rapid economic growth in the 1980s and 1990s didn\'t involve trade surpluses, which only started expanding in the 2000s, when China\'s rates of wage growth were taking off. And China\'s currency was flat when the trade surpluses took off, and has now been strengthening for six years. The post proposes a different storyline for China\'s growth, rooted in how China\'s exports took off after China joined the World Trade Organization in 2001, while China\'s underdeveloped financial system had no way to turn all of these earnings by firms into national consumption.

\”Marginal Tax Rates on the Poor and Lower Middle Class\” (November 16, 2012)

Consider the situation of a low-income person who is eligible for various public support programs. However, each time that person earns an additional $1 in income, the amount of government support is reduced by, say, 30 or 40 cents. The economic incentives here are the same as those of a high marginal income tax rate. From this perspective, the marginal tax rates faced by the poor and the lower middle class are often just about as high as the marginal tax rates for those with high incomes.
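The arithmetic behind these combined incentives is simple enough to sketch in a few lines. The rates below are hypothetical illustrations, not drawn from any actual tax or benefit schedule:

```python
# Illustrative only: the rates below are hypothetical, not taken from
# any actual U.S. tax or benefit program.
def effective_marginal_rate(statutory_rate, benefit_phaseout_rate):
    """Effective marginal tax rate on an extra $1 of earnings:
    the explicit tax owed plus the benefits lost as income rises."""
    return statutory_rate + benefit_phaseout_rate

# A low-income worker facing a 10% income tax whose benefits phase out
# at 35 cents per dollar earned keeps only 55 cents of each extra
# dollar, for an effective marginal rate of 45%.
rate = effective_marginal_rate(0.10, 0.35)
print(f"Effective marginal rate: {rate:.0%}")
```

The point of the sketch is that benefit phase-outs add to, rather than replace, the statutory rate, which is how a low-income household can face a combined marginal rate rivaling that of a high earner.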

\”Hydraulic Models of the Economy.\” (November 12, 2012)

Two famous economists of the past built hydraulic models of the economy: that is, economic models where flows of spending and saving, as well as price levels, were revealed by liquid flowing through a system of tubes and containers. Bill Phillips–the originator of the Phillips curve–built his model back in the late 1940s. Irving Fisher, the originator of much of modern monetary economics, built his model as part of his dissertation back in 1891. This post tells the story of the models–with pictures.

\”Driverless Cars.\” (October 31, 2012)

Driverless cars are coming: Google has already been testing prototypes on public roads. How might this invention change our lives? Fewer accidents. More productive or relaxing time spent in transit. Existing roads able to carry more cars, so less need for new infrastructure. Greater energy efficiency. Remote parking--just tell your car to come and get you when you are ready. The possibility of shared cars, coming when you call. Greater mobility for those too young or too old to drive safely. Drive overnight, sleeping in your car, and arrive in the morning. The possibilities just keep coming.

\”Are CEOs Overpaid?\” (September 14, 2012)

It may seem that the answer should obviously be \"yes,\" but a number of facts suggest a more nuanced answer. CEO pay relative to household income did spike back in the dot-com boom in the late 1990s, but it has fallen back since then. CEO pay relative to the top 0.1% of the income distribution is now back to the levels common in the 1950s. The pay of those at the top of other highly-paid occupations, like lawyers, athletes, and hedge fund managers, has grown dramatically as well. CEOs are fired sooner than they used to be, on average, especially when the stock price doesn\'t perform well.

\”Are Groups More Rational than Individuals?\” (August 30, 2012)

A body of evidence from laboratory economics experiments suggests: 1) Groups are often more rational and more self-interested than individuals; and 2) This behavior doesn\'t always benefit the participants, because groups can be less good than individuals at setting aside self-interest when cooperation is more appropriate. The greater rationality of groups arises in part because several people working on a problem are more likely to discern the best solution than one person alone; but in situations where cooperation would benefit all parties, individuals are often better than groups at putting aside narrow self-interest. One ultimate goal in this literature is to figure out when it is more useful for organizations to make decisions through groups, and when it is more useful to delegate decisions to individuals.

\”What is a Beveridge Curve and What is it Telling Us?\” (August 20, 2012)

A Beveridge curve is a graphical relationship between job openings and the unemployment rate. The Beveridge curve seems to have shifted out in the last few years, meaning that for a given number of job openings, the unemployment rate is higher than it used to be. Some possible explanations include a mismatch between the skills of unemployed workers and the available jobs; extended unemployment insurance that has reduced the incentive to take available jobs; and heightened uncertainty over the future course of the economy and economic policy. Over the middle term, these factors should fade, and the unemployment rate will then fall.

\”The Improving U.S. Labor Market.\” (July 17, 2012)

In July, the unemployment rate seemed stuck at about 8%. However, certain more detailed measures of labor markets were showing signs of life. For example, the ratio of unemployed people per job opening had spiked above 6 at the worst of the recession, but by May 2012, the ratio had fallen to about 3.5. Hires had increased. Even the trend toward more people quitting their jobs in mid-2012 was probably good news, because people are more likely to quit when they perceive that other labor market options are available.

\”Wealth by Distribution, Region, and Age.\” (June 13, 2012)

Once every three years the Federal Reserve carries out the Survey of Consumer Finances, which is the canonical source for data on household wealth. Results from the 2010 survey were just being released. One headline finding is that median household wealth fell from $126,000 in 2007 to $77,000 in 2010.

\”McWages Around the World.\” (May 16, 2012)

The study underlying this May 16 post looked at one set of jobs that are largely identical in countries around the world: food preparation jobs at McDonald\'s. It provides strong evidence that workers with the same skills are being rewarded very differently in different countries. I wrote: \"[T]hese measures show that the most important factor determining wages for most of us is not our personal skills and human capital, or our effort and initiative, but whether we are using those skills and human capital in the context of a high-productivity or a low-productivity economy.\"

\”Why Does the U.S. Spend More on Health Care than Other Countries?\” (May 14, 2012)

At the end of this post, I wrote: \"The question of why the U.S. spends more than 50% more per person on health care than the next highest countries (Switzerland and Netherlands), and more than double per person what many other countries spend, may never have a simple answer. Still, the main ingredients of an answer are becoming more clear. The U.S. spends vastly more on hospitalization and acute care, with a substantial share of that going to high-tech procedures like surgery and imaging. The U.S. does a poor job of managing chronic conditions, which then lead to episodes of costly hospitalization. The U.S. also seems to spend vastly more on administration and paperwork, with much of that related to credentialing, documenting, and billing--which is again a particularly important issue in hospitals. Any honest effort to come to grips with high and rising U.S. health care costs will have to tackle these factors head-on.\" I suspect that this post must have been assigned as reading to some classes, because the pageviews kept climbing steadily through the fall semester.

\"The Price of Nails.\" (April 5, 2012)

Nails may seem like an everyday product, but this analysis shows how their price has fallen dramatically over time, by a factor of about 15 from the mid-1700s to the mid-1900s. Back around 1800, nails alone could represent 10% of the cost of a house, and household purchases of nails were of the same magnitude, relative to GDP, as current household purchases of computers or of airfares. Even in a seemingly simple product, technological innovation has been quite dramatic: hand-forged nails, cut nails, wire nails, and more recently the emergence of the nail gun.

\”Top Marginal Tax Rates: 1958 and 2009.\” (March 16, 2012)

Top marginal income tax rates used to be much higher back in the 1950s and 1960s, as high as 91%. This post looks at how top tax rates, and the money collected by those rates, changed over time. The tip-top rates applied to only a small group, and so the share of income taxes paid by those in the top tax brackets is actually higher now than back in the 1960s. The marginal tax rates paid by those in the middle class were also often higher in the 1960s.

\”Six Adults and One Child.\” (February 15, 2012)

The title of this post refers to a pattern observed in China after several generations of the one-child policy: that is, a single child walking around a park, closely followed by two parents and four grandparents. A fertility implosion is coming around the world, and family reunions of the future are likely to be made up of four and five generations of relatives, who will greatly outnumber the children on hand.

\”Giffen Goods in Real Life.\”  (January 4, 2012)

Every economics student at some point must confront the theory behind a Giffen good, which is the case in which a higher price for a good leads to people purchasing more of that good. I have usually taught the example as a theoretical curiosity, but some plausible evidence has emerged that in certain very low-income parts of China, rice is a Giffen good. In these areas, rice is a major part of the diet of poor people. When the price of rice rises, the effective buying power of their income is reduced, which then pushes them to give up on other items and consume even more rice.

Real Tree or Artificial Tree?

My family always had real Christmas trees when I was growing up. I\'ve always had real trees as an adult. Living in my own little bubble, it thus came as a shock to me to learn that, of the households that have Christmas trees, over 80% use an artificial tree, according to Nielsen survey results commissioned by the American Christmas Tree Association (which largely represents sellers of artificial trees). But in a holiday season where the focus is often on whether we are naughty or nice, which choice of tree has the greater environmental impact?

There seem to be two main studies often quoted on this subject: \"Comparative Life Cycle Assessment (LCA) of Artificial vs. Natural Christmas Tree,\" published by a Montreal-based consulting firm called ellipsos in February 2009, and \"Comparative Life Cycle Assessment of an Artificial Christmas Tree and a Natural Christmas Tree,\" published in November 2010 by a Boston consulting firm called PE Americas on behalf of the aforementioned American Christmas Tree Association. Both studies assume the artificial tree is manufactured in China and transported to North America. (If readers know of other recent published studies, please send me a link!)

Here are some of the main messages I take away from these studies:

1) One artificial tree has a greater environmental impact than one natural tree. However, an artificial tree can be re-used over a number of years, so there is a crossover point: if the artificial tree is used for long enough, its environmental effect is less than that of an annual series of natural trees. For example, the ellipsos study finds that an artificial tree would need to be used for 20 years before its greenhouse gas effects would be less than those of an annual series of natural trees. The PE Americas study offers a wide range of scenarios and summarizes them, but here is the situation \"for the base case when individual car transport distance for tree purchase is 2.5 miles each way. Because the natural tree provides an environmental benefit in terms of Global Warming Potential when landfilled, and Eutrophication Potential when composted or incinerated, there is no number of years one can keep an artificial tree in order to match the natural tree impacts in these cases. … For all other scenarios, the artificial tree has less impact provided it is kept and reused for a minimum between 2 and 9 years, depending upon the environmental indicator chosen.\"
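The break-even logic behind these crossover estimates can be sketched in a few lines. The impact numbers below are made up for illustration; they are not taken from either study:

```python
import math

def breakeven_years(artificial_impact_total, natural_impact_per_year):
    """Years an artificial tree must be reused before its one-time impact
    falls below the cumulative impact of buying a natural tree each year.
    Returns None when the natural tree is a net environmental sink, in
    which case no amount of reuse lets the artificial tree catch up."""
    if natural_impact_per_year <= 0:
        return None
    return math.ceil(artificial_impact_total / natural_impact_per_year)

# Hypothetical: an artificial tree with 48 units of lifetime impact vs.
# natural trees at 2.4 units per year must be kept 20 years to break even.
print(breakeven_years(48.0, 2.4))   # 20
# A landfilled natural tree with negative Global Warming Potential:
print(breakeven_years(48.0, -1.0))  # None
```

The None branch mirrors the PE Americas finding that when the natural tree is a net Global Warming Potential sink, there is no number of years of reuse that lets the artificial tree match it.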

2) The full analysis needs to look at effects across the full life cycle of the tree, whether natural or artificial. This involves steps such as the following.

  • Under what conditions is the tree manufactured or cultivated, with what use of energy, fertilizer, and logging methods? 
  • By what combination of transportation mechanisms is the finished tree moved to the home? A substantial share of artificial trees are manufactured in China and then shipped to North America.
  • What are the different issues in use of the tree, including use of water and emissions of fumes?
  • What is the end-of-life for the tree? For example, the carbon in a natural tree will be stored for some decades if the tree goes into a landfill, but not if it is composted or incinerated.

3) The full analysis also needs to look at a range of possible effects. For example, the PE Americas study looked at \"global warming potential (carbon footprint), primary energy demand, acidification potential, eutrophication potential, and smog potential.\" Here\'s a figure showing 14 categories of analysis from the ellipsos study, with a comparison between natural and artificial trees on a number of dimensions.

The ellipsos study sums up this way: \”When aggregating the data in damage categories, the results show that the impacts for human health are approximately equivalent for both trees, that the impact for ecosystem quality are much better for the artificial tree, that the impacts for climate change are much better for the natural tree, and that the impacts for resources are better for the natural tree …\”

4) In the context of many other holiday and everyday activities, the environmental effects of the tree are small. For example, the studies offer some comparisons of the environmental effects of the tree with the electricity used to light the tree, the driving by a household to pick up the tree, and even the environmental effect of the tree stand.
  
Consider, for example, the comparison between the Primary Energy Demand (PED) of the tree and the energy used to light it. For an artificial tree, the PE Americas study reports: \"The electricity consumption during use of 400 incandescent Christmas tree lights during one Christmas season is 55% of the overall Primary Energy Demand impact of the unlit artificial tree studied, assuming the worst‐case scenario that the artificial tree is used only one year. For artificial trees kept 5 and 10 years respectively, the PED for using incandescent lights is 2.8 times and 5.5 times that of the artificial tree life cycle.\" For a natural tree: \"The life cycle Primary Energy Demand impact of the natural tree is 1.5 ‐ 3.5 times less (based on the End‐of‐Life scenario) than the use of 400 incandescent Christmas tree lights during one Christmas season.\"

In comparing the environmental effects of driving with those of the tree, ellipsos writes: \"Due to the uncertainties of CO2 sequestration and distance between the point of purchase of the trees and the customer’s house, the environmental impacts of the natural tree can become worse. For instance, customers who travel over 16 km from their house to the store (instead of 5 km) to buy a natural tree would be better off with an artificial tree. … [C]arpooling or biking to work only one to three weeks per year would offset the carbon emissions from both types of Christmas trees.\"

The PE Americas report strikes a similar theme: \"Initially, global warming potential (GWP) for the landfilled natural tree is negative, in other words the life cycle of a landfilled natural tree is a GWP sink. Therefore, the more natural trees purchased, the greater the environmental global warming benefit (the more negative GWP becomes). However, with increased transport to pick up the natural tree, the overall landfilled natural tree life cycle becomes less negative. When car transport becomes greater than 5 miles (one‐way), the overall life cycle of the natural tree is no longer negative, and there is a positive GWP contribution.\"

Even the tree stand for a natural tree has an environmental cost that can be considered in the same breath with the costs of a natural tree. PE Americas: \”The tree stand is a significant contributor to the overall impact of the natural tree life cycle with impacts ranging from 3% to 41% depending on the impact category and End‐of‐Life disposal option.\”

I would add that the environmental effect of the ornaments on the tree may be as large as or greater than the effect of the tree itself. Data from the U.S. Census Bureau shows that America imported $1 billion in Christmas tree ornaments from China (the leading supplier) from January to September 2012, but only $140 million worth of artificial Christmas trees. Thus, spending on ornaments is roughly seven times as high as spending on trees. The choice of what kind of lights to put on the tree, or whether to drape the house and front yard with lights, is a more momentous environmental decision than the tree itself.

Of course, these kinds of comparisons don\’t even try to compare the environmental cost of the tree with the cost of the presents under the tree, or the long-distance travel to attend a family gathering. Thus,  the PE Americas study concludes: \”Consumers who wish to celebrate the holidays with a Christmas tree should do so knowing that the overall environmental impacts of both natural and artificial trees are extremely small when compared to other daily activities such as driving a car. Neither natural nor artificial Christmas tree purchases constitute a significant environmental impact within most American lifestyles.\” Similarly, ellipsos writes: \”Although the dilemma between the natural and artificial Christmas trees will continue to surface every year before Christmas, it is now clear from this LCA study that, regardless of the chosen type of tree, the impacts on the environment are negligible compared to other activities, such as car use.\”

Certainly, celebrations at holidays and big events can sometimes be exorbitant and over the top. But the use of a Christmas tree, and the choice between a natural tree or an artificial tree, is a small-scale luxury. If the environmental issue is bothering you, even knowing these facts, make a resolution to use your artificial tree for a few more years, rather than replacing it, or to save some energy in January by driving less or being more vigilant about turning off unneeded lights. Gathering around the tree should be one less reason for moralizing around the holidays, not one more. So celebrate with good cheer and generous moderation.

The Sandy Hook Mass Killing: A Meditation on Living in the Global Village

I have three children, ages 14, 13, and 10, and so of course my wife and I, like so many other families, have been talking with the children about the mass killings at Sandy Hook Elementary School in Newtown, Connecticut. The conversations have made me think again about Marshall McLuhan\'s idea of the \"global village,\" and the challenges that it poses in the 21st century for cognitively limited human beings.

When McLuhan wrote about the \”global village\” in the early 1960s, he was pointing out that in the pre-electronic age, people\’s main experience of the world involved those who lived nearby. Of course, other news filtered in by way of media and gossip. But the arrival of electronic technology creates a common set of experiences and perceptions. The telegraph provided much higher-speed connections about news events. Radio broadcasts of sporting events, music, entertainment shows, presidential speeches, and news meant that many people across the country were sharing the common experience of the broadcast as it happened. Movies and television then added a visual component, so that people from all over the country, and in some cases the world, began to share a common set of mental images of what events and people were important and what those events and people looked like–all based on highly edited clips of film.

Of course, we have gone far beyond McLuhan\’s global village of the 1960s. In the internet age, anyone can post digital images and sound to the world. When a 24/7 media environment combines with social media, we now live in the global neighborhood, or perhaps even in a global extended family.

Nothing in the evolutionary history of humans particularly prepares us to process the information from living in this information environment. For example, did you know that the deadliest school massacre in U.S. history was a bomb attack on a school in Michigan back in 1927? But at that time, there was no national outcry, no presidential proclamations, no screaming news headlines all over the country. In 1927, mass killings at a school in Michigan seemed so far away for most of America; in 2012, the deaths in Connecticut feel so close for most of us.

This shift in the content and immediacy of the information we receive, together with the experience of receiving it simultaneously across the country, creates a severe challenge for how to think about it.
Daniel Kahneman, who shared the Nobel prize in economics back in 2002, writes about how humans think in his recent book Thinking, Fast and Slow. I haven\'t yet finished reading the book, and for a summary I\'ll turn here to a review by Andrei Shleifer published in the December 2012 issue of the Journal of Economic Literature. Andrei writes:

\”Kahneman’s book is organized around the metaphor of System 1 and System 2 …. As the title of the book suggests, System 1 corresponds to thinking fast, and System 2 to thinking slow. Kahneman describes System 1 in many evocative ways: it is intuitive, automatic, unconscious, and effortless; it answers questions quickly through associations and resemblances; it is nonstatistical, gullible, and heuristic. System 2 in contrast is what economists think of as thinking: it is conscious, slow, controlled, deliberate, effortful, statistical, suspicious, and lazy (costly to use)….  For Kahneman, System 1 describes “normal” decision making. System 2, like the U.S. Supreme Court, checks in only on occasion. Kahneman does not suggest that people are incapable of System 2 thought and always follow their intuition. System 2 engages when circumstances require. Rather, many of our actual choices in life, including some important and consequential ones, are System 1 choices, and therefore are subject to substantial deviations from the predictions of the standard economic model. System 1 leads to brilliant inspirations, but also to systematic errors.\”

In the aftermath of the Sandy Hook shootings, my children\’s school district has been sending out emails and letters. One of them gave the statistics that there are 132,656 K-12 schools in the United States, and that including what happened last week, there have been 32 school shootings in the last 25 years. Of course, this is classic System 2 information, appealing to the conscious, controlled, statistical side of my brain. I find it hard even to read these kinds of statistics in the aftermath of the deaths; I can literally feel my brain wanting to escape back to automatic and effortless responses.

I find myself wondering about the possible effects of being a cognitively limited person living in a global neighborhood defined by the rapidly expanding capabilities of information and communications technology.

One possible outcome of living in a global village--or a global neighborhood--is that one has a sense of access and connection to a far larger number of people and experiences. I prefer to live in a world where I can grieve, even in my separate and unattached way, for the people of Newtown. A global neighborhood can be a world of greater empathy and connection.

But another possible outcome of living in a global neighborhood is that, given the ability to connect to every act of evil that occurs, we will be exposed to many more acts of evil. Even if the overall quantity of evil is not rising, our limited cognitive faculties, combined with the surrounding information and media environment, will cause us to perceive evil as rising sharply. In other words, rather than the global neighborhood giving us broader access and connection to the full range of human and natural experience, it expands our access to the evil, violent, grotesque, and sentimental.

Yet another possible outcome is that we become numbed and overwhelmed by the wide range of input that we are receiving, such that all electronic input seems to have a similar quality. Real-world violence merges into movie violence merges into video-game violence. A personal putdown on a situation comedy is like a personal putdown between two talking heads on a news commentary show is like a personal putdown via social media. Reactions blur between the real and the fictional, the impersonal and the personal. We have ever-heightened attention to events for an ever-shorter window of time, until nothing means very much for very long--until it is stoked by a news hook like new information or an anniversary.

I want to live in the global neighborhood, with a heightened sense of connection. I want to know what happened before the Sandy Hook killings, and what is happening since. (I confess that I have little taste for details of what happened during the actual episode.)

I don\’t want to be overwhelmed by the old, sad, true reality that there is always something terrible happening somewhere, just because it is now possible to consume a perpetual diet of such events. I don\’t want the details of the Sandy Hook killings to terrify my children, or to move me to tears (any more than they already have). I want to be a person who counts his blessings, not one who counts the world\’s disasters.

I want to have an attention span considerably longer and broader than the news cycle. I don't want to be a person who reacts to the horror of children being killed in some knee-jerk, automatic, sentimentalized fashion, although the controlled and deliberate side of my mind sheers away from contemplating the horror too closely. I don't want to forget the challenges and joys of the children at the 132,000 other schools across the country.

As a human being with limited cognitive abilities, I struggle with being who I want to be in the face of the Sandy Hook mass killing. I struggle in my roles as a parent, as a citizen, as a member of the human race.

Africa: The Jobs Challenge

The economies of sub-Saharan Africa have been experiencing fairly rapid growth over the last decade. But for most people, sharing in economic growth means having a steady wage-paying job. A report released in August by the McKinsey Global Institute considers the problem in "Africa at Work: Job Creation and Inclusive Growth."

As a starting point, Africa's economic growth sped up around 2000, and for the decade from 2000-2010, it was the second-fastest growing region of the world. If one counts a "consuming household" as a household with over $5,000 per year in income, the number of African households in this category rose from about 59 million to 90 million over this decade.
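As a rough back-of-the-envelope check, the quoted household figures imply a compound annual growth rate of a bit over 4 percent. A minimal sketch of that arithmetic, using only the numbers cited above:

```python
# Implied average annual growth rate of African "consuming households"
# (income above $5,000 per year), using the figures quoted from the
# McKinsey report: roughly 59 million in 2000 rising to 90 million in 2010.
start, end, years = 59e6, 90e6, 10

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # about 4.3% per year
```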

But for sharing this growth broadly across the population, people need stable wage-paying jobs. For example, when GDP grows because of a rise in the mining sector, wage-paying jobs do not grow commensurately. McKinsey reports: "The continent's official unemployment rate is only 9 percent. Today, however, just 28 percent of Africa's labour force has stable wage-paying jobs." In some countries like Ethiopia, Mali, and the Democratic Republic of Congo, fewer than 10 percent of adults have stable wage-paying jobs. McKinsey refers to those who live with subsistence agricultural jobs or informal self-employment as having "vulnerable employment," which strikes me as a nicely understated name for a very difficult life situation. Here's a figure with some information about the number of wage-paying jobs across countries.

Africa certainly has potential for creation of wage-paying jobs in areas like commercial farming, manufacturing, retail, and hospitality, all labor-intensive sectors of the economy that in different ways can tap into world markets and export demand. In a McKinsey survey of employers in a number of countries, more than half named macroeconomic problems as a main factor holding back job growth, and 40 percent named political instability.

In the figure above, diversified economies are much more likely to have a larger share of their workers in stable, wage-paying jobs. McKinsey points out that when countries like South Korea, Thailand, and Brazil were at Africa's current stage of economic development, they were all more successful in creating wage-paying jobs. Ultimately, the difficulty is that an employer with employees is a particular form of social organization, which in turn is affected by political, legal, regulatory, and social factors. In many countries in Africa, the particular form of social organization that is a wage-paying firm with fairly stable and steady employment is not well-known or well-established. Africa's prospects for inclusive economic growth may well depend on its ability to foster the conditions for starting and growing such business organizations.

For some previous posts on whether Africa is at long last generating self-sustaining growth, see "Africa's Economic Development" (June 13, 2011), "Africa's Growing Middle Class" (September 19, 2011), and "Africa's Prospects: Half Full or Half Empty?" (December 15, 2011).