Neuromyths about the Brain and Learning

"Neuromyths are false beliefs, often associated with education and learning, that stem from misconceptions or misunderstandings about brain function. Over the past decade, there has been an increasing amount of research worldwide on neuromyths in education." The Online Learning Consortium has published an international report, Neuromyths and evidence-based practices in higher education, by the team of Kristen Betts, Michelle Miller, Tracey Tokuhama-Espinosa, Patricia A. Shewokis, Alida Anderson, Cynthia Borja, Tamara Galoyan, Brian Delaney, John D. Eigenauer, and Sanne Dekker.

They draw on previous surveys and information about "neuromyths" to construct their own online survey, which was sent to people in higher education. The response rate was low, as is common with online surveys, so consider yourself warned. But what's interesting to me is to read the "neuromyths" and to consider your own susceptibility to them. More details in the report itself, of course.

Homage to Bill Goffe for spotting this report. 

A Nobel for the Experimental Approach to Global Poverty for Banerjee, Duflo, and Kremer

Several decades ago, the most common ways of thinking about the problems of poor people in low-income countries involved ideas like the "poverty trap" and the "dual economy." The "poverty trap" was the idea that low-income countries were close to subsistence, so it was hard for them to save and make the investments that would lead to long-term growth. The "dual economy" idea was that low-income countries had both traditional and modern parts to their economies, but the traditional part had large numbers of subsistence-level workers. Thus, if or when the modern part of the economy expanded, it could draw on this large pool of subsistence-level workers, and so there was no economic pressure for subsistence wages to rise. In either case, a common policy prescription was that low-income countries needed a big infusion of capital, probably from a source like the World Bank, to jump-start their economies into growth.

These older theories of economic development captured some elements of global poverty, but many of their details and implications have proved unsatisfactory for the modern world. (Here's an essay on "poverty trap" thinking, and another on "dual economy" thinking.) For example, it turns out that low-income countries often do have sufficient saving to make investments in the future. Also, in a globalizing economy, flows of private investment capital, along with remittances sent back home by emigrants, far outstrip official development assistance. Moreover, there have clearly been success stories in which some low-income countries have escaped the poverty trap and the dual economy and moved to rapid growth, including China, India, other nations of east Asia, Botswana, and so on.
Of course, it remains important that low-income countries avoid strangling their own economies with macroeconomic mismanagement, overregulation, or corruption. But a main focus of thinking about economic development shifted from how to funnel more resources to these countries to what kind of assistance would be most effective for the lives of the poor. The 2019 Nobel prize in economics was awarded "for their experimental approach to alleviating global poverty" to Abhijit Banerjee, Esther Duflo, and Michael Kremer. To understand the work, the Nobel committee provides two useful starting points: a "Popular Science" easy-to-read overview called "Research to help the world's poor," and a longer and more detailed "Scientific Background" essay on "Understanding Development and Poverty Alleviation."

In thinking about the power of their research, it's perhaps useful to hearken back to long-ago discussions of basic science experiments. For example, back in 1881 Louis Pasteur wanted to test his vaccine for sheep anthrax. He exposed 50 sheep to anthrax. Of those 50, half, chosen at random, had been vaccinated. The vaccinated sheep lived; the others died.
Social scientists have in some cases been able to use randomized trials in the past. As one recent example, the state of Oregon wanted to expand Medicaid coverage back in 2008, but it only had funding to cover an additional 10,000 people. It chose those people through a lottery, and thus set up an experiment about how having health insurance affected the health and finances of the working poor (for discussions of some results, see here and here). In other cases, when certain charter high schools are oversubscribed and use a lottery to choose students, this sets up a random experiment for comparing students who gained admission to those schools with those who did not.
The 2019 laureates took this idea of social science experiments and brought it to issues of poverty and economic development. They went to India and Kenya and low-income countries around the world. They arranged with state and local governments to carry out experiments where, say, 200 villages would be selected, and then 100 of those villages at random would receive a certain policy intervention. Just dealing with the logistics of making this happen–for different interventions, in different places–would deserve a Nobel prize by itself.
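The core logic of this design can be sketched in a few lines. Below is a minimal simulation (all numbers are hypothetical, not drawn from any actual study) of a 200-village experiment in which 100 villages are treated at random and the true intervention effect is 5 points on some outcome. Because assignment is random, the simple difference in group means recovers the effect:

```python
import random
import statistics

random.seed(0)

# Hypothetical experiment: 200 villages, 100 chosen at random for treatment.
villages = list(range(200))
treated = set(random.sample(villages, 100))

TRUE_EFFECT = 5.0  # assumed effect of the intervention, in outcome points

outcomes = {}
for v in villages:
    baseline = random.gauss(50, 10)  # village-level variation in the outcome
    outcomes[v] = baseline + (TRUE_EFFECT if v in treated else 0.0)

treated_mean = statistics.mean(outcomes[v] for v in treated)
control_mean = statistics.mean(outcomes[v] for v in villages if v not in treated)

# Random assignment balances the groups on average, so the difference in
# means is an unbiased estimate of the treatment effect.
estimate = treated_mean - control_mean
print(round(estimate, 1))
```

With only 100 villages per arm the estimate is noisy, which is why these studies put so much effort into sample sizes and, where possible, pooling results across experiments.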
Many of the individual experiments focus on quite specific policies. However, as a number of these experimental results accumulate, broader lessons become clear. For example, consider the question of how to improve educational outcomes in low-income countries. Is the problem a lack of textbooks? A lack of lunches? Absent teachers? Low-quality teachers? Irregular student attendance? An overly rigid curriculum? A lack of lights at home that makes it hard for students to study? Once you start thinking along these lines, you can think about randomized experiments that address each of these factors and others, separately and in various combinations. From the Nobel committee's "Popular Science Background":

Kremer and his colleagues took a large number of schools that needed considerable support and randomly divided them into different groups. The schools in these groups all received extra resources, but in different forms and at different times. In one study, one group was given more textbooks, while another study examined free school meals. Because chance determined which school got what, there were no average differences between the different groups at the start of the experiment. The researchers could thus credibly link later differences in learning outcomes to the various forms of support. The experiments showed that neither more textbooks nor free school meals made any difference to learning outcomes. If the textbooks had any positive effect, it only applied to the very best pupils. 

Later field experiments have shown that the primary problem in many low-income countries is not a lack of resources. Instead, the biggest problem is that teaching is not sufficiently adapted to the pupils’ needs. In the first of these experiments, Banerjee, Duflo et al. studied remedial tutoring programmes for pupils in two Indian cities. Schools in Mumbai and Vadodara were given access to new teaching assistants who would support children with special needs. These schools were ingeniously and randomly placed in different groups, allowing the researchers to credibly measure the effects of teaching assistants. The experiment clearly showed that help targeting the weakest pupils was an effective measure in the short and medium term.

Such experiments have been done in a wide range of contexts. For example, what about issues of improving health? 

One important issue is whether medicine and healthcare should be charged for and, if so, what they should cost. A field experiment by Kremer and co-author investigated how the demand for deworming pills for parasitic infections was affected by price. They found that 75 per cent of parents gave their children these pills when the medicine was free, compared to 18 per cent when they cost less than a US dollar, which is still heavily subsidised. Subsequently, many similar experiments have found the same thing: poor people are extremely price-sensitive regarding investments in preventive healthcare. …

Low service quality is another explanation why poor families invest so little in preventive measures. One example is that staff at the health centres that are responsible for vaccinations are often absent from work. Banerjee, Duflo et al. investigated whether mobile vaccination clinics – where the care staff were always on site – could fix this problem. Vaccination rates tripled in the villages that were randomly selected to have access to these clinics, at 18 per cent compared to 6 per cent. This increased further, to 39 per cent, if families received a bag of lentils as a bonus when they vaccinated their children. Because the mobile clinic had a high level of fixed costs, the total cost per vaccination actually halved, despite the additional expense of the lentils.
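The cost arithmetic behind that last finding is worth spelling out. With hypothetical numbers (the passage above does not report the underlying costs), a high fixed cost per clinic session means the cost per vaccination falls sharply as the vaccination rate rises from 6 to 39 per cent, even after paying for the lentils:

```python
# Hypothetical numbers to illustrate the fixed-cost arithmetic; the actual
# study costs are not reported in the passage above.
fixed_cost_per_session = 1000.0   # staffing and transport, per clinic session
children_reachable = 100

# Status quo: 6% of children vaccinated.
base_shots = 0.06 * children_reachable
base_cost_per_shot = fixed_cost_per_session / base_shots

# Mobile clinic plus a bag of lentils per vaccinated child: 39% vaccinated.
lentils_per_child = 10.0          # hypothetical cost of the lentil bonus
bonus_shots = 0.39 * children_reachable
bonus_cost_per_shot = (fixed_cost_per_session
                       + lentils_per_child * bonus_shots) / bonus_shots

# Spreading the fixed cost over many more vaccinations dominates the
# added cost of the lentils, so cost per shot falls sharply.
print(round(base_cost_per_shot), round(bonus_cost_per_shot))
```

Under these illustrative numbers, cost per vaccination falls by far more than half, which is the same mechanism the Nobel summary describes.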

How much do the lives of low-income people change from receiving access to credit? For example, does it change their consumption, or encourage them to start a business? If farmers had access to credit, would they be more likely to invest in fertilizer and expand their output?

As the body of experimental evidence accumulates, it begins to open windows on the lives of people in low-income countries, on issues of how they are actually making decisions and what constraints matter most to them. The old-style approach to development economics of sending money to low-income countries is replaced by policies aimed at specific outcomes: education, health, credit, use of technology. When it's fairly clear what really matters or what really helps, and the policies are expanded broadly, they can still be rolled out over a few years in a randomized way, which allows researchers to compare the effects on those who experienced the policies sooner with those who experienced them later. This approach to economic development has a deeply evidence-based practicality.

For more on these topics, here are some starting points from articles in the Journal of Economic Perspectives, where I labor in the fields as Managing Editor: 
On the specific research on experimental approaches to poverty, Banerjee and Duflo coauthored "Addressing Absence" (Winter 2006 issue), about an experiment to provide incentives for teachers in rural schools to improve their attendance, and "Giving Credit Where It Is Due" (Summer 2010 issue), about experiments related to providing credit and how it affects the lives of poor people.
I'd also recommend a pair of articles that Banerjee and Duflo wrote for JEP where they focus on the economic lives of those in low-income countries: "the choices they face, the constraints they grapple with, and the challenges they meet." The first paper focuses on the extremely poor, "The Economic Lives of the Poor" (Winter 2007), while the other looks at those who are classified as "middle class" by global standards, "What Is Middle Class about the Middle Classes around the World?" (Spring 2008).

From Kremer, here are a couple of JEP papers focused on development topics not directly related to the experimental agenda: "Pharmaceuticals and the Developing World" (Fall 2002) and "The New Role for the World Bank" (written with Michael A. Clemens, Winter 2016).

Finally, the Fall 2017 issue of JEP had a three-paper symposium on the issues involved in moving from a smaller-scale experiment to a scalable policy.

Some Income Tax Data on the Top Incomes

How much income do US taxpayers have at the very top? How much do they pay in taxes? The IRS has just published updated data for 2017 on "Individual Income Tax Rates and Tax Shares." Here, I'll focus on data for 2017 and "returns with Modified Taxable Income," which for 2017 basically means the same thing as returns with taxable income. Here are a couple of tables for 2017 derived from the IRS data.

The first table shows a breakdown for taxpayers from the top .001% to the top 5%. Focusing on the top .001% for a moment, there were 1,433 such taxpayers in 2017. (You'll notice that the number of taxpayers in the top .01%, .1%, and 1% rises by multiples of 10, as one would expect.)

The "Adjusted Gross Income Floor" tells you that to be in the top .001% in 2017, you had to have income of $63.4 million in that year. If you had income of more than $208,000, you were in the top 5%.

The total income for the top .001% was $256 billion. Of that amount, the total federal income tax paid was $61.7 billion. Thus, the average federal income tax rate paid was 24.1% for this group. The top .001% received 2.34% of all gross income, and paid 3.86% of all income taxes.
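The average-rate arithmetic is a one-liner, and checks out against the IRS figures quoted above:

```python
# Top .001% of returns, 2017 (figures from the IRS data discussed above).
total_income = 256e9       # total income, in dollars
income_tax_paid = 61.7e9   # federal income tax paid, in dollars

average_rate = income_tax_paid / total_income
print(f"{average_rate:.1%}")  # → 24.1%
```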

Of course, it's worth remembering that this table covers federal income taxes only. It doesn't include state taxes on income, property, or sales. It doesn't include the share of corporate income taxes that ends up being paid indirectly (in the form of lower returns) by those who own corporate stock.

Here's a follow-up table showing the same information, but for groups ranging from the top 1% to the top 50%.

Of course, readers can search through these tables for what is of most interest to them. But here are a few quick thoughts of my own.

1) Those at the very tip-top of the income distribution, like the top .001% or the top .01%, pay a slightly lower share of income in federal income taxes than, say, the top 1%. Why? I think it's because those at the very top are often receiving a large share of their annual income in the form of capital gains, which are taxed at a lower rate than regular income.

2) It's useful to remember that many of those at the very tip-top are not there every year. It's not like they fall into poverty the next year, of course. But they are often making a decision about when to turn capital gains into taxable income, and they are people who–along with their well-paid tax lawyers–have some control over the timing of that decision and how the income will be received.

3) The average tax rate shown here is not the marginal tax bracket. The top federal tax bracket is 37% (setting aside issues of payroll taxes for Medicare and how certain phase-outs work as income rises). But that marginal tax rate applies only to an additional dollar of regular income earned. With deductions, credits, exemptions, and capital gains taken into account, the average rate of income tax as a share of total income is lower.
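To see why the average rate sits below the top marginal rate, here is a small sketch with hypothetical brackets (not the actual 2017 schedule, which had seven brackets and various adjustments):

```python
def tax_owed(income, brackets):
    """Progressive tax: each rate applies only to income above its threshold.

    brackets: list of (threshold, rate) pairs, sorted by threshold.
    """
    owed = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            owed += (min(income, hi) - lo) * rate
    return owed

# Hypothetical three-bracket schedule topping out at 37%.
brackets = [(0, 0.10), (50_000, 0.24), (500_000, 0.37)]
income = 1_000_000

owed = tax_owed(income, brackets)
# The 37% rate hits only the last $500,000 of income, so the
# average rate on the whole $1 million is well below 37%.
print(f"average rate: {owed / income:.1%}")  # → average rate: 29.8%
```

Deductions, credits, and preferential capital-gains rates push the real-world average rate down further still, which is why the top .001% in the table average about 24%.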

4) The top 50% pays almost all the federal income tax. The last row on the second table shows that the top 50% pays 96.89% of all federal income taxes. The top 1% pays 38.47% of all federal income taxes. Of course, anyone who earns income also owes federal payroll taxes that fund Social Security and Medicare, as well as paying federal excise taxes on gasoline, alcohol, and tobacco, and these taxes aren't included here.

5) This data is about income in 2017. It's not about wealth, which is accumulated over time. Thus, this data is relevant for discussions of changing income tax rates, but not especially relevant for talking about a wealth tax.

6) There's a certain mindset which looks at, say, the $2.3 trillion in total income for the top 1%, notes that the group is "only" paying $615 billion in federal income taxes, and immediately starts thinking about how the federal government could collect a few hundred billion dollars more from that group, and planning how to spend that money. Or one might focus further up, like the 14,330 taxpayers in the top .01% who had more than $12.8 million in income in 2017. Total income for this group was $565 billion, and they "only" paid about 25% of it in federal income taxes. Surely they could chip in another $100 billion or so? On average, that's only about $7 million apiece in additional taxes for those in the top .01%. No big deal. Raising taxes on other people is so easy.

I'm not someone who spends much time weeping about the financial plight of the rich, and I'm not going to start now. It's worth remembering (again) that the numbers here are only for federal income tax, so if you are in a state or city with its own income tax, as well as paying property taxes and other taxes at various levels of government, the tax bill paid by those with high incomes is probably edging north of 40% of total income in a number of jurisdictions.

But let's set aside the question of whether the very rich can afford somewhat higher federal income taxes (spoiler alert: they can), and focus instead on the total amounts of money available. The numbers here suggest that somewhat higher income taxes at the very top could conceivably bring in a few hundred billion dollars, even after accounting for the ability of those with very high income to alter the timing and form of the income they receive. To put this amount in perspective, the federal budget deficit is now running at about $800 billion per year. To put it another way, it seems implausible to me that any plausible increase in taxes limited to those with the highest incomes would raise enough to get the budget deficit down to zero, much less to bridge the existing long-term funding gaps for Social Security or Medicare, or to support grandiose spending programs in the trillions of dollars for other purposes. Raising federal income taxes at the very top may be a useful step, but it's not a magic wand that can pay for every wish list.

Video Clips of Economists Explaining for Intro Econ Classes

I know a number of economics faculty who have been incorporating video clips into their classes. Sometimes it's part of a lecture presentation. Sometimes it's for students to watch before class. For intro students in particular, it can be a useful practice because it gives them a sense that they are being introduced to a universe of economists, not just to one professor and a textbook. The faculty member can also react to the video clip, and in this way offer students some encouragement to react and to comment as well–in a way that students might not feel comfortable doing if they had to confront their own professor.

Amanda Bayer and Judy Chevalier have been compiling a list of video clips that may be useful for the standard intro econ class. It's available at the Diversifying Economic Quality ("Div.E.Q") website. Most are in the range of 3-6 minutes, although a few are longer or shorter. The economists are often talking about their own research, but in a way that the evidence can easily be incorporated into an intro presentation.

Here are a few examples on micro topics. Kathryn Graddy talks about her work studying the Fulton Fish Market in New York City, and how even in a highly competitive and open environment, buyers sometimes pay different prices. (Graddy also wrote an article on this topic in the Spring 2006 issue of the Journal of Economic Perspectives.)

Petra Moser discusses her work showing that "copyright protection for 19th century Italian operas led to more and better operas being written, but the evidence also suggests that intellectual property rights may do more harm than good if they are too broad or too long-term."

Heidi Williams describes new data and empirical methodologies to study and advance technological change in health care markets.

Kerwin Kofi Charles discusses his empirical research on the extent to which prejudice leads to discrimination in the labor market and how it may affect the wages of black workers.

Cecilia Rouse talks about her research on how a change to blind audition procedures for musicians trying out for symphony orchestras led to more women being selected.

In short, the presenters in the video clips are top-quality economists describing their own research, in ways that spark interest among students. In addition, economics has an ongoing issue with attracting women and minorities. This list is heavily tilted toward presentations by economists from those groups, and there's some evidence that when intro students see economists who look more like them, they may feel more comfortable expressing interest in economics moving forward.

Opinions about Semicolons

When you live your life as an editor, you develop strange preoccupations, like the semicolon. Thankfully, Cecilia Watson has removed any temptation I might have had to spend vast amounts of time on this subject by publishing Semicolon: The Past, Present, and Future of a Misunderstood Mark (Ecco, 2019).

If you're the sort of person who enjoys facts and commentary about punctuation, then welcome to our smallish club. For example, you will be able to answer the trivia question: What was the first book to use a semicolon, and who were its publisher, author, and typesetter? The semicolon originated in Venice in 1494, during a time of great innovation in symbols of punctuation. Many swirls and lines and dashes and other symbols of punctuation were invented, and mostly discarded. But apparently, a printer and publisher named Aldus Manutius was the first to combine the comma and colon, and thus to create the semicolon. The book was De Aetna, by Pietro Bembo, a dialogue about climbing Mount Etna. The Bolognese type designer Francesco Griffo created the shape of the semicolon.

I especially enjoyed some of the more grandiose denunciations of the semicolon. Watson's book reminded me of Paul Robinson's essay from several decades ago in the New Republic, "The Philosophy of Punctuation: Against the semicolon; for the period" (April 26, 1980). Robinson wrote:

Semicolons are pretentious and overactive. These days one seems to come across them in every other sentence. “These days” is alarmist, since half a century ago the German poet Christian Morgenstern wrote a brilliant parody, “Im Reich der Interpunktionen,” in which imperialistic semicolons are put to rout by an “antisemikolonbund” of periods and commas. Nonetheless, if the undergraduate essays I see are representative we are in the midst of an epidemic of semicolons. I suspect that the semicolon is so popular because it is the first fancy punctuation mark students learn of, and they assume that its frequent appearance will lend their writing a properly scholarly cast. Alas, they are only too right. But I doubt that they use semicolons in their letters. At least I hope they don’t.

More than half of the semicolons one sees, I would estimate, should be periods, and probably another quarter should be commas. Far too often, semicolons, like colons, are used to gloss over an imprecise thought. They place two clauses in some kind of relation to one another, but relieve the writer of saying exactly what that relation is. Even the simple conjunction “and,” for which they are often a substitute, has more content, since it suggests compatibility or logical continuity. (“And,” incidentally, is among the most abused words in the language. It is forever being exploited as a kind of neutral vocalization connecting two things that have no connection whatever.)

In exasperation I have tried to confine my own use of the semicolon to demarking sequences that contain internal commas and therefore might otherwise be confusing. I recognize that my reaction is extreme. But the semicolon has become so hateful to me that I feel almost morally compromised when I use it.

Or if you prefer a pithier comment on the semicolon, here's one from Kurt Vonnegut's 2005 book, A Man Without A Country:

Here is a lesson in creative writing. First Rule: Do not use semicolons. They are transvestite hermaphrodites representing absolutely nothing. All they do is show you've been to college.

June Casagrande puts the problem in more prosaic terms ("A Word, Please: Writers who use semicolons aren't thinking about the reader," Los Angeles Times, July 23, 2015):

Here’s a fun thing you can do with your writing: Take any two simple, clear sentences and use a semicolon to mush them into one. For example, imagine you have a paragraph with just two sentences. “The alarm went off. Joe hit the snooze.” Through the magic of semicolons, you can make that just one sentence: “The alarm went off; Joe hit the snooze.” Isn’t that a great idea?

This works just as well for long sentences that you want to mush into super-long ones: “On a stormy morning in January of 2015, the alarm in Joe Jacobson’s swanky Santa Monica condo went off, ushering in the morning with an ugly screech; Joe, a hung-over stockbroker deeply immersed in a dark, disturbing dream about the woman who’d broken his heart, reached for the clock and pounded the snooze button with the force of a jackhammer.”

When you understand how semicolons work, you see that any pair of sentences can be made one. Then, when you’re done, those longer Frankenstein sentences can themselves be mushed together, and so on and so on, until every paragraph you write is just one long sentence! Neat, huh? …

I’ll kill the facetiousness here and just be blunt: Semicolons are trouble. … They’re favored by writers who are so proud they know how to use semicolons that they’ll happily shortchange readers to show off their knowledge. They’re also a popular crutch among writers who don’t know how to manage all the information they want to convey, so they use semicolons to cobble it all into a single monstrous sentence. … 

So just about any time you have two sentences next to each other, you could make the case for using a semicolon to fashion them into one longer sentence. A lot of writers do. They do so not because they believe the results will be better for the reader. They do so because they forgot the reader. They saw an opportunity to put their punctuation savvy on proud display and forgot that, as every professional writer knows, short sentences are more digestible. That’s why, to me, semicolons cause more trouble than they’re worth.

Of course, the fact that a punctuation mark or a word can be misused doesn't mean that it can't be well-used. For example, Herman Melville's Moby-Dick is perhaps the literary champion of semicolon use. Watson makes this case at some length, concluding:

Moby-Dick … was … around 210,000 words, but had 4,000 semicolons. That's one for every 52 words. The semicolons are Moby-Dick's joints, allowing the novel the freedom of movement it needed to tour such a large and disparate collection of themes.

She also points out that Martin Luther King Jr.'s Letter from a Birmingham Jail makes exquisite use of the semicolon, as a way of linking together and drawing out a painful meditation in a way that forces the reader to follow along without a full stop for breath. (For example, see the paragraph that starts, "We have waited for more than 340 years for our constitutional and God given rights.")

So yes, the semicolon can require care in handling. But it offers connectedness, continuation, and flexibility in situations where a period would create too definite and firm a break, while a comma isn't enough of a pause. Watson writes:

The semicolon represents a way to slow down, to stop, and to think; it measures time more meditatively than the catchall dash, and it can't be chucked thoughtlessly into just any sentence in place of just any other mark. … Semicoloned sentences cannot be dashed off.

The short book also offers an excuse to roam through other rules of grammar, like whether to split infinitives. Watson tends to be in favor of good writing, but against rules. Me, I'm in favor of good writing, but I'm also in favor of knowing the rules–in part so that you can know when it makes sense to break them.

Interview with Hal Varian: An Academic Goes to Google

Tyler Cowen conducts one of his rapid-fire, many-different-topics "Conversations with Tyler" with Hal Varian ("Hal Varian on Taking the Academic Approach to Business," June 19, 2019). Of course, Varian was a very prominent academic economist (and textbook author) for decades, but 20 years ago he became one of the first prominent economists to move over to the tech industry. He has now spent the last 20 years at Google. The conversation is full of highlights, but here are some snippets.

On textbook prices

COWEN: Why are textbooks still priced so high? Not all textbooks, but many.

VARIAN: They are priced remarkably high, and it’s a situation where I really would like to see lower prices because, obviously, there’s a durable goods monopoly problem there. As you have more and more competition from previous editions, each of the new editions has to differ markedly from the old edition to support the pricing model. But that’s getting harder and harder to do.

In fact, a friend of mine once told me, “Having a successful textbook is like being married to a very wealthy person you don’t like much anymore.”

On the high volume of trading in financial markets

COWEN: Why do people trade so much in financial markets? It doesn’t seem Bayesian rational, right? “Oh, you want to trade with me. I’ll take that off your back.” Yet trade volume is massive. …

VARIAN: Yeah, I agree with your point that there is more trading than there should be by any reasonable model. Part of it is because people really do have differences of opinion, and they’re not fully Bayesian, so they may not find the other person’s opinion credible. We don’t really get the agreeing to disagree or, I guess, the converse. We don’t get this pushed into the model where you’ve got full agreement.

I actually did some work in this area several years ago, and it really came down to people do have a different model. We can’t agree on the model. If we don’t agree on the model, then we won’t get uniformity. …

COWEN: I don’t trade, by the way, if you’re curious to know.

VARIAN: Well, that’s good. I would say, yeah, why trade? You shouldn’t be trading. We know that just as an empirical fact.

Billionaire envy

COWEN: Does the typical American envy more the billionaire or the next-door neighbor?

VARIAN: I think the next-door neighbor. It’s interesting, the billionaires are like our instance of royalty. The people want to see what they’re doing and where they’re going out and how they dress and all those kind of stuff. But I don’t think it’s actually envy.

By the way, I know a few billionaires, and there’s a lot of cost to being a billionaire in the sense that you can’t go out in public. Maybe you need bodyguards. Doing a trip here and there is a major undertaking because of the people that have to be informed. It’s much better to be a half a billionaire, I think, than to be a billionaire.

COWEN: Do the billionaires envy each other more than poorer people envy billionaires?

VARIAN: There seems to be something of a pecking order there. It depends. Not so much in tech, I would say, but maybe in finance and Wall Street guys, I think. It’s a motivation that’s different than we see on the West Coast.

Don't be too quick to read the experts

COWEN: You once wrote as advice to graduate students, “Don’t look at the literature too soon.” Is that still true?

VARIAN: Yes.

COWEN: And why not?

VARIAN: Because if you look at the literature, you’ll see this completely worked-out problem, and you’ll be captured by that person’s viewpoint. Whereas, if you flounder around a little bit yourself, who knows? You might come across a completely different phenomenon. Now, you do have to look at the literature. I want to emphasize that. But it’s a good idea to wrestle with a problem a little bit on your own before you adopt the standard viewpoint.

5G

COWEN: How will 5G change my world?

VARIAN: Basically, you should think of 5G as Wi-Fi everywhere so that you’ve got a high-speed communication without having to go through any sort of special operations.

COWEN: But will it save me seven seconds a week, or will it deliver some new and exciting product that I haven’t thought of yet?

VARIAN: When you look at technologies like autonomous vehicles and things like that, they’re dealing with vast amounts of information. It’s often stored and manipulated locally, but sometimes it needs to be shared. Doing that kind of sharing will be easier if you have high-bandwidth 5G technology. But realistically speaking, for most of what you’re going to be doing, it will just save you a small amount of time.

Wooster, Ohio, and small cities

COWEN: Wooster, Ohio — I believe you’re from there. Is it economically inefficient? Should its population, over time, be reallocated to larger cities?

VARIAN: It’s funny you mention that because I grew up on a farm — apple orchard — outside of Wooster, Ohio, which is a town of about 20,000 people, and they have a nice college there. The College of Wooster. And it seems to be thriving. So, what happens when you look at these towns in the upper Midwest . . . if they have a hospital, they’ll probably survive. If they don’t have a hospital, they’re in big trouble.

COWEN: In Wooster, Ohio, do you think the value of Facebook and Google, relative to per capita income, is higher or lower than in, say, midtown Manhattan?

VARIAN: We have actually done a little research into this question but only on one aspect, namely, looking at online shopping. And I will tell you, if you live on a farm in the Midwest, you love online shopping. If you’re living in Manhattan, you’ve got a lot of opportunities to go shopping in the physical world. Those rural residents really like the internet for just that reason. The shopping, access to content, all sorts of things.

When Hayek Opposed the Nobel Prize in Economics

The Nobel Prize in economics will be announced on Monday. Thus, it is perhaps an appropriate time to revisit this post from a couple of years ago.

__________________________
As the pedants among us never tire of pointing out, the so-called "Nobel Prize in economics" is not literally a "Nobel prize." It was not established by the original bequest from Alfred Nobel, but instead was first given in 1969, with the prize money provided by a grant from Sweden's central bank as part of the 300th anniversary of the founding of the bank. Thus, the award is officially "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel." (Justin Fox gives a nice brief overview of the history here.) Although I am pedantic in many matters, this doesn't happen to be one of them, so I will continue following the conventional usage in calling it the "Nobel prize in economics."

More interesting is that Friedrich Hayek, the co-winner of the sixth Nobel prize in economics (with Gunnar Myrdal), spoke at the prize banquet in 1974 about why the establishment of the prize was a mistake. Here is Hayek's call to humility for economists, from his speech at the Nobel banquet on December 10, 1974.

Your Majesty, Your Royal Highnesses, Ladies and Gentlemen,

Now that the Nobel Memorial Prize for economic science has been created, one can only be profoundly grateful for having been selected as one of its joint recipients, and the economists certainly have every reason for being grateful to the Swedish Riksbank for regarding their subject as worthy of this high honour.

Yet I must confess that if I had been consulted whether to establish a Nobel Prize in economics, I should have decidedly advised against it.

One reason was that I feared that such a prize, as I believe is true of the activities of some of the great scientific foundations, would tend to accentuate the swings of scientific fashion. This apprehension the selection committee has brilliantly refuted by awarding the prize to one whose views are as unfashionable as mine are.

I do not yet feel equally reassured concerning my second cause of apprehension. It is that the Nobel Prize confers on an individual an authority which in economics no man ought to possess.

This does not matter in the natural sciences. Here the influence exercised by an individual is chiefly an influence on his fellow experts; and they will soon cut him down to size if he exceeds his competence.

But the influence of the economist that mainly matters is an influence over laymen: politicians, journalists, civil servants and the public generally. There is no reason why a man who has made a distinctive contribution to economic science should be omnicompetent on all problems of society – as the press tends to treat him till in the end he may himself be persuaded to believe. One is even made to feel it a public duty to pronounce on problems to which one may not have devoted special attention.
I am not sure that it is desirable to strengthen the influence of a few individual economists by such a ceremonial and eye-catching recognition of achievements, perhaps of the distant past.

I am therefore almost inclined to suggest that you require from your laureates an oath of humility, a sort of hippocratic oath, never to exceed in public pronouncements the limits of their competence.

Or you ought at least, on conferring the prize, remind the recipient of the sage counsel of one of the great men in our subject, Alfred Marshall, who wrote: "Students of social science, must fear popular approval: Evil is with them when all men speak well of them".

Hayek is quoting a comment from Marshall which appears in "In Memoriam: Alfred Marshall," a speech given by A.C. Pigou in 1924 and published as part of a Memorials of Alfred Marshall volume in 1925 (pp. 81-90). The fuller quotation attributed to Marshall (on p. 89) is:

Students of social science, must fear popular approval: Evil is with them when all men speak well of them. If there is any set of opinions by the advocacy of which a newspaper can increase its sales, then the student who wishes to leave the world in general and his country in particular better than it would have been if he had not been born, is bound to dwell on the limitations and defects and errors, if any, in that set of opinions: and never to advocate them unconditionally even in ad hoc discussion. It is almost impossible for a student to be a true patriot and to have the reputation of being one in his own time.

The Economics Nobel: Who Might Have Won?

Next Monday the 51st Nobel prize in Economics will be awarded. Allen R. Sanderson and John J. Siegfried offer some perspective on the first 50 years of the economics award, and some context with the other Nobel prizes, in "The Nobel Prize in Economics Turns 50" (American Economist, 2019, 64:2, pp. 167–182). They offer background on the genesis of the prize, how its official name has evolved, and the ages, academic backgrounds, and big ideas that spanned several awards.

For those interested in more detail about past Nobel prize-winners in economics, I strongly recommend the Nobel website itself. Especially for winners in the last few decades, there is rich information about why the prize was given, often with an autobiographical essay from the winner, and of course the address given by the prize-winner.

Here, I\’ll pass along a couple of lists from Sanderson and Siegfried. The Nobel is only given to living people, so there are inevitably some economists worthy of consideration for the prize who died after 1969 without receiving the award. I was also intrigued by their list of how many Nobel prize-winners in economics had a direct tie to a previous winner.

Here\’s their list of economists who were alive in 1969, but died without receiving an economics Nobel, and \”who certainly would have had advocates\” for winning the prize.

  1. Frank Knight (1972). One of the founders of the “Chicago School of Economics,” he is best known for his 1921 book, Risk, Uncertainty and Profit.
  2. Alvin Hansen (1975). Macroeconomist and public policy adviser, often referred to as “the American Keynes,” he is most noted for development (with Hicks) of the “investment-savings” and “liquidity preference-money supply” (IS-LM) macroeconomics model.
  3. Oskar Morgenstern (1977). Princeton economist, coauthor of Theory of Games and Economic Behavior (1944, with John von Neumann).
  4. Joan Robinson (1983). Cambridge economist known for her work on monopolistic competition (The Economics of Imperfect Competition, 1933) and coining the term monopsony.
  5. Piero Sraffa (1983). Italian economist and considered the neo-Ricardian school founder owing to his Production of Commodities by Means of Commodities (1960).
  6. Fischer Black (1995), part creator of the Black–Scholes equation on options pricing, surely would have shared the 1997 Nobel with Scholes and Merton for devising a model for the dynamics of a financial market containing derivative investment instruments.
  7. Amos Tversky (1996). A cognitive psychologist, who undoubtedly would have shared the 2002 Nobel Prize with his friend and frequent collaborator Daniel Kahneman (and Vernon Smith).
  8. Zvi Griliches (1999). A student of Schultz and Arnold Harberger at Chicago, he is best known for work on technological change (the diffusion of hybrid corn in particular) and econometrics.
  9. Sherwin Rosen (2001). Labor economist with far-ranging contributions in microeconomics, he is perhaps best known for his 1981 American Economic Review article “The Economics of Superstars,” and his 1974 Journal of Political Economy article outlining how the market solves the problem of matching buyers and sellers of multidimensional goods.
  10. John Muth (2005). Doctoral advisee of Herbert Simon, he is considered—mainly formulated on the microeconomics side—as the originator of “rational expectations” theory.
  11. John Kenneth Galbraith (2006), long-time Harvard economist, was a prolific writer (The Affluent Society (1958), The New Industrial State (1967)), public intellectual, and liberal political activist.
  12. Anna Schwartz (2012). A National Bureau of Economic Research monetary and banking scholar, she was a coauthor with Milton Friedman of A Monetary History of the United States, 1867-1960 (1963).
  13. Martin Shubik (2018). A doctoral advisee of Morgenstern and collaborator with Nash, at Princeton, he was a long-time Yale professor of mathematical economics and outstanding game theorist.

To this list, one could certainly add more of their contemporaries, for example (in alphabetical order), Anthony Atkinson (2017), William Baumol (2017), Harold Demsetz (2019), Evsey Domar (1997), Rudiger Dornbusch (2002), Henry Roy Forbes Harrod (1978), Harold Hotelling (1973), Nicholas Kaldor (1986), Jacob Mincer (2006), Hyman Minsky (1996), and Ludwig von Mises (1973), among many others.

Sanderson and Siegfried also point out that a substantial number of Nobel laureates in economics had another laureate as a dissertation adviser. For example:

  • Jan Tinbergen was an adviser of Koopmans.
  • Paul Samuelson was an adviser for Klein and Merton.
  • Kenneth Arrow advised the research of Harsanyi, Spence, Maskin, and Myerson.
  • Wassily Leontief advised Samuelson, Schelling, Solow, and Smith.
  • Richard Stone supervised the research of both Mirrlees and Deaton.
  • Franco Modigliani was Shiller’s adviser.
  • James Tobin advised Phelps.
  • Merton Miller advised Eugene Fama’s dissertation, and Fama advised Scholes’s.
  • Robert Solow supervised the work of Diamond, Akerlof, Stiglitz, and Nordhaus.
  • Thomas Schelling was Spence’s adviser.
  • Edward Prescott advised Kydland, with whom he shared the 2004 Nobel Prize.
  • Eric Maskin advised Tirole.
  • Christopher Sims advised Hansen.
  • Simon Kuznets supervised both Friedman and Fogel.

Sanderson and Siegfried also pass along perhaps the most common joke about the economics Nobel prize:

[A]s a well-known quip has it, “economics is the only field in which two people can share a Nobel Prize for saying opposing things.” The 1972 Prizes awarded to Myrdal and Hayek spring to mind, as would the 2013 awards to Fama and Shiller.

Foreign Exchange Markets: $6.6 Trillion Per Day

It\’s hard to understand at an intuitive level the  difference between millions, billions, and trillions. I sometimes try to describe it this way. One million seconds in the past is about 11 days ago. One billion seconds is 11,000 days, which about 30 years ago. One trillion second is about 30,000 years ago, which would be the time period when early rock-paintings were done, when the main human inventions of the time were the oven, pottery, and twisting fibers to make rope.

So the difference between a million and a billion is the difference between what happened the weekend before last, and what happened in 1989. The difference between a billion and a trillion is the difference between how long ago it was that world headlines were about Tiananmen Square demonstrations and the Berlin Wall coming down, compared with how long ago human culture was in its hunter-gatherer cave-dwelling stage.
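The arithmetic behind these comparisons is easy to check. A quick sketch, assuming a 365.25-day year:

```python
SECONDS_PER_DAY = 60 * 60 * 24            # 86,400 seconds in a day
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

# Converting each magnitude into a human time scale
print(1_000_000 / SECONDS_PER_DAY)            # ~11.6 days
print(1_000_000_000 / SECONDS_PER_YEAR)       # ~31.7 years
print(1_000_000_000_000 / SECONDS_PER_YEAR)   # ~31,700 years
```

Each step up by a factor of 1,000 turns days into decades, and decades into ice ages.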

Mull over that comparison with this fact: foreign exchange markets trade $6.6 trillion per day, up from $5.1 trillion per day in the previous survey. The authoritative statistics on foreign exchange markets come from the Triennial Central Bank Survey conducted by the Bank for International Settlements. The data for the latest round of the survey, completed in April 2019, is now available (September 16, 2019).

At first glance, this volume of trading seems like it must be a mistake. Total world exports of goods and services are about $25 trillion per year. Add in the investment flows across borders for 2018, which were $2 trillion in foreign direct investment, $1.9 trillion in portfolio investment, and $2 trillion in other financial transactions, mainly bank loans (according to UNCTAD). But these annual totals don't get anywhere close to the volume of $6.6 trillion per day in foreign exchange markets.
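The mismatch is easy to quantify with a back-of-the-envelope calculation; the figure of roughly 250 trading days per year is my assumption, not something from the BIS survey:

```python
fx_per_day = 6.6        # daily FX turnover, trillions of US dollars (BIS, April 2019)
trading_days = 250      # rough assumption for trading days per year

annual_fx = fx_per_day * trading_days       # ~1,650 trillion per year

# Annual cross-border flows, trillions of US dollars (figures cited above)
exports = 25.0                              # world exports of goods and services
fdi, portfolio, other = 2.0, 1.9, 2.0       # investment flows (UNCTAD, 2018)
annual_flows = exports + fdi + portfolio + other    # ~30.9 trillion

print(annual_fx / annual_flows)             # FX turnover is ~50x underlying flows
```

Even on generous assumptions about trade and investment, annualized foreign exchange turnover exceeds the underlying flows by a factor of roughly fifty.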

The obvious conclusion is that most foreign exchange trading isn't about facilitating exports and imports, nor about facilitating flows of international investment. Instead, it's about financial transactions that seek to address the risks of shifts in foreign exchange rates, or to profit directly from those shifts. For example, about half the foreign exchange market is swaps contracts (that is, a contract where one party is owed a certain amount in one currency, over some period of time, and another party is owed an amount in a different currency, over some different period of time, and they agree to swap these payments).

There\’s also an interesting insight into how foreign exchange markets work from looking at what currencies are used the most. It turns out that the US dollar is involved in 88% of all foreign exchange deals, either as the currency being sold or being bought. Again, this isn\’t about facilitating trade or investment related to the US economy. Instead, it\’s because if there is a deal happening between, say, the Brazilian real and the South African rand, the usual pattern of foreign exchange markets behind the scenes would be to convert both  currencies into US dollars, and then to convert out to the desired currency.  This pattern is one dimension of what is meant when people say that the US is the global \”reserve currency.\”

With regard to other currencies, the BIS report notes:

The US dollar retained its dominant currency status, being on one side of 88% of all trades. The share of trades with the euro on one side expanded somewhat, to 32%. By contrast, the share of trades involving the Japanese yen fell some 5 percentage points, although the yen remained the third most actively traded currency (on one side of 17% of all trades). … As in previous surveys, currencies of emerging market economies (EMEs) again gained market share, reaching 25% of overall global turnover. Turnover in the renminbi, however, grew only slightly faster than the aggregate market, and the renminbi did not climb further in the global rankings. It remained the eighth most traded currency, with a share of 4.3%, ranking just after the Swiss franc.

Given the size and complexity of foreign exchange markets, and their potentially very rapid reaction times, it's little wonder that they often move in ways which have long been hard for economists to explain. Every now and then, I post on the bulletin board beside my office a quotation from Kenneth Kasa back in 1995: "If you asked a random sample of economists to name the three most difficult questions confronting mankind, the answers would probably be: (1) What is the meaning of life? (2) What is the relationship between quantum mechanics and general relativity? and (3) What's going on in the foreign exchange market. (Not necessarily in that order)."

Waste and Worse in US Health Care Spending

About 25% of all US health care spending is wasted, according to an article just published in the Journal of the American Medical Association by William H. Shrank, Teresa L. Rogstad, and Natasha Parekh ("Waste in the US Health Care System: Estimated Costs and Potential for Savings," October 7, 2019). They write:

In this review based on 6 previously identified domains of health care waste, the estimated cost of waste in the US health care system ranged from $760 billion to $935 billion, accounting for approximately 25% of total health care spending …  Computations yielded the following estimated ranges of total annual cost of waste: failure of care delivery, $102.4 billion to $165.7 billion; failure of care coordination, $27.2 billion to $78.2 billion; overtreatment or low-value care, $75.7 billion to $101.2 billion; pricing failure, $230.7 billion to $240.5 billion; fraud and abuse, $58.5 billion to $83.9 billion; and administrative complexity, $265.6 billion.

This isn\’t a new problem. For a few decades now, I\’ve been seeing estimates that up to one-third of US health care spending is wasted. Also, the US estimate isn\’t actually all that different from international estimates: an OECD study a couple of years ago estimated that \”around one-fifth of health expenditure makes no or minimal contribution to good health outcomes.\”  But since the US spends about 18% of GDP on health care, while other high-income countries spend about 11% of GDP on health care, wasteful health care spending hurts even more in the US.

JAMA also offers some comments on these results. I was struck by Donald M. Berwick's essay: "Elusive Waste: The Fermi Paradox in US Health Care."

In 1950, at lunch with 3 colleagues, the great physicist Enrico Fermi is alleged to have blurted out a question that became known as “the Fermi paradox.” He asked, “Where is everybody?” referring to calculations suggesting that extraterrestrial life forms are abundant in the universe, certainly abundant enough that many of them should have by then visited our solar system and Earth. But, apparently, none had.

Health care in the United States has its own version of the Fermi paradox. It involves the strong evidence of massive waste … With US health care expenditures exceeding $3.5 trillion annually, 25% of the total would amount to more than $800 billion per year of waste (more than the entire 2019 federal defense budget, and as much as all of Medicare and Medicaid combined). Even 5% of the total cost is more than $150 billion per year (almost 3 times the budget of the US Department of Education).

That is worth repeating: by many pedigreed estimates, annual waste in US health care equals or exceeds the entire annual cost of Medicare plus Medicaid.

But, to paraphrase Fermi, “Where is it?” … The paradox is that, in an era of health care when no dimension of performance is more onerous than high cost, when many hospitals and clinicians complain that they are losing money, when individuals in the United States are experiencing financial shock at absorbing more and more out-of-pocket costs for their care, and when governments at all levels find that health care essentially confiscates the money they need to repair infrastructures, strengthen public education, build houses, and upgrade transportation—in short, in an era when health care expenses are harming everyone—as much as $800 billion in waste (give or take a few hundred billion) sits untapped as a reservoir for relief. Why? … 

What Shrank and colleagues and their predecessors call “waste,” others call “income.” People and organizations (for-profit and not-for-profit) making big incomes under current delivery models include very powerful corporations and guilds in a nation that tolerates strong influences on elections by big donors. Those donors now include corporations whose “right” to “free speech” as “persons” has been certified by the US Supreme Court, conferring on them an unlimited license to support political candidates financially. When big money in the status quo makes the rules, removing waste translates into losing elections. The hesitation is bipartisan. For officeholders and office seekers in any party, it is simply not worth the political risk to try to dislodge even a substantial percentage of the $1 trillion of opportunity for reinvestment that lies captive in the health care of today, even though the nation’s schools, small businesses, road builders, bridge builders, scientists, individuals with low income, middle-class people, would-be entrepreneurs, and communities as a whole could make much, much better use of that money.

I was also struck by the comments from Karen E. Joynt Maddox and Mark B. McClellan in their short essay, "Toward Evidence-Based Policy Making to Reduce Wasteful Health Care Spending." They argue that various "incentive-based" or "value-based" systems that purport to provide incentives to reduce wasteful health care spending don't work all that well. These schemes have been complicated, not aligned across providers, without buy-in from clinicians, and costly to implement, and in general they have not led to any broad redesign of care. They sketch an alternative path to health care reform that looks like this:

The current piecemeal approach, which imposes complexity and additional implementation costs on clinicians, hospitals, and health systems, should evolve to a simpler and more holistic approach to value-based payment. Primary care should move toward a capitated payment system, with a streamlined set of quality measures and financial supports for keeping people healthy and out of the hospital. Specialty care will likely need a combination of a primary care–like chronic disease management track and add-on “bundles” for procedures, with quality measures relevant to specialized care comprising the core of quality measurement. Hospital care should be structured within such bundles where feasible, with clear quality measures around safety, and the move of accountable care organizations from fee-for-service–based models to organizations paid on a person level should continue.

Finally, although it\’s not part of this set of JAMA articles, I\’ll add that the issues of the US health care system go beyond wasted opportunities to make better use of resources. There have been prominent studies for a couple of decades now suggesting that medical errors in the US lead to the deaths of either tens of thousands or even several hundred thousand people every year. As one partial measure, the Agency for Healthcare Research and Quality (which is part of the US Department of Health and Human Services) publishes a \”scorecard on hospital-acquired conditions.\” The AHRQ scorecard issued in January 2019 offers this good news/bad news report on \”hospital-acquired conditions,\” or HACs:

The 2014 rate started at 99 HACs per 1,000 hospital discharges and is estimated at 86 HACs per 1,000 discharges for 2017. … Based on the HAC reductions seen in 2015, 2016, and 2017 compared with 2014, AHRQ estimates a total of 910,000 fewer HACs occurred than if the 2014 rates had persisted through 2017. These HAC reductions lead to estimates of approximately $7.7 billion in costs saved and approximately 20,500 HAC-related inpatient deaths averted from 2015 through 2017. Data reported in 2016 estimated that from 2011 through 2014, HAC reductions totaled 2.1 million, and these reductions resulted in approximately $19.9 billion in cost savings and 87,000 fewer HAC-related inpatient deaths. 

So the good news is roughly 107,500 fewer HAC-related inpatient deaths since 2010 (87,000 from 2011 through 2014, plus another 20,500 from 2015 through 2017), along with other prevented health and monetary costs. The bad news is that the US health care system was causing those deaths and costs in the first place, and with 86 hospital-acquired conditions per 1,000 discharges in 2017, it's still causing high costs. Of course, one could also add other costs with a close linkage to health care, like prescription drug overdoses.

A huge amount of US public attention has focused on the issue of providing health insurance coverage and health care to all, and rightly so. But there should be enough space in our brains to also consider high costs, wasteful spending, and the health costs being created by the US health care system itself.