Interview with Steven Levitt: My Career and Why I’m Retiring From Academia

Jon Hartley has a wonderful interview with Steven Levitt on the “Capitalism and Freedom in the 21st Century” podcast (“Steven D. Levitt (Freakonomics co-author and University of Chicago Economics Professor) on His Career And Decision To Retire From Academic Economics,” March 7, 2024). Among a number of other topics, there’s a lot of good dishing about the University of Chicago economics department and prominent economists there, along with the future of economics and academia. Here, I’ll just offer some snippets that particularly caught my eye. One caveat: the transcript is unedited, so read it with care. As one example of many, there are places where “U of C,” as a reference to the University of Chicago, is rendered “UFC.”

On how Levitt ended up as an economics major:

But let me tell you how I did get into economics. It was not in a thoughtful and well-organized way. I was the worst kind of undergraduate student. I only tried to take easy courses. I just tried to get good grades. I didn’t care at all about anything intellectual, but I did already believe in markets, even though I had no economic training. And I went to Harvard and my view was if a thousand people are taking a class, that must be a good class and an easy class. And so, I took all the thousand person classes that they offered at Harvard. And first and foremost, among those was Ec 10. And I took it only because a thousand people were taking it. And I remember not too long into the class, maybe five or six lectures in the class and we were doing comparative advantage. And as the teacher went through it, I thought, “What a joke. How can they be teaching this?” Everyone knows, everyone understands comparative advantage. It’s the most obvious thing in the world. It’s five-year-old know that. And as I walked out of class, my best friend, who was also in the class, with me, said, “My God, that was the most confusing lecture I’ve ever heard in my life.” And I said, “What are you talking about?” He said, “It makes no sense to me. What is it you’ve been talking about?” And that was the first inkling I had that maybe I thought like an economist. And honestly, I only did economics because it came naturally to me. And I never liked it, per se. I never had this sense that economics was powerful. It’s just the only thing I was good at. And so… I just backed into it and I never had any intention, so I majored in economics, but I never had any intention of going further. I wanted to go into business.

On how a paper Levitt wrote as a graduate student about more police reducing crime entered the policy realm:

I wrote a paper on the effective police on crime, and I found, unlike other people before me, that looked like more police reduced crime. Perhaps not surprising, but it was very surprising to the criminologists. … And I think, if I remember correctly, and I might be confusing my stories, I think Alan Krueger put together a binder of papers for Bill Clinton every week. And Alan said that Bill was an amazing thinker, and he would really look at these papers. And he said, in particular, because they’re trying to get this crime bill passed that would add 100,000 police officers, Bill Clinton had gone over my paper, and he said you could see all the notes in the margin and had lots of questions and then Janet Reno apparently carried my paper around in a briefcase dozens of copies and gave it to anyone she could because she was trying to influence the senators and the representatives to vote on behalf of the Bill Clinton’s crime bill and I say I got completely the wrong idea. I had this idea that like you said wow the power of research and anyone can do it and you do good research and people recognize and it effects policy, I mean, I was so confused. It took me years and years to understand that, number one, usually nobody cares at all about your research. No matter how much you love it, it never gets any attention. Number two, the quality of my research had nothing to do with it being passed around. It was being passed around Washington because it was the only paper that supported the position that they had already chosen. Right, the policy outcome they want chosen first and then they went for papers. And I’m sure they were disappointed that the only article they could find that it all supported them was by some grad student, but they took what they could. And what I read, the real lesson I learned over time is that I don’t actually think that my research or even my writing, more popular writing, has ever really fundamentally changed the way any politician thought about anything and that it’s just, I’ve come to a different conclusion which is that it is incredibly hard to influence any policy or anyone’s beliefs by doing research.

On the negotiation between Levitt and Stephen Dubner on how to divide up the advance and royalties for Freakonomics:

[T]he publishers were interested in me doing a book, but I categorically said no. And eventually, Stephen Dubner’s agent called me up and said, hey, why don’t you write a book with Steven Dubner?” And I said, “Number one, I have no interest in writing a popular book. Number two, I’m sure Dubner doesn’t want to write a book with me because we honestly didn’t get along that well when he came out to interview me the first time.” But we agreed to talk and we shared, and we had a real commonality, which is that neither of us really wanted to write this book. Neither of us thought anybody would read a book if we did write it. But we both were kind of, prostitutes in some sense. And so, for the right amount of money, we were willing to write this book. And interestingly, the right amount of money turned out to be similar for both of us. And so much to our surprise, we got offered, I don’t know, three times that amount of money to write the book. And then the only thing that stood in the way of us writing the book is we had to figure out how to divide the profits, the payments. And Dubner, I don’t remember the exact numbers, but Dubner came to me and he said, “Hey, I know it’s uncomfortable to talk about this, but we need to decide to split.” And he said, “I was thinking 60 /40.” And I said, “I was actually thinking 2 /3, 1 /3.” And he said, “Oh, I’m just not willing to write this book for 1 /3.” And I said, “No, no, I was thinking 2 /3 for you and 1 /3 for me.” And he said, I was thinking 60 % for you and 40 % for me. So, it’s the easiest negotiation ever. We settled on 50 /50, we both felt like we got a lot of surplus and we’ve had a great relationship ever since.

Why retire and become an emeritus professor at age 57?

I think two different forces at work here. The first one is that maybe between five and 10 years ago, I worked on three or four projects that I was just incredibly excited about that I felt were some of the best research that I’d ever done … [T]hese were four papers that I was really excited about and collectively they had zero impact. They didn’t publish well by and large, nobody cared about them and I remember looking at one point at the citations and seeing that collectively they had six citations. I thought, my god, what am I doing? I just spent the last two years of my life and nobody cares about it. And I really think it’s true that the way I approached economic problems, without a fashion, without a vogue, and for better or worse, probably the profession is better for having a different set of standards than I was used to meeting up with. And that was really discouraging to me. And you combine that with the idea, with the fact that along with Stephen Dubner, we’ve got this media franchise where Dubner’s podcast Freakonomics Radio gets a couple million downloads a month. And if I want to get a message out, I can get millions of people through a different medium. It just didn’t make sense to me to keep on puttering around, doing all this work, spending years to write papers that no one cared about when I had other ways of getting my ideas out. And really my interests were elsewhere. I didn’t get any thrill. … The question I should ask myself is why didn’t I retire a long time ago? It made no sense. I’ve just been, I’ve thought, I’ve known for years, it’s the wrong place for me to be. And it just took me a long time to figure out how to extricate myself from academics. And I’m so glad I’m doing it. It’s good for everyone. It doesn’t make any sense to, it feels to me awful to be in a place where I’m not excited and where I’m not contributing materially. So, for me, it feels like a breath of fresh air to be saying, “Hey, I’m not going to be an academic anymore. I’m going to be doing what I really love to do.”

One Year Since the Meltdown at Silicon Valley Bank: Commercial Real Estate and Ongoing Threats

One year ago, in March 2023, Silicon Valley Bank melted down, quickly followed by similar meltdowns at Signature Bank and First Republic Bank. Measured by the nominal size of bank assets, these were three of the four biggest US bank failures in history. (The failure of Washington Mutual Bank in 2008 remains the largest.) Was this just a one-off, or a problem that has already been fixed? Or do the underlying causes linger?

Tobias Adrian, Nassira Abbas, Silvia L. Ramirez, and Gonzalo Fernandez Dionis take on these issues in “The US Banking Sector since the March 2023 Turmoil: Navigating the Aftermath” (IMF Global Financial Stability Notes, March 2024).

I’ve discussed different angles on the failure of Silicon Valley Bank a few times on this blog already: for example, see “An Autopsy of Silicon Valley Bank from the Federal Reserve” (April 28, 2023), “Was Bailing out the Silicon Valley Bank Depositors the Right Decision?” (June 6, 2023), “Why are the Recent US Bank Runs So Much Faster?” (June 20, 2023), and “Spreading Accounts Across Banks for the Deposit Insurance” (November 29, 2023). For an all-in-one-place overview with some additional time for reflection, I recommend Andrew Metrick’s essay, “The Failure of Silicon Valley Bank and the Panic of 2023,” in the Winter 2024 issue of the Journal of Economic Perspectives (38:1, 133-52).

(Full disclosure: I’ve been Managing Editor of the Journal of Economic Perspectives since 1986, so the articles therein are necessarily of interest to me. However, the articles are also freely available to all, compliments of the American Economic Association, from the most recent issue back to the first issue of the journal.)

In some ways, the banks that failed had problems that were not widely shared. Bank deposits in the US are insured up to $250,000 by the Federal Deposit Insurance Corporation. Thus, no one with deposits smaller than that amount has reason for concern about whether their bank might fail. However, companies may at times hold far more than that in their bank accounts, and the companies banking at Silicon Valley Bank, in particular, were holding a lot of money that they had received from venture capitalists. Indeed, a whopping 94% of the deposits at Silicon Valley Bank were above the $250,000 limit, and thus uninsured. This is not the situation for most banks.

However, some problems of Silicon Valley Bank were more widespread across the banking sector. In particular, if you are holding a financial asset that pays a fixed rate of interest, like most US Treasury bonds, and interest rates rise, then the value of that lower-interest-rate bond will decline. Many banks hold US Treasury bonds, although Silicon Valley Bank held more than most.
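To see the mechanics, here is a small, purely illustrative Python sketch: it prices a hypothetical 10-year bond with a 2% coupon, first when market rates are 2% and then after rates rise to 5%. The specific numbers are my own illustration, not figures from the IMF note.

    # Illustrative only: price of a fixed-rate bond before and after rates rise.
    def bond_price(face: float, coupon_rate: float, market_rate: float, years: int) -> float:
        """Present value of annual coupons plus the face value paid at maturity."""
        coupons = sum(face * coupon_rate / (1 + market_rate) ** t for t in range(1, years + 1))
        principal = face / (1 + market_rate) ** years
        return coupons + principal

    print(bond_price(1000, 0.02, 0.02, 10))  # ~1000: priced at par when rates match the coupon
    print(bond_price(1000, 0.02, 0.05, 10))  # ~768: the same bond is worth less after rates rise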

As the IMF authors note,

In March after the failure of SVB [Silicon Valley Bank] and SBNY [Signature Bank], depositors and investors became concerned, first about liquidity and then about the financial soundness of banks matching a certain profile with various attributes including: (1) sizable deposit outflows; (2) high concentrations of uninsured deposits; (3) reliance on borrowing and higher use of liquidity facilities, (4) substantial unrealized losses; and (5) high exposure to CRE [commercial real estate]. Although, the high level of uninsured deposits and sizable deposit outflows were unique characteristics of the failed institutions (SVB, SBNY, and FRB [First Republic Bank]), our analysis identifies a group of small and regional banks that have sizable uninsured deposits to total deposits, sizable unrealized losses, high concentration to CRE, and increased reliance on borrowings after the March 2023 stress.

The issue of commercial real estate wasn’t a problem for Silicon Valley Bank. But many regional banks have made substantial loans to those who are building commercial real estate. With the shift to a work-from-home economy, the value of commercial real estate has dropped and the risk of these loans has increased. The authors note:

Beyond the unrealized losses due to higher interest, the credit risk carried by some institutions, particular their exposure to CRE [commercial real estate], is at the center stage of investors’ fears today. Small and regional banks are substantially exposed with about two thirds of the $3 trillion in CRE exposures in the US banking system (Figure 4, panel 1). In January 2024, shifts in market expectations regarding the timing and pace of interest rate cuts in the United States, coupled with substantial losses announced by a large bank heavily exposed to CRE, prompted a 10 percent decline in the regional bank’s stock index.

The high concentration of CRE exposures represents a serious risk to small and large banks amid economic uncertainty and higher interest rates, potentially declining property values, and asset quality deterioration. … One-third of US banks, mostly small and regional banks, held exposures to CRE exceeding 300 percent of their capital plus the allowance for credit losses, representing 16 percent of total banking system assets (Figure 4, panel 2).

The IMF authors are not doom-saying here. They refer to this issue as a “weak tail” of banks, by which they mean that it takes the confluence of all five factors they mention to cause a bank failure. But reading between the lines, I wouldn’t be surprised to see bank regulators forcing mergers on some of the small and regional banks in the “weak tail,” and trying to do so before a bank’s financial situation becomes dire.

A Slowdown in Global Agricultural Productivity

The growth of agricultural output matters. About 700 million people in the world live below a poverty line of consuming $2.15 per day, and if that poverty line is raised to $3.85 per day, more than 2 billion people fall below it. Raising the standard of living for the very poor requires higher agricultural output. Greater productivity in global agriculture matters as well. In most low-income countries, half or even three-quarters of workers are still in agriculture, and part of economic development is that rising agricultural productivity helps fuel a migration of those workers to higher-paying jobs in other sectors. Even in a high-income context, it’s worth remembering that agriculture isn’t just about food: there are a large and growing number of bio-based consumer and industrial products that aren’t eaten.

Thus, it’s big news that the growth rate of agricultural output and agricultural productivity is slowing down. The US Department of Agriculture keeps the statistics, which Keith Fuglie, Stephen Morgan, and Jeremy Jelliffe use to generate this graph in “World Agricultural Output and Productivity Growth Have Slowed” (USDA Amber Waves website, December 7, 2023).

The black line at the top shows annual growth for global agricultural output. Of course, these annual rates compound over time. Over a decade, a drop of 0.8 percentage points per year (which is what happens between the 2001-2010 period and the 2011-2021 period) means that total agricultural output ends up roughly 8% lower than it would otherwise have been.
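To see the compounding at work, here is a quick back-of-the-envelope Python calculation. It is a minimal sketch: it assumes the 0.8 percentage point gap holds in every year of the decade and ignores the level of the baseline growth rate, which changes the answer only slightly.

    # Back-of-the-envelope: how much does a 0.8 percentage point slowdown
    # in annual growth compound to over a decade?
    years = 10
    slowdown = 0.008  # 0.8 percentage points per year

    # Approximate ratio of output on the slower growth path to output on the
    # faster path after a decade (ignoring the baseline growth rate).
    ratio = (1 - slowdown) ** years
    shortfall = 1 - ratio
    print(f"After {years} years, output is about {shortfall:.1%} below the faster path")
    # Prints roughly 7.7%; equivalently, the faster path ends up a bit over 8%
    # above the slower one (1/ratio - 1).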

The bars then break down that total growth into possible causes. Land expansion, irrigation intensification, and input expansion (like machinery, labor, fertilizers, and pesticides) are measurable. The common approach is that whatever can’t be accounted for by the measurable factors is called “productivity growth,” which becomes a nebulous category that includes everything from improved cultivation methods to better seeds. As you can see, essentially all of the 0.8 percentage point decline in output growth reflects lower productivity growth.

The issues of low-income people and economies with half or more of the workers in agriculture are especially salient in economies across Africa. For an overview of the issues in improving technology there, see Tavneet Suri and Christopher Udry. 2022. “Agricultural Technology in Africa.” Journal of Economic Perspectives, 36 (1): 33-56. They emphasize the problem of enormous heterogeneity across the agricultural sector in countries of Africa, which has made it harder to discover and diffuse new agricultural technologies.

Basic Income Proposals, Labor Market Interactions, and Good Jobs

The primary argument for a government-provided basic income is that it will make those with low incomes better off, by increasing their financial resources and by allowing them to negotiate for better jobs. But the extent to which this conclusion holds true will depend on the individual circumstances of recipients and on what other adjustments happen in response to a basic income. For example, what happens if a basic income is counted as “income” and eligibility for other public support is correspondingly reduced? What happens if firms, recognizing that lower-income workers have an alternative source of support, look for ways to impose charges on employees (say, for training or for uniforms)? For that matter, what if owners of rental housing see a universal basic income as an opportunity to raise the rent? Of course, these kinds of counterreactions in government policy and markets are not what advocates of a universal basic income desire, but that doesn’t mean they won’t happen.

David A. Green delivered the Presidential Address to the Canadian Economic Association on the topic: “Basic income and the labour market: Labour supply, precarious work and technological change” (Canadian Journal of Economics, November 2023, pp. 1195-1220).

Green focuses in particular on potential interactions between a universal basic income and labor markets, and on how economic models which define work as a negative and leisure as a positive can miss important aspects of the debate. Green writes:

In our main models, people value leisure and would always prefer a life of living on benefits without work (if their earnings options place them near the benefit level of income). Supporting such a life for the least productive people in society is considered a policy success in a system in which walls are built to prevent others from joining them. My sense from the discussions with people who might actually need the benefits is that a more accurate model would be one in which people have a basic desire for working (for reasons of self-respect, feelings of self-efficacy and social connection) but would also, in any moment, prefer more leisure. … At the same time, many of the people who are in need of support face multiple barriers to work (health issues, poor work records, insufficient housing, etc.) that mean they face low paying, short duration jobs. That is, we should not think in terms of models in which they can be moved into permanent working states but ones in which they will repeatedly find themselves in need of support mixed with jobs they take in their search for self-respect and connection. The obvious trade-off is that we would want to create a system that provides support in this sporadic work pattern without creating incentives to take up sporadic work patterns for people who might not otherwise face them. I do not know of any papers that take this perspective when thinking about designing transfer policies. …

Returning to the main theme of this paper, a perspective that places more emphasis on supporting self-respect and social respect and their relationship to work has implications for how we think about a basic income. On the one hand, a basic income does well under this perspective. At the very least, it implies providing benefits without the judgement associated with work requirements. In that sense, it is closer to the recommendations of optimal tax theory. On the other hand, it does not pay enough direct attention to the relationship between respect and work. People could (and might) use a basic income as monetary support for their desire to find work and get training. But this is left entirely up to them. Surely, a system that provides direct supports would be more effective.

Here is Green’s argument as to whether a basic income is likely to be a useful tool for reducing the number of “bad jobs” and increasing the number of “just labour exchanges” by giving workers the freedom to leave a job and look for alternatives:

I think the answer is no for two reasons. The first is an ethical argument. If a key problem with bad jobs is that they damage workers’ self-respect and rob them of autonomy then a cash payment is not the correct response. A key tenet of justice is that the remedy should operate in the same realm as the problem. Paying someone cash to compensate them for an insult may, for example, heighten the feeling of being insulted rather than remedy the problem. The right realm would be to alter the work arrangements to remove the direct insult to dignity. I think this is a key problem with economic models when thinking about the justice of worker–firm relationships because we write our models in terms of individual utility and, ultimately, monetary equivalents, thus taking us away from a consideration of different realms of exchange. Put another way, when we broaden our lens to consider outcomes in terms of justice then we are led to consider jobs as locations of self-respect and self-image rather than simply as vehicles of increased income and reduced leisure. In that context, it becomes harder to think about reducing everything to monetary equivalents.

The second reason is that our models miss the nature of the problem at hand. Changing workplace conditions in a situation in which individuals do not freely move among jobs requires the workers to act collectively. Indeed, I believe that an accurate model of the creation of workplace amenities is one that involves both the employer and the set of employees, with a critical mass of the latter needing to act in order for changes to occur. Put another way, the set of employees at a workplace is a community and, in fact, is a key community for a person’s notions of self-respect. One individual in that community deciding to walk away from the job because she does not like the level of amenities will not be sufficient to alter the work conditions because frictions in the labour market prevent market discipline of the kind that happens in a simple neoclassical model. Using a basic income as a response in this situation involves hoping that each of the individuals takes their personal backstop as a means to engage in a community action. This might happen but there is no clear reason why it would. Maybe it would allow them all to walk away from a bad job but, as we have seen, it is not clearly the case that this will lead to a reduction in the proportion of bad jobs because it funds walking away from good jobs (in the hopes of finding an even better job) as well.

A problem with a basic income as a response to an over-abundance of bad jobs is that it ignores both of these issues. It acts as if money is the right realm and that backstopping individual effort is sufficient to correct the problem. But that would be true only in a neoclassical world and in that world, we are either in a compensating differential equilibrium—in which case no response is actually needed—or the more directly effective response is regulation not cash. In a world where individual workers do not have sufficient agency through the market to bring about change, a basic income is not the right response.

To put this another way, at least some advocates of a basic income have, in an indirect way, considerable faith in the operation of a free-market labor system. They believe that with a universal basic income, the decentralized negotiations between workers and employers, along with movement between jobs, will give employers a stronger incentive to offer better jobs. On the other side, Green argues that better jobs are unlikely to result from this market-oriented dynamic: instead, they result from collective action between workers who stay on the job and their employers, as well as from regulation and from government programs tied to issues like assistance with training, child care, and transportation costs.

A Century of Movement Toward Central Bank Independence

There has been controversy in recent decades about the independence of central banks from the rest of the government. The ongoing concern, especially after high inflation rates rocked the US and other countries around the world in the 1970s and 1980s, was that governments have a bias toward inflation. Politicians want the central bank to help finance their spending; conversely, politicians will often oppose anything that might slow the economy, like higher interest rates. However, if the central bank can be given a clear goal, like a low rate of inflation, then it can push back against the inflationary biases of the rest of the government.

Thus, many central banks around the world, like the European Central Bank, have a clear-cut target for low inflation as their only policy goal. The US Federal Reserve has a “dual mandate”: to keep inflation low but also to fight recession. But central banks are often expected to take on other tasks as well, like responding to financial crashes and participating in bank and financial regulation.

Here, I won’t try to resolve these disputes, but instead will simply point out the long-term pattern: the arguments for greater central bank independence seem to be prevailing. David Romelli discusses “Trends in central bank independence: a de-jure perspective” (BAFFI CAREFIN Centre Research Paper N. 217, February 2024; for a readable short overview, https://cepr.org/voxeu/columns/recent-trends-central-bank-independence).

Basically, Romelli collects records of legal changes across 42 categories that reflect more or less central bank independence. I’ll provide the full list of categories below, but they can be grouped into broad areas like how the governor and board of the central bank are chosen and the rules of governance; how monetary policy is set; whether the central bank has specific statutory goals; limitations on the central bank lending directly to the government; whether the central bank is financially independent; and rules governing reporting and disclosure for the central bank.

Taking these together, the basic theme is that central bank independence doesn’t change much up until about 1990, when there is a dramatic shift upward. The shift occurs across countries at all levels of economic development: high-income, middle-income, low-income. The momentum toward greater central bank independence stalls for a few years after the 2007-09 global financial crisis, when legislators and central banks were understandably focused on other topics, but has resumed since then.


Romelli writes:

This paper presents an extensive update to the Central Bank Independence – Extended (CBIE) index, originally developed in Romelli (2022), extending its coverage for 155 countries from 1923 to 2023. The update reveals a continued global trend towards enhancing central bank independence, which holds across countries’ income levels and indices of central bank independence. Despite the challenges which followed the 2008 Global financial crisis and the recent re-emergence of political scrutiny on central banks following the COVID-19 pandemic, this paper finds no halt in the momentum of central bank reforms. I document a total of 370 reforms in central bank design from 1923 to 2023 and provide evidence of a resurgence in the commitment to central bank independence since 2016. These findings suggest that the slowdown in reforms witnessed post-2008 was a temporary phase, and that, despite increasing political pressures on central banks, central bank independence is still considered a cornerstone for effective economic policy-making.

Here’s the list of 42 characteristics affecting central bank independence in Romelli’s index:

The Psychology of Poverty: Is There Evidence for a Trap?

It seems plausible that being poor will have psychological effects. It seems at least possible that some of these psychological effects could, in turn, make it harder to escape poverty. Johannes Haushofer and Daniel Salicath explore the evidence on these and related issues in “The psychology of poverty: Where do we stand?” (Social Philosophy and Policy, 2024, 40:1, 150-184; a pre-print draft is also available as NBER Working Paper 31977). The paper is part of a 12-paper symposium in this issue on topics of “Poverty, Agency, and Development.”

The evidence that lower-income people feel better off when given money is pretty clear. The harder question is whether poverty has particular psychological effects that shape decision-making in a way that may cause one to remain in poverty. This thesis was put forward with force and clarity in a 2013 book by Sendhil Mullainathan and Eldar Shafir, Scarcity: Why Having Too Little Means So Much. I wrote about this line of argument here at the Conversable Economist about a decade ago, mentioning work at the time by Haushofer and by Mullainathan and Shafir, and also quoting some passages from George Orwell’s 1937 book, The Road to Wigan Pier, in which he describes (with frustration and empathy) how many poor and working-class people had become accustomed to living on a “fish and chips” standard that combines unhealthy food, cheap luxuries, gambling, and electronic entertainment (in his day, the radio), in a way that helps to offset their limited economic prospects.

In the more recent essay, Haushofer and Salicath describe the earlier hypothesis by Mullainathan and Shafir in this way:

The authors posit that poverty consumes cognitive resources, including attention, executive control, and working memory, thereby impairing decision-making. Specifically, they suggest that scarcity both reduces overall mental bandwidth and redirects attention toward salient, income-relevant features of a decision problem at the expense of less salient but potentially important other aspects. Two landmark studies accompanied publication of the book. Anuj Shah, Mullainathan, and Shafir showed in a series of lab experiments that participants experiencing scarcity in terms of their experimental “budgets” (of points or time) tended to “over-borrow” from their experimental budgets. Anandi Mani and coauthors primed low- and high-income participants in a mall in New Jersey with financial scenarios and reported reduced executive control and fluid intelligence when low-income participants thought about difficult financial problems. They also report lower performance on similar tasks among sugarcane farmers in India before the harvest (when resources are scarce) relative to after the harvest.

Haushofer and Salicath review a wide range of literature since then on how poverty or negative shocks to income affect psychological factors especially relevant to economic outcomes, like cognitive functioning, short time horizons, trust, or excessive risk-taking. The evidence comes from a wide array of contexts: real-world evidence from shocks related to harvests for farmers in low-income countries; laboratory experiments where people work through structured games and scenarios in a classroom; natural settings where people experience greater or lesser stress on incomes and finances; and others. There are also studies that try to affect the aspirations of poor people (many of these studies are in low-income countries), sometimes by showing videos or presenting seminars that emphasize the possibilities for work and achievement, and sometimes by treating mental health issues more directly.

The authors summarize the evidence this way:

There has been significant progress in recent years, in particular, in establishing causality in the effect of income on psychological well-being; elucidating the precise functional form of psychological well-being with respect to income (satiation); and improving our understanding of the importance of relative income. Most saliently, the causal effect of income on psychological well-being is now robustly established. Research on the effects of scarcity and stress on economic decision-making has also made great strides in the past few years. However, the picture that emerges from these literatures is not as clear; individual studies are often statistically weak, provide conflicting evidence, and replication efforts have not always been successful. While the last word has perhaps not been spoken, in our view, the case for a poverty trap that operates through the effects of poverty on stress, decision-making, and cognition is currently not strong.

Blinder on the Gap Between Economics and Politics

Alan Blinder delivered the 2023 Daniel Patrick Moynihan Lecture in Social Science and Public Policy to the American Academy of Political and Social Science last October on the topic “Economics and Politics: On Narrowing the Gap” (October 25, 2023; for text, see the Peterson Institute for International Economics website at https://www.piie.com/commentary/speeches-papers/economics-and-politics-narrowing-gap; to watch the presentation and accompanying discussion on YouTube, see https://www.youtube.com/watch?v=jaF4Cjundy4). Here are some of the nuggets that caught my eye.

On the extent to which economists influence politics

May I start by dispelling a myth? Perhaps because economists are frequently trotted out to support or oppose policies, perhaps because we have a Council of Economic Advisers right in the White House, perhaps because the powerful Federal Reserve—so ably represented here today–is dominated by economic thinking, many people believe that economists have enormous influence on public policy. In truth, apart from monetary policy, we don’t. Almost a half century ago, George Stigler (1976, p. 351), later a Nobel prize winner, wrote that “economists exert a minor and scarcely detectable influence on the societies in which they live.” Stigler was no doubt exaggerating to make his point. But he had a point. And things have not changed much since.

On the time horizons of politicians

It is a commonplace that politicians have excruciatingly short time horizons. It is often said that they can’t see past the next election, but the truth is far worse. The political pros who advise politicians often can’t see past the next public opinion poll, maybe not even past the next tweet. Their natural time horizon extends only until that evening’s news broadcasts, if that long.

Advice to economists who want to participate in policy debates

Let me now turn to the minority of economists who wish to get engaged in policy. I have two suggestions to offer here … Both suggestions cut deeply against the grain. They are not what we teach in graduate school. …

I have just emphasized that political time horizons are too short for sound economic policy. But it’s also true that economists’ time horizons are too long for politics. Specifically, we economists typically focus on the “equilibrium” or “steady state” effects of a policy change. For example: What will happen eventually after a change in the tax code or a trade agreement? Don’t get me wrong. Those questions are important and highly pertinent to policymaking. We should not forget about them. But they are close to irrelevant in the political world because people don’t live in equilibrium states. Rather, they spend most of their lives in one transition or another. Yet economists often brush off “transition costs” as unimportant details. We shouldn’t. …

The process of adjustment to the superior free-trade equilibrium may be lengthy and painful, involving job losses, reduced incomes for some, decimated communities, and more. Economists know all this but don’t pay it sufficient heed. Politicians, by contrast, live in the real world of ever-present transition costs. They may not be in office long enough to enjoy the steady-state benefits. …

My second suggestion is that economists pay far more attention to issues of fairness rather than doting almost exclusively on efficiency, as we often do. In politics, perceived fairness almost always trumps efficiency—and politicians understand that. … Think, for example, about debates over the tax code, which are hardy perennials in Congress. Economists have a beautiful theory of optimal taxation, built around maximal efficiency. But that theory plays absolutely no role in congressional debates. Zero. Discussions of fairness, on the other hand, dominate the debates. And we get the tax mess that we do. …

So here’s my advice to economists interested in actual–as opposed to theoretical–policymaking. Don’t forget about efficiency. It matters. We are right about that. But we may have to content ourselves with nibbling around the edges, below the political headline level, to make the details of a complex policy package less inefficient. Call it the theory of the third or fourth best.

The Oddness of February

I understand why the calendar adds an extra day every four years. The revolution of the earth around the sun takes approximately 365 and one-quarter days. Every four years, that extra quarter-day adds up to roughly one additional day. The modest rounding error in this calculation (the true year is about 11 minutes shorter than 365.25 days) is offset by steps like dropping the leap day in years ending in “00,” except those divisible by 400, which keep it.
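For the curious, here is the standard Gregorian leap-year rule written out as a short Python sketch; it is the general modern rule, separate from the Roman history discussed below.

    def is_leap_year(year: int) -> bool:
        """Gregorian rule: every fourth year is a leap year, except century
        years (ending in '00'), unless the century year is divisible by 400."""
        if year % 400 == 0:
            return True   # e.g., 2000 kept its leap day
        if year % 100 == 0:
            return False  # e.g., 1900 dropped its leap day
        return year % 4 == 0

    # Quick checks: 2024 and 2000 are leap years; 1900 and 2023 are not.
    assert is_leap_year(2024) and is_leap_year(2000)
    assert not is_leap_year(1900) and not is_leap_year(2023)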

But my question is why February has only 28 days in other years. After all, January has 31 days and March has 31 days. If those two months each donated a day to February, then all three months could be 30 days long, three years out of four, and February could be 31 days in leap years. Every other month is either 30 or 31 days. Why does February only get 28 days?

The answer to such questions leads to a digression into the history of calendars. In this case, Jonathan Hogeback, writing at the Britannica website, traces the answer back to the Roman king Numa Pompilius around 700 BCE, before the start of the Roman Empire. The ancient Roman calendar of that time had a flaw: it didn’t have nearly enough days. As Hogeback writes:

The Gregorian calendar’s oldest ancestor, the first Roman calendar, had a glaring difference in structure from its later variants: it consisted of 10 months rather than 12. In order to fully sync the calendar with the lunar year, the Roman king Numa Pompilius added January and February to the original 10 months. The previous calendar had had 6 months of 30 days and 4 months of 31, for a total of 304 days. However, Numa wanted to avoid having even numbers in his calendar, as Roman superstition at the time held that even numbers were unlucky. He subtracted a day from each of the 30-day months to make them 29. The lunar year consists of 355 days (354.367 to be exact, but calling it 354 would have made the whole year unlucky!), which meant that he now had 56 days left to work with. In the end, at least 1 month out of the 12 needed to contain an even number of days. This is because of simple mathematical fact: the sum of any even amount (12 months) of odd numbers will always equal an even number—and he wanted the total to be odd. So Numa chose February, a month that would be host to Roman rituals honoring the dead, as the unlucky month to consist of 28 days.

This discussion does explain why February would be singled out, since it was the month of rituals honoring the dead. In Numa’s calendar, the 355-day year would be made up of 11 months that had the lucky odd numbers of 29 or 31 days, plus unlucky February.

The discussion also explains why months that start with the prefix “Oct-” (eight), “Nov-” (nine), and “Dec-” (ten) are actually months 10, 11, and 12 in the calendar. Those names were originally part of a 10-month calendar year.

But one question remains unanswered: Why did the Romans of that time view odd numbers as lucky and even numbers as unlucky? I suppose that explaining any superstition is hard, but I’ve never seen a great explanation. The materials for a Dartmouth course on “Geometry in Art and Architecture” describe Pythagorean feelings about odd and even numbers. For those of you keeping score at home, Pythagoras lived about two centuries after Numa Pompilius. The Dartmouth course material summarizes aspects of “Pythagorean Number Symbolism”:

Odd numbers were considered masculine; even numbers feminine because they are weaker than the odd. When divided they have, unlike the odd, nothing in the center. Further, the odds are the master, because odd + even always give odd. And two evens can never produce an odd, while two odds produce an even. Since the birth of a son was considered more fortunate than birth of a daughter, odd numbers became associated with good luck.

Various mentions of the luckiness of odd numbers recur over time. A few centuries later, in the first century BCE, the poet Virgil has the character Alphesiboeus (a shepherd who sings about love rituals) say in Eclogue VIII (from the A.S. Kline translation):

Bring Daphnis home, my song, bring him home from town.

First I tie three threads, in three different colours, around you

and pass your image three times round these altars:

the god himself delights in uneven numbers.

Bring Daphnis home, my song, bring him home from town.

Or, leaping ahead a millennium and a half, at the start of Act V of The Merry Wives of Windsor, Shakespeare has Falstaff say:

Prithee, no more prattling. Go. I’ll hold. This
is the third time; I hope good luck lies in odd numbers.
 Away, go. They say there is divinity in odd
 numbers, either in nativity, chance, or death.
Away.

While I acknowledge this history of a belief in odd numbers, as a person born on an even day of an even month in an even year, I’m not predisposed to accept it. But it’s interesting that modern photographers have a guideline for composing photographs called the “rule of odds.” Rick Ohnsman at the Digital Photography School, for example, describes it this way:

This is where the rule of odds comes into play, a deceptively simple yet powerful tool in your photographic arsenal. It’s all about arranging your subjects in odd numbers to craft compositions that are naturally more pleasing to the eye. Unlike more static guidelines, the rule of odds offers a blend of structure and organic flow, making your images both aesthetically pleasing and impressively compelling.

The revised calendar of Numa Pompilius couldn’t last. With only 355 days, it didn’t reflect the actual period of the earth revolving around the sun, and thus led to further revisions which are a story in themselves.

But when you think about it, the question of February having 28 days all goes back to Numa Pompilius and the superstitions about odd numbers. The modern calendar has 365 days in a typical year. You might think that the obvious way to divide this up would be to start off with 12 months of 30 days, and then add five days. Indeed, the ancient Egyptians had a calendar of this type, with five “epagomenal” or “outside the calendar” days added each year.

The preference over the last two millennia, at least since the time of Julius Caesar, has been to have 12 months, with a few of them a day longer. But even so, why not have, in a typical year, five months of 31 days and the rest with 30? The “problem,” I think, is that most months would then have unlucky totals of an even number of days. By holding February to 28 days rather than 30, you can redistribute two days from February and have 31 days in January and March. Thus, only four months have an even total of 30 days every year (“Thirty days hath September, April, June, and November …”), and seven months always have the luckier odd total of 31 days. In leap years, when February has 29 days, eight months have an odd number of days. I think this makes February 29 a lucky day?
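A quick tally confirms the arithmetic. This is just a minimal check using the familiar modern month lengths.

    # Month lengths in a common (non-leap) year of the modern calendar.
    months = {"Jan": 31, "Feb": 28, "Mar": 31, "Apr": 30, "May": 31, "Jun": 30,
              "Jul": 31, "Aug": 31, "Sep": 30, "Oct": 31, "Nov": 30, "Dec": 31}

    assert sum(months.values()) == 365  # the common-year total

    odd_months = [m for m, d in months.items() if d % 2 == 1]
    even_months = [m for m, d in months.items() if d % 2 == 0]
    print(len(odd_months), "months with an odd (lucky) number of days:", odd_months)
    print(len(even_months), "months with an even number of days:", even_months)
    # Seven months have 31 days; four have 30, and February has 28.
    # In a leap year, February moves to 29, so eight months are odd.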

Pricing at Wendy’s: A Surge or a Discount?

The fast-food chain Wendy’s finds itself in a kerfuffle over comments by its chief executive officer Kirk Tanner that it may shift to digital menu boards, which would in turn allow the firm to adjust prices by time of day. The usual suspects immediately asserted that Wendy’s was about to commit “surge pricing” by raising prices at peak meal hours; the company quickly responded that it was only going to use the mechanism for “discount pricing” at non-peak hours.

Of course, many restaurants have traditionally had “early bird” or “happy hour” specials for those eating in non-peak hours. Similarly, movies and shows often have matinee pricing, where tickets for shows in the afternoon are cheaper than those in the evening, or tickets for shows from Sunday through Thursday night are cheaper than on Friday or Saturday night. During the holiday shopping season, lots of retailers have cheaper prices for those who show up to buy during the opening hours of the store on certain dates, and more expensive prices for those who come later.

The trick for any seller, of course, is that it needs to brand all such time-of-day or time-of-week variation as a “discount” for the cheaper times, which is laudable, rather than as a “surge” during the more expensive times, which would be condemned. In a similar spirit, gas stations and other retailers often list a “discount price” for those who pay cash, but never a “surge price” for those who pay with credit cards. Public transit systems often have a price “discount” for those who travel at off-peak hours, but never a “surge” price for those who travel at peak times. The Wendy’s CEO committed executive malpractice by not immediately emphasizing how the company was going to offer price discounts, not price surges.

A few years back, I wrote about dynamic pricing in a number of contexts: some historical episodes when Coca-Cola talked about raising or lowering prices at soft drink machines on hot days; when Disneyland or certain ski resorts charge higher prices at peak times than off-peak times; when prices for rideshare services like Uber go up in situations where the quantity of rides demanded is high; when the price of electricity is adjusted up during periods of high demand; and when toll roads charge more when traffic is especially congested. There are of course complex issues across these cases. But the knee-jerk claim that prices should generally be constant across a wide range of conditions, including time of day and day of week, except that “discounts” are socially beneficial while “surges” are socially harmful, substitutes outrage-of-the-day rhetoric for any attempt at making meaningful distinctions.

Warren Buffett on How Size Has Done Him In

Each year, investor extraordinaire Warren Buffett publishes a letter to the shareholders of Berkshire Hathaway: a personalized view of how he sees the previous year, the role of capitalism, and (this year) the investment strategies of his sister Bertie. This year, he also admits that the company he has built no longer has any possibility of eye-popping growth, because of how large it has become.

Think of it this way: say that you start off with an investment firm that is worth 0.1% of the net worth of the top 500 companies in the US. You do a superior job of investing that money, and double your share to 0.2%. You then double again and again and again to 0.4%, 0.8%, 1.6%, 3.2%, and 6.4%. Each of these steps would make you a very successful investor. But notice that each doubling gets harder, because each doubling requires a greater gain in the size of your firm relative to the market. Doubling from a small base is a lot easier than doubling from a large base. And Buffett is saying that his firm has become so large that future doublings are somewhere between hard and impossible. Here’s his comment from this year’s letter:

Our goal at Berkshire is simple: We want to own either all or a portion of businesses that enjoy good economics that are fundamental and enduring. Within capitalism, some businesses will flourish for a very long time while others will prove to be sinkholes. It’s harder than you would think to predict which will be the winners and losers. And those who tell you they know the answer are usually either self-delusional or snake-oil salesmen. At Berkshire, we particularly favor the rare enterprise that can deploy additional capital at high returns in the future. Owning only one of these companies – and simply sitting tight – can deliver wealth almost beyond measure. …

This combination of the two necessities I’ve described for acquiring businesses has for long been our goal in purchases and, for a while, we had an abundance of candidates to evaluate. If I missed one – and I missed plenty – another always came along.

Those days are long behind us; size did us in, though increased competition for purchases was also a factor. Berkshire now has – by far – the largest GAAP net worth recorded by any American business. Record operating income and a strong stock market led to a yearend figure of $561 billion. The total GAAP net worth for the other 499 S&P companies – a who’s who of American business – was $8.9 trillion in 2022. (The 2023 number for the S&P has not yet been tallied but is unlikely to materially exceed $9.5 trillion.)

By this measure, Berkshire now occupies nearly 6% of the universe in which it operates. Doubling our huge base is simply not possible within, say, a five-year period … There remain only a handful of companies in this country capable of truly moving the needle at Berkshire, and they have been endlessly picked over by us and by others. Some we can value; some we can’t. And, if we can, they have to be attractively priced. Outside the U.S., there are essentially no candidates that are meaningful options for capital deployment at Berkshire. All in all, we have no possibility of eye-popping performance.

Nevertheless, managing Berkshire is mostly fun and always interesting. On the positive side, after 59 years of assemblage, the company now owns either a portion or 100% of various businesses that, on a weighted basis, have somewhat better prospects than exist at most large American companies. By both luck and pluck, a few huge winners have emerged from a great many dozens of decisions. And we now have a small cadre of long-time managers who never muse about going elsewhere and who regard 65 as just another birthday …

With that focus, and with our present mix of businesses, Berkshire should do a bit better than the average American corporation and, more important, should also operate with materially less risk of permanent loss of capital. Anything beyond “slightly better,” though, is wishful thinking.

Pointing out that it’s easier to achieve a fast growth rate from a tiny base than from a large base is a lesson worth remembering in many contexts. As one example, the economy of China had very rapid growth for some decades, but it started from an exceptionally low base. There are multiple reasons for China’s current economic woes, but one unavoidable issue is that when you get bigger, high growth rates get harder to achieve.
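To make the doubling arithmetic concrete, here is a small Python sketch of the hypothetical investor described earlier, who starts at 0.1% of the market and doubles repeatedly; the numbers are illustrative, not Berkshire’s actual figures.

    # Hypothetical investor starting at 0.1% of the market and doubling repeatedly.
    share = 0.001  # 0.1% of the total net worth of the top 500 US companies

    for step in range(1, 7):
        gain_needed = share  # each doubling requires adding the current share again
        share *= 2
        print(f"Doubling {step}: share reaches {share:.1%}, "
              f"requiring a gain equal to {gain_needed:.1%} of the whole market")

    # The first doubling requires gaining 0.1% of the market; the sixth requires
    # gaining 3.2%, or 32 times as much, which is why each doubling gets harder.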