How Does the American Economic Association Invest?

The American Economic Association, the professional association for academic economists, holds about $50 million in investment assets. How do the professional economists invest?

To be clear, I’m not giving away any secret information here. The AEA is a nonprofit, and it needs to report publicly. A “Report of the Treasurer” is published each year in the AEA Papers and Proceedings (for the 2022 version, with spending in 2020 and 2021 and projections for 2022, see pp. 650–654). A Budget and Finance Committee manages the Association’s financial assets. Back in April 2017, it set the following “target portfolio allocations,” which have remained the same since then (a quick sketch of what these weights imply in dollar terms follows the list):

Total Stock Market Index Fund 35%
FTSE All-World Ex-US Fund 25%
Long-Term Investment Grade Fund 12%
Value Index Fund 10%
REIT Index Fund 5%
PIMCO PFORX Foreign Bond Fund 5%
Intermediate Bond Fund and local operating cash 8%
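For readers who like to see the arithmetic, here is a minimal sketch (mine, not the AEA’s procedure) of what those target weights imply in dollar terms for an illustrative portfolio of $50 million:

```python
# Minimal sketch, not the AEA's actual process: translate the target allocation
# percentages into dollar amounts for an illustrative $50 million portfolio.
target_weights = {
    "Total Stock Market Index Fund": 0.35,
    "FTSE All-World Ex-US Fund": 0.25,
    "Long-Term Investment Grade Fund": 0.12,
    "Value Index Fund": 0.10,
    "REIT Index Fund": 0.05,
    "PIMCO PFORX Foreign Bond Fund": 0.05,
    "Intermediate Bond Fund and local operating cash": 0.08,
}

assert abs(sum(target_weights.values()) - 1.0) < 1e-9  # the targets sum to 100%

portfolio_value = 50_000_000  # illustrative round number from the post
for fund, weight in target_weights.items():
    print(f"{fund}: ${weight * portfolio_value:,.0f}")
```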

Perhaps this goes without saying, but this is not intended as personal investment advice. There is no reason for any individual investor to follow the AEA. Planning for your personal retirement, for example, raises quite different issues than the choices of a professional academic association. That noted, some of the broader principles guiding the choice of investments may be useful.

  1. The AEA does not deal with individual stocks, only with funds.
  2. The funds are passively managed, most of them through Vanguard, with very low costs of managing the money; they are not actively managed, which would involve higher costs–and would also require choosing an active investment strategy.
  3. The AEA allocations have remained the same since 2017: clearly, there is no attempt here to jump in and out of various allocations.
  4. About 70% of the funds are invested in stock markets.
  5. The AEA makes some effort to diversify into international stock markets. The first two entries on the list operate through Vanguard, which is to say that they are low-cost passive funds, the first in the US market and the second focused on outside-the-US stock markets.
  6. The AEA also diversifies outside the stock market, into real estate and bonds.
  7. The one category where the AEA is relying to some extent on the irrationality of markets is the “Value Index Fund.” According to Vanguard, “This fund invests in stocks of large U.S. companies in market sectors that tend to grow at a slower pace than the broad market; these stocks may be temporarily undervalued by investors.” Another way to look at this fund is that it gives large US companies a slightly greater weight in the overall portfolio than they would otherwise have.  

You may also be wondering: Why does the American Economic Association even have $50 million? Starting back in 1969, the AEA got entrepreneurial. It started a project called EconLit, which creates an index of all articles in professional journals of economics. Back in 1969, this project was on paper and covered 182 journals. Now, it is a searchable online index covering over 1,000 economics journals, with abstracts describing many of the articles and with the ability to search the text of many of the underlying articles, too. This index is sold by subscription to research libraries around the world. In 2020, EconLit cost about $800,000 to produce, but brought in about $4.5 million in revenues–about 40% of total AEA revenues and by far the largest single amount. In turn, the AEA uses this financial cushion for other purposes: to keep membership dues low and to support nine different academic journals, as well as annual meetings, an index of job openings for academic economists, and other activities.

So one main reason the AEA has money to invest is EconLit. The other big reason is that the stock market has surged: for example, the S&P 500 stock index rose from about 800 back in 2009 to above 4,000 now, even after the declines since last December. In that situation, it would take a certain perverse genius not to make money in the stock market. But of course, what the stock market has given it can also take away, and that $50 million financial cushion as measured in September 2021 would already be substantially smaller today.

The New Opportunity Zones

The idea of an “opportunity zone” is to offer an incentive, whether through a grant or a tax break, for people and firms to invest in a place with relatively low levels of income and jobs. Proposals along these lines have floated around for a while under various names like “empowerment zones,” “enterprise communities,” “renewal communities,” and others. But the currently existing program of opportunity zones is a tax break that was discussed and proposed during President Obama’s administration, but became law as part of the Tax Cuts and Jobs Act signed into law by President Trump. The most recent issue of Cityscape, published by the US Department of Housing and Urban Development, has a symposium of research titled “An Evaluation of the Impact and Potential of Opportunity Zones” (2022, volume 24, number 1). I’ll provide the full Table of Contents below. Here is an overview of some main insights.

In his introduction to the symposium, Daniel Marcin gives this quick overview of how the law works:

Opportunity Zones allow investors with capital gains to reinvest that money into Qualified Opportunity Funds (QOF), which then invest in OZs. Doing so has three main benefits.

1. The capital gains tax due on the original investment sale is deferred until the sale of the QOF investment or the end of 2026, whichever comes first.

2. If the investor holds the QOF investment for 5 years, the cost basis of the investment is increased by 10 percent. If held for 7 years, or 2 additional years, the cost basis increases by an additional 5 percent.

3. If the QOF investment is held for 10 years, then no tax is due on any gains on the OZ investment (IRS, 2021a).
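To make the mechanics concrete, here is a stylized sketch of those three benefits. It is illustrative only, not tax advice: it ignores the 2026 deferral deadline and many other qualifications, and the 20 percent capital gains rate is just a placeholder.

```python
# Stylized sketch of the three Opportunity Zone benefits described above.
# Illustrative only: it ignores the 2026 deadline for the deferred gain and
# many other qualifications, and the capital gains rate is a placeholder.

def oz_tax_sketch(deferred_gain, oz_appreciation, years_held, cap_gains_rate=0.20):
    # Benefit 2: basis step-up on the deferred gain, 10% after 5 years,
    # plus another 5% after 7 years.
    if years_held >= 7:
        step_up = 0.15
    elif years_held >= 5:
        step_up = 0.10
    else:
        step_up = 0.0
    # Benefit 1: the deferred gain (less the step-up) is eventually taxed.
    tax_on_deferred_gain = deferred_gain * (1 - step_up) * cap_gains_rate
    # Benefit 3: gains on the OZ investment itself go untaxed if held 10+ years.
    tax_on_oz_gain = 0.0 if years_held >= 10 else oz_appreciation * cap_gains_rate
    return tax_on_deferred_gain + tax_on_oz_gain

# Example: a $100,000 deferred gain rolled into a QOF that appreciates by $50,000.
for years in (4, 5, 7, 10):
    print(f"{years} years held: ${oz_tax_sketch(100_000, 50_000, years):,.0f} in tax")
```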

As Marcin is quick to point out, this structure means that all evaluations of the program are preliminary. It took some time for the IRS to write detailed rules for how the program would work, and there has not yet been time for the holding periods of 5, 7, and 10 years to be completed.

An obvious question is what qualifies as an “opportunity zone.” The short answer is that state governors could make a list, within their state, of “low-income” areas defined as “generally a census tract with (a) a 20-percent poverty rate or higher, (b) a median family income of 80 percent or less than the metropolitan median family income, or (c) if not located in a metropolitan area, a median family income less than 80 percent of the state median family income.” For the record, a “census tract” is an area that usually contains from 1,200 to 8,000 people. There was also some wiggle room about choosing census tracts right next to these low-income areas. “Executives could select 25 percent of all tracts that were eligible, with a minimum of 25 in a state. In total, 8,766 OZs were designated.”
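As a rough sketch of that eligibility test (simplified; the statutory definition has more detail and exceptions than the quoted summary):

```python
# Rough sketch of the "low-income" census tract test described above.
# Simplified: the statute has additional details and exceptions.

def tract_is_eligible(poverty_rate, median_family_income, benchmark_income):
    """benchmark_income is the metropolitan median family income if the tract
    is in a metro area, otherwise the state median family income."""
    return (poverty_rate >= 0.20
            or median_family_income <= 0.80 * benchmark_income)

# Example: a tract with a 15% poverty rate and a $45,000 median family income,
# in a metro area whose median family income is $60,000.
print(tract_is_eligible(0.15, 45_000, 60_000))  # True, since 45,000 <= 0.8 * 60,000
```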

As you can imagine, there was also added complexity in how to decide if a business was really “in” an opportunity zone. For example, what if an existing business opened a small office in an opportunity zone? Not eligible. What if the business was physically based in the opportunity zone, but left the opportunity zone to provide services (like a housecleaning or landscaping company)? Not eligible. In oversimplified terms: “The IRS ruled that, to qualify as an Opportunity Zone business, that business must earn at least 50 percent of its gross income from activity inside an OZ. … Opportunity Zone business property must be used “substantially all” of the time in an OZ.”

In their essay, Blake Christian and Hank Berkowitz argue: “The federal OZ program is arguably one of the most flexible, impactful, and bipartisan tax programs for helping disadvantaged communities in half a century.” They cite estimates that $75 billion had been invested in the opportunity zone program by the end of 2020. They point out that the program doesn’t just cover real estate, but also applies to energy projects (like companies installing solar panels or insulation), infrastructure, active businesses, and public-private partnerships. Notice also that to get the tax breaks, the investor needs to commit to the investment for some years–in other words, this isn’t a tax break for flipping properties or other quick in-and-out investments. A common pattern seems to be that the opportunity zone program is combined with other business incentives, thus providing additional capital to (for example) a pre-existing program aimed at building more low-income housing.

It is fiendishly difficult to evaluate a program like opportunity zones. It’s unlikely that governors chose at random when selecting among the census tracts eligible for opportunity zones. They may well have picked areas that they thought were more “ripe” for development. The funds going into opportunity zones could have been invested elsewhere–perhaps even in a neighboring census tract. In a broad sense, the gains from a program like this come from the sensible idea that investments to create jobs and economic activity in a depressed area have a bigger social benefit than similar investments in another area, because there are bigger spin-off benefits of improving economic activity in a depressed area.

That said, various studies in this issue give some promising results, for a program that has only existed for three years. One study found that OZ areas saw faster growth of jobs and enterprises, by about 2%. Another study found “that OZ tracts saw lower home price appreciation than did non-selected tracts before 2017. After 2017, however, OZ tracts had a 6.8-percent greater home price appreciation through 2020 over the eligible-but-not-selected tracts.” Another study found that if gentrification is defined as a “greater-than-average change in the percentage of tract residents older than age 25 with a bachelor’s degree,” then in Washington, DC, “most OZs do not have a gentrification score higher than the city average.” In short, the gains from opportunity zones seem real, if modest.

_________________________

Symposium
An Evaluation of the Impact and Potential of Opportunity Zones

“Guest Editor’s Introduction,” by Daniel Marcin

“Enhancing Returns from Opportunity Zone Projects by Combining Federal, State, and Local Tax Incentives to Bolster Community Impact,” by Blake Christian and Hank Berkowitz

“Missed Opportunity: The West Baltimore Opportunity Zones Story,” by Michael Snidal and Sandra Newman

“The Failure of Opportunity Zones in Oregon: Lifeless Place-Based Economic Development Implementation Through a Policy Network,” by James Matonte, Robert Parker, and Benjamin Y. Clark

“A Typology of Opportunity Zones Based on Potential Housing Investments and Community Outcomes,” by Janet Li, Richard Duckworth, and Erich Yost

“Classifying Opportunity Zones—A Model-Based Clustering Approach,” by Jamaal Green and Wei Shi

“The Impact of Qualified Opportunity Zones on Existing Single-Family House Prices,” by Yanling G. Mayer and Edward F. Pierzak

“Gentrification and Opportunity Zones: A Study of 100 Most Populous Cities with D.C. as a Case Study,” by Haydar Kurban, Charlotte Otabor, Bethel Cole-Smith, and Gauri Shankar Gautam

“Collaboration to Support Further Redevelopment and Revitalization in Communities with Opportunity Zones,” by Michelle Madeley, Alexis Rourk Reyes, and Rachel Bernstein

“Tax Cuts, Jobs, and Distributed Energy: Leveraging Opportunity Zones for Equitable Community Solar in the D.C. Region,” by Sara Harvey

“Census Tract Boundaries and Place-Based Development Programs,” by Joseph Fraker

Competition Via Variety

One of the standard concerns about measuring economic output is that it doesn’t take variety into account, just the total amounts bought and sold. If variety increases, consumers are likely to feel better off from the wider array of choices, but if the same amount is bought and sold, GDP does not care about variety. This seems problematic, especially because the number of choices available to shoppers seems to be rising, and the large packaged goods companies are using additional variety–rather than, say, lower prices or higher quality of existing goods–as a primary method of competing.

Áine Doris reviews and discusses this trend in “Do Shoppers Have Too Many Choices? US consumer goods are proliferating rapidly, with implications for consumers and companies” (Chicago Booth Review, May 23, 2022). Who among us has not stood for a few minutes, gobsmacked in the grocery, trying to sort out which size and variety of snack chip should go into the cart? As Doris writes: “Bags of Tostitos Scoops! tortilla chips share shelf space with bags of Tostitos Scoops! Multigrain or Tostitos Hint of Lime, while cans of Diet Coke vie with those of Coca-Cola California Raspberry and Coca-Cola Cherry Vanilla Zero Sugar. The seemingly endless options stretch beyond the food and drink aisles to shelves offering diapers, detergents, stationery, soaps, coffee, cosmetics, and more.”

A study looking at 118 product groups in consumer packaged goods found that the “number of `niche’ alternative products increased by 4.5 percent a year from 2004 to 2016,” which works out to a total increase of about 70% over that time.
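A quick check of the compounding arithmetic, using the growth rate in the quote:

```python
# Compound 4.5% annual growth over the twelve years from 2004 to 2016.
total_growth = 1.045 ** 12 - 1
print(f"{total_growth:.0%}")  # roughly 70%
```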

However, this growth in variety has been accompanied by consolidation in the number of companies in the consumer packaged goods category. Another study found: “[A]lmost half of US product markets—including food and beverages, health care, apparel, and electronics, as well as a slew of nonmanufacturing markets, such as insurance and financial services—were `highly concentrated’ or dominated by two or three multinationals. Over time, the likes of General Mills, Nestlé, Procter & Gamble, and Unilever have systematically acquired and subsumed other consumer brands.” 

At least so far, this combination of more dominant firms selling a greater variety of products seems to have worked out fine for consumers. During this time, markups for these products remained about the same, so consumers were in effect getting more variety without boosting the profit margins of firms. Doris writes: “Although there are fewer companies offering products in a certain sector (say, food products), there are more companies offering them in a specific market (such as chips). That is creating more competition at the level of individual products, which keeps prices low.”

In part, what seems to be happening is that small companies producing these kinds of consumer packaged goods do the innovation, and then if a small company has a successful product, it is bought by one of the big companies. The big companies have economies of scale in production, distribution, and marketing, so they can sell the new product at a lower cost than the smaller firm.

Access to the web, targeted advertising, and home delivery are another major change. A small-scale producer doesn’t have to scratch and claw for shelf space in a major grocery or retail chain. Instead, it’s possible to use targeted web advertising and promotions, which are quite inexpensive relative to the historical patterns of large-scale advertising on television or in print, to bring in a group of customers who can then have the product delivered directly to their home. Doris cites some examples:

The upshot is that it got easier for newcomers to disrupt markets. The internet eradicated the need for a huge marketing budget or even a physical store. Warby Parker started advertising and selling affordable glasses online, then upturned the eyeglass industry. In apparel, Bonobos set about redefining the retail experience in men’s apparel by blending physical and online platforms; consumers can secure a good fit in a store but do their actual shopping online. (Walmart took notice and bought the company in 2018.) Dollar Shave Club, known for its viral video advertisements on social media, destroyed Gillette’s stranglehold by creating an alternative to expensive razors and selling them online, giving users the option to purchase upgrades or add-ons. Unilever purchased it in 2016. Diapers.com, founded in 2005, was scooped up by Amazon five years later.

From a broad social point of view, it’s not clear how to think about these patterns. On one hand, the growth of big firms that could exploit market power to raise profits is a traditional source of concern. The possibility that these big firms could entrench themselves by buying up the smaller firms that might someday grow into challengers to their dominance has been a real concern with the big tech companies like Google and Facebook. On the other hand, the current system does give entrepreneurs some powerful incentives, because if they can just get their company established, they can cash out by selling it to a bigger firm. At least so far, consumers are getting more variety and (at least in the available data) the big consumer packaged goods firms have not been charging higher markups.

As Doris points out, there is also a possible conflict brewing here. The new consumer packaged goods are often designed around values like being locally produced or being healthier to consume. If this kind of product is bought up by a big company, but keeps its brand name, will consumers still feel the same way about it a few years later? Does the big company even know how to promote a locally produced, health-based product?

A final issue is called the “paradox of choice,” which is the idea that consumers who are overloaded by choices may be less happy with their ultimate choice, because of a fear that they missed out on something better, or may even decide to shy away from buying at all. There is limited evidence of this happening in some contexts. For example, one study looked at a restaurant delivery platform that kept adding more options. Up to some point, the additional options brought in more business; beyond that point, additional options seemed to discourage possible consumers and brought in less business. But from a broad perspective, the appetite for additional variety in consumer packaged goods seems to be very strong, and so the pattern of large firms using variety to compete within narrow categories of products seems likely to continue.

The Shape of the Federal Budget

The release by the Congressional Budget Office of “The Budget and Economic Outlook: 2022 to 2032” (May 2022) offers an excuse to review a few facts and projections about the federal budget.

A first fact is that although one can always hear assertions about how federal spending or federal taxes have been climbing out of control for years or decades, the actual spending and taxing totals for the federal government have been remarkably stable for a half-century. The horizontal dashed blue line at 20.8% shows average federal spending as a percent of GDP from 1972 through 2021. The horizontal dashed green line at 17.3% shows average federal revenues during that time. Yes, spending spiked during the pandemic and to a lesser extent during the Great Recession of 2008-2010. There have been other times, like the dot-com boom of the 1990s, when spending was well below its average and taxes were well above. But it is remarkable to me that the projections for the next year or so have total federal spending and taxes very much in line with what they have been for a half-century.

But although the totals for federal spending haven’t changed much, the main areas of spending have shifted substantially–and are projected to keep shifting. Social Security spending is up from 3.2% of GDP in 1972 to 4.9% of GDP now, and headed higher. Major health care programs, Medicare and Medicaid leading the way among them, were 1% of GDP in 1972, are now 5.8% of GDP, and are headed higher. Net interest spending rises and falls, but it’s now 1.6% of GDP and headed higher. Conversely, defense spending was 6.5% of GDP back in 1972, and it’s now at 3.1% of GDP and trending lower. As I have noted before, the nature of the federal government is shifting in front of our eyes. It’s more and more about cutting checks for older people and health care providers, and less and less about defense or about building social infrastructure of various kinds.

However, after five decades of more-or-less stability, the pressures are building on US government deficits and debt. This figure shows the standard measure of federal debt held by the public. (The reason for “held by the public” is so that it doesn’t count situations like the trust fund for Social Security, where the federal government holds its own debt–and thus where one part of the government has borrowed from the rest.) As you can see, the rise in the debt/GDP ratio from 2009 up to the present is similar to the rise that happened when the costs of World War II were paid for with borrowed money.

However, federal debt/GDP dropped sharply after World War II, and kept falling for decades. At present, the already high debt/GDP ratio is projected to stay roughly the same for a decade or so, and then to climb higher, due largely to the projections for higher health care and Social Security spending, combined with the way in which this higher spending would drive up federal interest payments.

There are options here. One is to accept that the elderly share of the US population is growing, and that Social Security and federal health care costs are going to keep rising, which implies that federal spending is going to undergo a long-term pattern of moving above its half-century average of 20.8% of GDP. The sharp increases in borrowing projected after the mid-2030s are likely to be undesirable, so there will be pressure for federal taxes to start edging higher as well. The alternative is to think seriously about America’s health care and retirement systems, and to stop kicking those two particular cans down the road.

A Short Tale of Publication Expenses and the Internet

The total annual budget for the Journal of Economic Perspectives, where I work as Managing Editor, is projected to be $736,000 in 2022. If you go back 15 years to 2007, you find that total expenses for the journal were $863,000. Why has the nominal budget dropped by 15% over 15 years? The answer is online publication.

(If the overall costs of running this academic journal seem almost ludicrously low for those of you who deal with much larger budgets in academic or business settings, remember that we don’t pay authors anything at all.)

Back in 2007, the JEP still printed about 20,000 copies of each issue–which had dropped off from about 25,000 a few years earlier. The costs of paper, printing, and postage were almost half the total cost of the journal. Now we print less than 4,000 copies of each issue, which are mostly mailed to libraries. The cost savings from printing fewer copies are dramatic. Of course, other costs do rise over time. But over the last 15 years, the rise in other costs like salaries and web maintenance has not offset the drop in printing and mailing costs.

However, since 2011 the Journal of Economic Perspectives has been freely available online. The publisher, the American Economic Association, now publishes nine journals, which are bundled together as one subscription for members and libraries. The AEA recognized that few economists were becoming members of the AEA and few libraries were subscribing to AEA publications just to get the Journal of Economic Perspectives. Thus, making the journal freely available did not involve a revenue loss. All the archives back to the first issue in 1987 are freely available, too.

About 2.5 million individual JEP articles are downloaded annually from the American Economic Association website. It’s also possible to download an entire issue: about 70,000 entire issues were downloaded last year. These totals don’t count downloads of JEP articles from other sources: for example, JEP articles were downloaded about 700,000 times from JSTOR last year, and many JEP articles are available on personal websites or as part of reading lists, too.

It’s not obvious how to calculate a gain in productivity for the JEP over the last 15 years. The quantity of labor going into the journal is roughly the same. In physical form, the journal looks much the same: that is, roughly the same number of pages and articles each year. But with free online distribution, it seems clear that the JEP is vastly more available to the universe of possible readers around the world than it was 15 years ago, at a total cost that is substantially lower. This kind of productivity gain will not be reflected in official productivity statistics, but similar changes are happening in many ways, not just academic journals.

(The total publication expenses for the Journal of Economic Perspectives, along with an overview of the budget for the American Economic Association as a whole, are published each year in the “Report of the Treasurer,” which appears in the annual AEA Papers and Proceedings (for example, May 2022, pp. 650-654).)

Designing Tax Incentives for Charitable Giving

There is a long-standing if low-simmering controversy about whether tax breaks for charitable giving should even exist. The standard argument for a tax break is that when people give money to an officially recognized nonprofit, rather than consuming it themselves, they are in part contributing to the broader good–and so some tax break is appropriate. The counterargument is that when you donate, you are making a choice about your own income and preferences. If a Harvard or Princeton or Stanford alum decides to make big-money donations to their already-wealthy university, is that really worth a tax break? If a wealthy family donates to an arts organization that just happens to employ a couple of their family members as “associate director of marketing” or a similar title, is that worth a tax break?

Historically, the tax break for charitable contributions has been in the form of a tax deduction: for those not up-to-speed on the US tax system, this means that your donation to charity can be deducted from the income on which you owe taxes. However, the US tax system has long been constructed so that it doesn’t pay for most people to “itemize” their deductions, and these tax deductions end up applying mostly to the well-off. In addition, since those with higher incomes face higher marginal tax rates, their tax benefit from charitable giving is higher too. With these issues in mind, the history of the tax deduction for charitable giving is mostly about very high-income families.
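A tiny illustration of why the benefit skews toward higher incomes (the tax rates below are just illustrative round numbers, not a statement about any particular taxpayer):

```python
# Illustrative only: the value of a charitable deduction is roughly
# the donation times the donor's marginal tax rate, and only itemizers get it.

def tax_savings(donation, marginal_rate, itemizes):
    return donation * marginal_rate if itemizes else 0.0

print(tax_savings(1_000, 0.37, itemizes=True))   # high-income itemizer: $370 saved
print(tax_savings(1_000, 0.12, itemizes=False))  # typical non-itemizer: $0 saved
```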

Before the 2017 tax law, about one-third of federal income tax returns found it worthwhile to itemize deductions. But one change in that law was that the “standard deduction” everyone receives without itemizing went up, and as a result, only about 10% of tax returns now itemize. Thus, a tax break like the deduction for charitable contributions now applies to only one-tenth of tax returns. C. Eugene Steuerle thinks through some of the issues in “Options for Improving the Lives of Charitable Beneficiaries Through Reform of the Charitable Deduction,” which was given as testimony for the Senate Committee on Finance in hearings on “Examining Charitable Giving and Trends in the Nonprofit Sector” (March 17, 2022).

Steuerle cites estimates that “changes to the tax law in 2017 reduced the federal government’s total individual income tax subsidies for charitable giving by about 30 percent—from an average subsidy of about 21 cents to 15 cents per dollar contributed.” Moreover, “the current charitable tax incentive now benefits only about one-tenth of all households, mainly those with higher incomes. I doubt seriously that the public will long support a deduction so narrowly applied.”

What might be done? One option, of course, would be to see this as an opportunity to simplify the tax code by getting rid of this provision–which, remember, is benefitting only 10% of taxpayers who tend to have the very highest incomes. Steuerle goes the other way, and thinks instead about how to rebuild the tax incentives.

As he points out, Congress went with a poorly designed idea for encouraging charitable giving in 2020: a provision under which every tax unit (itemizing deductions or not) could deduct $300 from their income for charitable giving. The IRS had no realistic way to check whether these small-scale contributions had actually happened. Moreover, the cap of $300 didn’t offer any incentive for people to give more than that. Instead, non-itemizers who were already giving small amounts (and who knew about this provision) could just get some money off on their taxes. By Steuerle’s estimates, this provision cost the US government $1.5 billion, while increasing charitable giving by $100 million.

To design a policy with better tradeoffs between government revenue and incentives for giving, the key is to limit the tax break for smaller amounts of charitable giving–much of which would have happened whether the tax break existed or not–but still provide an incentive for expanding one’s giving. Thus, Steuerle discusses a charitable giving tax deduction that would be available to all, but would apply only to those who give, say, more than 1-2% of their income to charity. Also, remember that, say, 2% of income is a lot more in absolute dollars for a high-income taxpayer than for the average person, so this requirement helps to limit the share of the tax break flowing to those with very high income levels.

For example, his research finds that converting the current charitable tax deduction that applies only to the high-income 10% of taxpayers to a universal deduction that applies only to charitable giving above 1.9% of income would (under the law in place after 2017 but before the many revisions to tax law during COVID-19) leave overall government tax revenue unchanged–but would increase charitable giving by about $2.5 billion: “With so many taxpayers ineligible today for a charitable deduction, Congress would still be significantly expanding the number of people who get a deduction.”
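A minimal sketch of the floor idea, using Steuerle’s 1.9 percent figure (the example household is hypothetical):

```python
# Sketch of a universal charitable deduction with a floor: only giving above
# some share of income is deductible. The 1.9% floor is the figure Steuerle
# cites; the household below is hypothetical.

def deductible_giving(income, giving, floor_share=0.019):
    return max(0.0, giving - floor_share * income)

income, giving = 80_000, 3_000
print(deductible_giving(income, giving))  # 3,000 - 1,520 = 1,480 deductible
```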

Steuerle offers a few other ideas, based on behavioral economics, about changes that might increase charitable contributions. For example, the current rule is that when you are filling out your taxes in early 2022, you can only count charitable contributions made in 2021. But what if you could make a decision while filling out your 2022 taxes to give some money to charity and see the immediate effect on the taxes you owe? Of course, you could only take such a tax deduction in a single year.

Another idea involves lottery winners–or others who receive a major one-time income windfall. Perhaps those people should be able to donate immediately a share of their winnings without any limit?

One could also eliminate the tax break, but instead have the government provide a “match” for charitable contributions: registered charitable organizations would then report to the government what they had collected in donations, and the government would cut them a check for an additional percentage amount. The idea here is that if the government wants to subsidize charitable giving, maybe it doesn’t need to happen through the tax code.

Interview with Joel Mokyr: Past and Future Growth

Allison Schrager has a 50-minute interview with economic historian Joel Mokyr on her “Risk Talking” podcast (May 17, 2022). It’s full of lively thoughts, and includes a transcript. Here are a few that caught my eye.

Kranzberg’s Law

“Technological progress is neither good nor bad, nor is it neutral.” This is known as Kranzberg’s law. It was Melvin Kranzberg who said that, and people keep citing that, although nobody quite knows what he meant.

Economic progress and the small minority

Economic growth and economic progress is not driven by the masses. It is not driven by the population at large. It is driven by a small minority of people who economists refer to in their funny language as upper-tail people, meaning if you think of the world following some kind of bell-shaped or normal distribution, it’s the elite, it’s the people who are educated—not necessarily intellectuals. They could be engineers, they could be mechanics, they could be applied mathematicians. …

[I]f you look at the top 2 percent or 3 percent of the population anywhere, those are the people that are driving economic growth. And that’s still the case. I mean, in the United States, much of the technological progress they’ve been experiencing has been driven by a fairly small number of people. Some of them are Caltech geeks, and some of them are just really good people who are coming up with novel ideas, but basically that’s what it is about. …

And it’s not just about the steam engine or the mule or anything like that, it’s about ideas that try to manipulate nature in a way that benefits humans. And so, I’ll give you one example—it’s not machinery, but it is very critical. It’s vaccination against smallpox, which is very much on people’s minds these days, right? But this is an 18th-century idea. This English country doctor, Edward Jenner, basically came up with this idea. It’s not a machine in any way, but it is a pathbreaking, I would say a radical idea, of how to use what we know about nature to improve human life. And that’s what economic growth in the end is all about. Now, it’s not all human life, it’s material things. It’s how not to get sick, how to get more to eat, how to have better clothing, better housing, to heat your place, to be warmer in the winter and cooler in the summer. It’s about all these things that define our material comfort and our material wellbeing.

Underestimated Productivity Gains in an Information Age

[T]he real problem is that most of the important contributions to economic welfare are often seriously, seriously, seriously underestimated in our procedures. And I believe that they are getting more and more underestimated. If the degree of underestimation is more or less constant, then you don’t care because over time if it isn’t changing over time, you can still see what the trend looks like. But I think that’s not right. I think we are more and more underestimated because the knowledge economy and the digital economy are famously subject to underestimation. …

I mean, just look at the enormous gain in human welfare that we have achieved because we were able to come up with vaccines against corona. Now, it’s not a net addition to GDP because before that we didn’t have corona, but think about the subtraction we would’ve had if it wasn’t for that. And so, I remain a technological optimist, but I’m also very much aware that measures that measure technological progress in a system that was designed for an economy that produced wheat and steel aren’t appropriate for an economy that produces high-tech things that are produced by a knowledge economy. …

Library Technology

[T]hings have changed dramatically in how I do my work. … I used to go to the library four days a week and I would spend 20 to 25 hours a week at least in the shelves, pulling out books, pulling out articles, pulling out journals. I barely go to the library anymore. I mean, why should I? Everything in bloody hell is on my screen here.

Every article I want to look at—even things that aren’t even published—I Google them. Even books published 200 years ago appear on my screen as PDFs. Many of them searchable. I’d be crazy to go to the library. And so the way I do my research, the way I write my own stuff has changed dramatically. My best research assistant is sitting right here on my desk, and it’s my laptop. I mean, that is an amazing thing. Somebody would’ve told me this when I was a graduate student, I would’ve laughed him out of the room. Of course you need to go to the library. When I was a graduate student, I lived in the library. I was there 12 hours a day. I mean, these things have changed dramatically. The way you go to the dentist, the way you go to the doctors, you go to the hospital, you do your shopping—everything has changed.

This last point rings especially true for me, in my work as an editor. I used to have to spend at least half-a-day each week in the library, tracking down articles so that I would have a better understanding of the papers I was editing. Now, pretty much all of those articles–and older books, as well–are easily available through my access to an online college library.

But what Mokyr doesn’t emphasize here is that this change in work patterns is not necessarily an overall savings of time. Because it has become so much easier to look things up, I’m much more likely to do so. My guess is that more than 100% of the time savings from not going to the library and pulling volumes off the shelves now goes into time spent checking and doing background reading on a wider array of articles and data. My productivity is higher, but so is the workload that I impose on myself.

Talking Econometrics

So let’s say for the sake of argument that you’re already interested in econometrics–which is to say how quantitative research in economics is actually done. Then all I really need to say to you is that Isaiah Andrews has a set of four interviews with Joshua Angrist and Guido Imbens, ranging in length from 10-20 minutes each over at Marginal Revolution University.

If you don’t know the names, here’s a brief introduction. Angrist and Imbens, along with David Card, shared the most recent Nobel prize in economics. The prize specified that Angrist and Imbens were honored “for their methodological contributions to the analysis of causal relationships.”

For a sense of how Angrist thinks about these causality issues, here are a couple of useful starting points in the Journal of Economic Perspectives (where I work as Managing Editor):

For a couple of examples of work by Guido Imbens in JEP, see:

The guy who is interviewing Angrist and Imbens, Isaiah Andrews, won the John Bates Clark medal in 2021, which is awarded each year “to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge,” for “contributions to econometric theory and empirical practice.” An overview of his work in the Winter 2022 issue of JEP is here.

These really are conversations between Andrews, Angrist, and Imbens, not lectures. No powerpoint presentations or equations are used! No proofs are involved. It’s just a chance to hear these folks talk about their field and more broadly how they see economics.

Why Didn’t the US Adopt the Metric System Long Ago?

Why hasn’t the United States adopted the metric system for widespread use? I’ve generally thought there were two reasons. One is that with the enormous US internal market, there was less incentive to follow international measurement standards. The other was that the US has long had a brash and rebellious streak, a “you’re not the boss of me” vibe, which means that there will inevitably be pushback against some external measurement system invented by a French guy and run by an international committee based in a Paris suburb.

However, Stephen Mihm makes a persuasive case that my internal monologue about the metric system is wrong, or at least seriously incomplete, in “Inching toward Modernity: Industrial Standards and the Fate of the Metric System in the United States” (Business History Review, Spring 2022, pp. 47-76, needs a library subscription to access). Mihm focuses on the early battles over US adoption of the metric system, waged in the 19th and early 20th century. He makes the case that the metric system was in fact blocked by university-trained engineers and management, with the support of big manufacturing firms.

The metric system was part of US policy discussions in the early 1800s, after it was adopted in France. Mihm writes:

By the 1810s, most commentators considered the metric system a failed experiment. One writer in 1813 noted that the French government, despite wielding considerably more power over their own populace than the United States could, nonetheless failed to secure adoption of the metric units. “The new measures . . . are on the counter . . . but the transactions are regulated by the old.” In 1819, a House of Representatives committee studying the issue concurred in this assessment, pointing to France’s failure to secure widespread adoption of the metric system.

Throughout the 19th century, there was an ongoing discussion about appropriate systems of weights and measures, and the metric system was part of those discussions. But the battle-lines for this dispute began to be clarified in the 1860s. The growth of industrialization across the United States meant that there was also a movement among US industries and engineers to standardize measurements in areas like screw threads, nuts and bolts, sheet metal, wire, and pipe–so that it was possible for a manufacturing firm to use inputs from a variety of different suppliers around the country, confident that the parts would fit together. A similar movement arose in the railroad industry, standardizing axles, couplings, valves, and other elements so that rolling stock would fit together. This movement was led by a mixture of mechanical engineers and management experts, and it was based on the inch as the standard unit of measure. At least at this time, it’s fair to say that most people cared much less about measures of weight or volume.

But a number of scientists and social reformers preferred the logical organization of the metric system. Mihm reports that in 1863, “the newly created National Academy of Sciences recommended that the United States adopt the metric system. That same year, the United States participated in international congresses on postage and statistics that endorsed the metric system for both scientific and commercial purposes.” Federal legislation passed in 1866 to legalize the use of the metric system.

The struggle over how the US measuring system would be standardized then evolved from the late 1800s up through the early 20th century. Mihm lays out the details. For example, in 1873 the prominent educator and president of Columbia University, Frederick Barnard, founded the American Metrological Society to push for the metric system. In 1874, the American Railway Master Mechanics’ Association instead pushed for the already-developed system of standardization based on inches. One advocate of the inch-based approach said: “While French savants were laboring to build up this decimal system of interchangeable measures … the better class of American mechanics were solving the problem of making machinery with interchangeable parts.”

The dispute got a little weird at times. Mihm tells of the International Institute for Preserving and Protecting Weights and Measures, founded in 1879, which promoted “Great Pyramid metrology,” defined as “a belief that the Egyptians had inscribed the inch as a sacred unit of measurement in the design of their famed structures. … Over the 1870s and 1880s, pyramid metrology channeled much of the opposition to the metric system in the United States.” Lest this seem a little whacky to us, remember that this is a time when scientists and engineers were also exploring mesmerism and divining rods. To put it another way, being a logically rigorous scientist or engineer in one area does not rule out more imaginative approaches to other topics, then or now.

The central practical issue became what economists call “path-dependency.” Imagine two different paths for standardization. Perhaps in the abstract one is preferable. But if you have already committed to the other, and all your machine tools and existing equipment are based on that path, and all your workers and suppliers and customers are using it, then the costs of transition are formidable. Indeed, the longer you wait to make the switch, the more committed you are to the path you are on. For example, if you have laid down pipelines for water and oil measured in an inch-based system, as well as set up train tracks and rolling stock based on that system, then you are going to have physical equipment for an inch-based system around for decades.

The metric issue kept bubbling along. “In 1896, the House of Representatives considered a bill that mandated the immediate, exclusive use of the metric system in the federal government, with the rest of the country to follow suit a few years later.” It almost passed, with relatively little attention, but there was concern that the risk of disrupting industrial production in an election year wasn’t a political winner. When the US Bureau of Standards was created in 1901, the administrators preferred the metric system, but engineers and big companies pushed back hard.

By the early 20th century, this argument had been going on for decades. US industry had already felt firmly committed to an inch-based system of measurement back in the 1860s and 1870s; by the early 1900s, the idea of redoing all of their capital stock in the metric system seemed crazy to them. Indeed, US manufacturing was so dominant in the world at this time that US companies routinely exported inch-based equipment to companies in countries that were nominally on the metric system already. Some US manufacturers even argued that the unique inch-based measurement system helped to protect them from foreign competitors.

Bills for the metric system kept coming up in Congress in the early 1900s, and being shot down. Mihm writes:

In 1916, these efforts culminated in the creation of a new anti-metric organization known as the American Institute of Weights and Measures. … Much of its success can be attributed to a sophisticated public relations campaign. It placed advertisements and editorials in industry journals; successfully lobbied hundreds of trade associations, chambers of commerce, and technical societies to go on the record condemning mandatory use of the metric system; and obsessively monitored legislation on the local, state, and national levels. When the group identified a bill that endorsed mandatory metric conversion—or merely contained clauses that opened the door to greater reliance on the metric system—it mobilized hundreds of industrialists, engineers, and managers to defeat the legislation with letters, testimony, and editorials. By the 1920s, its membership rolls included many of the most important firms in the nation as well as presidents of the National Association of Manufacturers, the Association of American Steel Manufacturers, the American Railroad Association, and other national organizations. These organizations had a stake in standardization, actively joining government-sponsored efforts to bring further uniformity to the nation’s economy over the course of the 1920s. As inch-based standards governing everything from automobile tires to pads of paper became the norm, the prospects for going metric became ever more remote. Only in scattered pockets of the business community—the electrical field, for example, and pharmaceuticals—did the metric system become dominant.

We have now reached an odd point in the US experience where two measurement systems co-exist: the traditional inch-based system, along with pints and gallons, ounces and pounds, is how most Americans talk, most of the time, in ordinary life, but the metric system is how all science and most business operates (with the exception of the building trades). Many Americans step back and forth between the two systems of measurement every day in their personal and work lives, barely noticing.

Some readers will be interested to know that this issue of Business History Review has other papers about standardization. Here’s the list of papers; the introductory essay by Yates and Murphy is open access:

Robert E. Lucas on Monetary Neutrality: A 50th Anniversary

Back in 1972, Robert E. Lucas (Nobel ’95) published a paper called “Expectations and the neutrality of money,” in the Journal of Economic Theory (4:2, 103–124). The paper was already standard on macroeconomics reading lists when I started graduate school in 1982, and I suspect it’s still there. For the 50th anniversary, the Journal of Economic Methodology (22:1, 2022) has published a six-paper symposium on Lucas’s work and the 1972 paper in particular.

Reading the heavily mathematical 1972 paper isn’t easy, and summarizing it isn’t easy, either. But at some substantial risk of oversimplifying, it addresses a big question. Why does policy by a central bank like the Federal Reserve affect the real economy? There is a widely-held belief (backed by a solid if not indisputable array of evidence) that in the long run, money is a “veil” over real economic activity: that is, money facilitates economic transactions, but over time it is preferences and technologies, working through forces of supply and demand, that determine real economic outcomes. To put it another way, changes in money will alter the overall price level over long-term time horizons, but money is “neutral” with respect to real economic outcomes.

However, when the Federal Reserve or other central banks conduct monetary policy, it clearly does have an effect on the real economy. When a central bank lowers interest rates and makes credit available in a recession, the length of the recession seems diminished. Today, the concern is that if the central bank raises interest rates, it may cause an economic slowdown or recession. Apparently money is not just a veil over real activity, at least not in the short-run. But why not?

One possible answer here is that people are bad at anticipating the future. Thus, when the Fed stimulates the economy in the short-run, people don’t recognize that this stimulus might lead to inflation. When the Fed was spurring the economy during the pandemic recession in 2020 and into 2021, relatively few people were anticipating higher inflation. But for economists, the theory that monetary policy depends on people and markets being perpetually bad at understanding what’s going on feels like, at best, a partial answer.

Thus, in the 1972 paper, Lucas tried a different approach. He wanted to construct an example–that is, a model–of an economy where all the agents are fully rational. For economists, “rational” doesn’t mean you are always correct. It just means that you take advantage of all available information in making decisions–and as a consequence, you won’t make the same mistake over and over again. Thus, the central bank can’t “fool” these rational agents by juicing up the economy with low interest rates.

However, that phrase “all available information” opens up a possibility. What if economic agents do not and cannot have full information about what’s happening in the economy? In particular, say that when a number of prices rise, it’s hard for an economic agent to know if this is a “real” change because of forces of supply and demand or if it’s a “monetary” change of generally higher price levels. In summer and fall 2021, for example, it was hard to tell whether the higher prices were a “real” result of supply chains fracturing, or a “monetary” result of an overstimulated economy and a generalized inflation.

At the end of the 1972 paper, Lucas writes:

These rational agents are thus placed in a setting in which the information conveyed to traders by market prices is inadequate to permit them to distinguish real from monetary disturbances. In this setting, monetary fluctuations lead to real output movements in the same direction. In order for this resolution to carry any conviction, it has been necessary to adopt a framework simple enough to permit a precise specification of the information available to each trader at each point in time, and to facilitate verification of the rationality of each trader’s behavior. To obtain this simplicity, most of the interesting features of the observed business cycle have been abstracted from, with one notable exception: the Phillips curve emerges not as an unexplained empirical fact, but as a central feature of the solution to a general equilibrium system.

This Lucas paper is heavy on mathematics and will be a tough read for the uninitiated. It has come to be called an “island economy” model–although the word “island” doesn’t actually appear in the 1972 paper–because the economic actors have a hard time distinguishing between what is happening on their own “island” and what is happening in the broader economy. Information is spread out. It’s hard to perceive accurately what’s happening.
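For readers who want a feel for the signal-extraction logic, here is a toy simulation of my own, not Lucas’s 1972 model: each trader sees only a local price that mixes an economy-wide monetary shock with an island-specific real shock, and responds to the best statistical guess of the real component. Because part of every monetary surprise gets mistaken for a real shock, money moves output in the same direction.

```python
# Toy illustration of the signal-extraction problem (not Lucas's 1972 model).
# Traders observe one local price mixing a real (island-specific) shock and a
# monetary (economy-wide) shock; supply responds to the estimated real part.
import random

random.seed(0)
sigma_real, sigma_money = 1.0, 1.0  # standard deviations of the two shocks
# Optimal weight on the observed price: share of its variance that is "real."
theta = sigma_real**2 / (sigma_real**2 + sigma_money**2)

for _ in range(3):
    real_shock = random.gauss(0, sigma_real)    # island-specific demand shift
    money_shock = random.gauss(0, sigma_money)  # economy-wide monetary surprise
    local_price = real_shock + money_shock      # all the trader observes
    estimated_real = theta * local_price        # best guess of the real component
    output_response = estimated_real            # supply moves with perceived real demand
    print(f"money shock {money_shock:+.2f} -> output response {output_response:+.2f}")
```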

The emphasis on what information people have, and how they form their expectations about inflation–when filtered through decades of research since 1972–has had a substantial effect on how real-world economic policy is conducted. It meant that monetary policy had a greater emphasis on expectations of inflation and how those expectations are formed. At present, for example, a key question for the Federal Reserve is the extent to which expectations of a higher inflation rate are becoming embedded throughout the economy in price-setting, wage-setting, and interest rates–because entrenched inflationary expectations would pose a different policy problem. The emphasis in monetary policy on rules that will be followed over time (like a target inflation rate of 2%) or on “forward guidance” about how monetary policy will evolve in the future are both focused on addressing people’s expectations about future inflation. Keeping central banks reasonably independent from the political process can also be viewed as a way of reassuring people that even if politicians with a short-run focus control federal spending and borrowing, the central bank will be allowed to follow its own course–which again influences the expectations that people have about future inflation.

I should add that the Lucas (1972) paper became just one entry in a vast literature exploring the reasons why it might be hard for people to distinguish between changes in prices that arise from supply and demand and changes that are part of an overall inflation. For example, some prices might be preset for a certain time by past contracts, and when you know that some prices cannot adjust for a certain time, but others can, figuring out the real and monetary distinctions again becomes tricky.

In the Journal of Economic Methodology symposium, my guess is that a number of economist-readers may be most interested in the personal essays by Thomas J. Sargent (Nobel ’11) on “Learning from Lucas” and Harald Uhlig on “The lasting influence of Robert E. Lucas on Chicago economics,” both of which are full of descriptions of how Lucas influenced the intellectual journey of the authors.

I found particular interest in the first essay in the symposium, “Lucas’s way to his monetary theory of large-scale fluctuations,” by Peter Galbács. The focus here is not on the legacy of Lucas’s work but on the earlier research leading up to it. Some of this ground is also covered in Lucas’s Nobel lecture on “Monetary Neutrality.” Galbács writes in the conclusion:

The way Lucas arrived at his monetary island-model framework was thus a step-by-step process starting in the earliest stage of his career. The first step was the choice-theoretic analysis of firm behaviour. At this stage, Lucas’s focus was on the firm’s investment decision through which he distinguished short-run and long-run reactions of the firm and the industry. The climax of this period is his Adjustment costs and the theory of supply (Lucas, 1966/1967a) that contained the basic supply-and-demand framework that Lucas and Rapping (1968/1969a; 1968/1969b; 1970/1972a) shortly extended to labour market modelling – so Lucas’s work with Rapping is rooted in his earlier record in firm microeconomics. As they assumed, the household decides on short-run labour supply on the basis of a given set of price and wage expectations, while it adjusts to long-run changes with a firm-like investment decision that implies the revision of expectations.

After this second step taken in labour market modelling, the third stage realizing his Expectations and the neutrality of money (Lucas, 1970/1972a) directly followed – although the complexity of influences renders the connection with the previous phase subtle (Lucas, 2001, p. 21). His monetary island model was a ‘spin-off’ from his work with Rapping (Lucas’s letter to Edmund S. Phelps, November 7, 1969. Box 1A, Folder ‘1969’), but the paper may be more appropriately regarded as a spin-off from the related impressions Lucas received. First of all, he needed the very island-model framework. It is Phelps (1970, pp. 6–9) who called his attention to the option of reformulating the decision problem by scattering the agents over isolated markets, while it is Cass who led Lucas to a correct mathematical exposition. However, it is Prescott who in their collaboration prepared Lucas for this exposition; and it is also Prescott who, teamed up with Lucas, provided the paradigmatic example of applying the Muthian rational expectations hypothesis in a stochastic setting with which Lucas (1966/1981b) had formerly dealt only in the less interesting non-stochastic case. As the present paper argued, Lucas’s monetary island model is thus the unification of the impressions Lucas gained under his graduate studies at the University of Chicago, and later from Rapping, Phelps, Cass and Prescott under his years at Carnegie Mellon University.

Here’s the full Table of Contents for the symposium, which requires a subscription to access: