Including Illegal Activities in GDP: Drugs, Prostitution, Gambling

The current international standards for how a country should compute its GDP suggest that illegal activities should be included. Just how to do this, given the obvious problems in collecting statistics on illegal activity, isn’t clear. The US Bureau of Economic Analysis does not include estimates of illegal activities in GDP. However, there is ongoing research on the subject, described by Rachel Soloveichik in “Including Illegal Market Activity in the U.S. National Economic Accounts” (Survey of Current Business, February 2021).

It’s perhaps worth noting up front that crime itself is not included in GDP. If someone steals from me, there is an involuntary and illegal redistribution, but GDP measures what is produced. Both public and private expenditures related to discouraging or punishing crime are already included in GDP. This is of course one of the many reasons why GDP should not be treated as a measure of social welfare: that is, social welfare would clearly be improved if crime were lower and money spent on discouraging and punishing crime could instead flow to something that provides positive pleasures and benefits.
Thus, adding illegal activities to GDP requires adding the actual production of goods and services which are illegal. Soloveichik focuses on “three categories of illegal activity: drugs, prostitution, and gambling.”

These three categories are not equal in their recent economic impact. Consumer spending on illegal drugs was $153 billion in 2017, compared to $4 billion on illegal prostitution and $11 billion on illegal gambling in the same year. Furthermore, tracking illegal drugs raises the average real GDP growth rate between 2010 and 2017 by 0.05 percentage point per year and raises the average private-sector productivity growth rate between 2010 and 2016 by 0.11 percentage point per year. In contrast, neither tracking illegal prostitution nor tracking illegal gambling has much influence on recent growth rates.
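The growth-rate arithmetic here is straightforward to sketch: a newly tracked spending category that grows faster than the rest of the economy nudges measured GDP growth upward. Below is a minimal illustration with hypothetical dollar figures (only the $153 billion 2017 drug-spending level comes from the article; the baseline GDP path and the 2010 drug-spending level are made up):

```python
# Back-of-the-envelope: how adding a fast-growing spending category
# changes a measured average growth rate. Numbers are hypothetical.

def growth_rate(start, end, years):
    """Average annual growth rate between two levels."""
    return (end / start) ** (1 / years) - 1

# Hypothetical baseline GDP (billions), growing 2% per year for 7 years
gdp_2010 = 15_000.0
gdp_2017 = gdp_2010 * 1.02 ** 7

# Illegal-drug spending: $153 billion in 2017 (from the article);
# the 2010 starting level of $100 billion is invented for illustration
drugs_2010, drugs_2017 = 100.0, 153.0

baseline = growth_rate(gdp_2010, gdp_2017, 7)
with_drugs = growth_rate(gdp_2010 + drugs_2010, gdp_2017 + drugs_2017, 7)

print(f"growth without drugs: {baseline:.4%}")
print(f"growth with drugs:    {with_drugs:.4%}")
print(f"difference: {(with_drugs - baseline) * 100:.3f} percentage points")
```

With these made-up numbers the fast-growing category adds a few hundredths of a percentage point per year, the same flavor of effect as the 0.05 point estimate quoted above.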

To me, the most interesting part of the essay is about some historical patterns of spending on illegal activities and drug prices. For example, here’s a figure showing spending on illegal drugs over time. The line to the far right shows spending on alcohol during Prohibition. The very high level of spending in the 1980s is especially striking, remembering that you need to add the different categories of illegal drugs to get the total.

Soloveichik writes: 

Chart 1 shows that the expenditure shares for all three broad categories of illegal drugs grew rapidly after 1965 and peaked around 1980. In total, this analysis calculates that illegal drugs accounted for more than 5 percent of total personal consumption expenditures in 1980. This high expenditure share is consistent with contemporaneous news articles and may explain why BEA chose to study the underground economy in the early 1980s (Carson 1984a, 1984b). Chart 1 also shows that illegal alcohol during Prohibition accounted for almost as large a share of consumer spending as illegal drugs in 1980 and changed faster. Measured nominal growth in 1934, the first year after Prohibition ended, is badly overestimated when illegal alcohol is excluded from consumer spending.

Here’s a similar graph for total spending on illegal prostitution and gambling services. Spending on gambling was especially high up until about the 1960s, when first legal state lotteries and then casinos arrived.

It may seem counterintuitive that the US can be suffering through an opioid epidemic in the last couple of decades, but still have what looks like relatively low spending on illegal drugs. But remember that the start of the opioid epidemic up to about 2010 largely involved legally sold prescription drugs (as discussed here and here)–which would have been included in GDP. Total spending is a combination of quantity purchased and price. In addition, price must be adjusted for quality. Thus, what the data shows is that we are living in a time of cheap and powerful heroin and fentanyl. As Soloveichik writes: 

Opioid potency has rapidly increased due to the recent practice of mixing fentanyl, an extremely powerful opioid, with heroin. Marijuana potency has gradually increased due to new plant varieties that contain higher concentrations of the main psychoactive chemical in marijuana, tetrahydrocannabinol (THC).
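The quality-adjustment logic can be sketched in a few lines: divide the street price by an index of potency, so a product that gets cheaper and stronger at the same time registers an even larger quality-adjusted price decline. The numbers below are invented for illustration and are not Soloveichik’s estimates:

```python
# Quality-adjusted price: divide the street price by a potency index,
# so a cheaper-but-stronger product shows a bigger effective price drop.
# All numbers are made up for illustration.

def quality_adjusted_price(price_per_gram, potency_index):
    """Price per unit of effective potency (base-year potency = 1.0)."""
    return price_per_gram / potency_index

# Hypothetical: nominal street price falls 20%, while potency doubles
early = quality_adjusted_price(price_per_gram=100.0, potency_index=1.0)
late = quality_adjusted_price(price_per_gram=80.0, potency_index=2.0)

print(f"quality-adjusted price fell {1 - late / early:.0%}")  # → fell 60%
```

A 20% nominal price decline thus becomes a 60% decline once potency is accounted for, which is why the quality-adjusted price series can fall much faster than raw street prices.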

With those patterns taken into account, here’s a figure showing estimated drug prices over time, relative to the prices of legal consumption goods. Drug prices for opioids and stimulants fell sharply in the 1980s and have more or less stayed at that lower level since then, which makes the rise in nominal expenditures on drugs shown above even more striking.

Soloveichik writes: “Readers should also note that illegal drugs are a large enough spending category to influence aggregate inflation. Between 1980 and 1990, average personal consumption expenditures price growth falls by 0.7 percentage point per year when illegal activity is tracked in the NIPAs.”
If you are interested in the data sources for these illegal goods and services and the assumptions needed to estimate prices and output levels, this article is a good place to start.

The Dependence of US Higher Education on International Students

US higher education in recent decades has become ever-more dependent on rising inflows of international students–a pattern that was already likely to slow down and is now being dramatically interrupted by the pandemic. John Bound, Breno Braga, Gaurav Khanna, and Sarah Turner describe these shifts in “The Globalization of Postsecondary Education: The Role of International Students in the US Higher Education System” (Journal of Economic Perspectives, Winter 2021, 35:1, 163-84). They write:

For the United States, which has a large number of colleges and universities and a disproportionate share of the most highly ranked colleges and universities in the world, total enrollment of foreign students more than tripled between 1980 and 2017, from 305,000 to over one million students in 2017 (National Center for Education Statistics 2018). This rising population of students from abroad has made higher education a major export sector of the US economy, generating $44 billion in export revenue in 2019, with educational exports being about as big as the total exports of soybeans, corn, and textile supplies combined (Bureau of Economic Analysis 2020).

Here’s a figure showing the rise in international students from 2000-2017. Notice in particular the sharp rise in the number of international students in master’s degree programs.

Bound and co-authors write: 

[F]oreign students studying at the undergraduate level are most numerous at research-intensive public universities (about 32 percent of all bachelor’s degrees), though they also enroll in substantial numbers at non-doctorate and less selective private and public institutions. … The concentration of international students in master’s programs in the fields of science, technology, engineering, and mathematics is even more remarkable: for example, in 2017 foreign students received about 62 percent of all master’s degrees in computer science and 55 percent in engineering. … Many large research institutions now draw as much as 20 percent of their tuition revenue from foreign students (Larmer 2019).

This table shows destinations of international students from China, India, and South Korea, three of the major nations for sending students to the US. 

However, Bound and co-authors note that the US lead as a higher education destination has been diminishing: “Although the United States remains the largest destination country for students from these countries, the US higher education system is no longer as dominant as it was 20 years ago. As an illustration, student flows from China to the United States were more than 10 times larger than the flows to Australia and Canada in 2000; by 2017, those ratios fell to 2.5 to 1 and 3.3 to 1, respectively.”

This pattern of rising international enrollments in US higher education was not likely to continue on its pre-pandemic trajectory. Other countries have been building up their higher education options. In addition, if you were a young entrepreneur or professional from China or India, the possibilities for building your career in your home country look a lot better now than they did, say, back in 1990. But the pandemic has taken what would have been a slow-motion squeeze on international students coming to US higher education and turned it into an immediate bite. Bound and co-authors write:

Visas for the academic year are usually granted between March (when admissions decisions are made) and September (when semesters begin). Between 2017 and 2019, about 290,000 visas were granted each year over these seven months (United States Department of State 2020). Between March and September 2020, only 37,680 visas were granted—an extraordinary drop of 87 percent. Visas for students from China dropped from about 90,000 down to only 943 visas between March and September 2020. A fall 2020 survey of 700 higher education institutions found that one in five international students were studying online from abroad in response to the COVID-19 pandemic. Overall, new international enrollment (including those online) decreased by 43 percent, with at least 40,000 students deferring enrollment (Baer and Martel 2020).

Overall, it seems to me an excellent thing for the US higher education system and the US economy to attract talent from all over the world. But even if you are uncertain about those benefits, it is an arithmetic fact that the sharp declines in international students are going to be a severe blow to the finances of US higher education. 

The Minimum Wage Controversy

Why has the economic research of the last few decades had a hard time getting a firm handle on the effects of minimum wages? The most recent issue of the Journal of Economic Perspectives (where I have worked as managing editor for many years) includes a set of four papers that bear on the subject. The short answer is that the effects of a higher minimum wage are likely to vary by time and place, and are likely to include many effects other than reduced employment. In this post, I’ll offer a longer elaboration, drawing in particular on the papers by Manning, Clemens, and Fishback and Seltzer.

Manning starts his paper by pointing out that mainstream views on the minimum wage have shifted substantially in the last 30 years or so. He writes: 

Thirty years ago, … there was a strong academic consensus that the minimum wage caused job losses and was not well-targeted on those it set out to help, and that as a result, it was dominated by other policies to help the working poor like the Earned Income Tax Credit. … [P]olicymakers seemed to be paying attention to the economic consensus of the time: for example, in 1988 the US federal minimum wage had not been raised for almost a decade and only 10 states had higher minima. Minimum wages seemed to be withering away in other countries too. … In 1994, the OECD published its view on desirable labor market policies in a prominent Jobs Study report, recommending that countries “reassess the role of statutory minimum wages as an instrument to achieve redistributive goals and switch to more direct instruments” (OECD 1994).

The landscape looks very different today.  …In the United States, the current logjam in Congress means no change in the federal minimum wage is immediately likely. However, 29 states plus Washington, DC have a higher minimum wage. A number of cities are also going their own way, passing legislation to raise the minimum wage to levels (in relation to average earnings) not seen for more than a generation … Outside the United States, countries are introducing minimum wages (for example, Hong Kong in 2011 and Germany in 2015) or raising them (for example, the introduction of the United Kingdom’s National Living Wage in 2016, a higher minimum wage for those over the age of 25). Professional advice to policymakers has changed too. A joint report from the IMF, World Bank, OECD, and ILO in 2012 wrote “a statutory minimum wage set at an appropriate level may raise labour force participation at the margin, without adversely affecting demand, thus having a net positive impact especially for workers weakly attached to the labour market” (ILO 2012). The IMF (2014) recommended to the United States that “given its current low level (compared both to US history and international standards), the minimum wage should be increased.” The updated OECD (2018) Job Strategy report recommended that “minimum wages can help ensure that work is rewarding for everyone” (p. 9) and that “when minimum wages are moderate and well designed, adverse employment effects can be avoided” (p 72).

Why the change? From a US point of view, one reason is surely that the real, inflation-adjusted level of the minimum wage peaked back in 1968. Thus, it makes some intuitive sense that studies looking at labor market data from the 1960s and 1970s would tend to find big effects of a higher minimum wage, but as the real value of the federal minimum wage declined over time, later studies would tend to find smaller effects. Here’s a figure from the Fishback and Seltzer paper showing the real (solid yellow) and nominal (blue dashed) value of the minimum wage over time:
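For readers who want to replicate a real-versus-nominal comparison like this, the real series is just the nominal minimum wage deflated by a price index. A minimal sketch, using placeholder CPI values rather than official BLS data:

```python
# Deflating a nominal wage into constant (base-year) dollars.
# The CPI values here are hypothetical placeholders, not BLS figures.

def real_value(nominal, cpi_then, cpi_base):
    """Restate nominal dollars in base-year dollars."""
    return nominal * cpi_base / cpi_then

# Hypothetical: a $1.60 minimum wage when the CPI stood at 35,
# restated in the dollars of a base year where the CPI is 250
restated = real_value(nominal=1.60, cpi_then=35.0, cpi_base=250.0)
print(f"${restated:.2f} in base-year dollars")  # → $11.43 in base-year dollars
```

The entire real line in such a figure is produced by applying this one-line deflation year by year, which is why the choice of price index matters for exactly where the 1968 peak lands.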

Another long-recognized problem in trying to get evidence about the effects of the minimum wage based on changes over time is that lots of other factors affect the labor market, too. For example, the dashed blue line shows that the most recent jump in the federal minimum wage was phased in from 2007 to 2009. Trying to disentangle the effects of that rise in the minimum wage from the effects of the Great Recession is likely a hopeless task.

One more problem with studying the effects of minimum wage changes over time is that who actually receives the minimum wage has been shifting. Manning offers this table. It shows, for example, that teenagers accounted for 32.5% of the total hours of minimum wage workers in 1979, but now account for only 9.6%.

Rather than trying to dig out lessons from the gradually declining real minimum wage over time, lots of research in the last few decades has instead looked at US states or cities where the minimum wage increased. Then the study either does a before-and-after comparison of trends, or looks for a comparison location where the minimum wage didn’t rise.

But this kind of analysis is subject to the basic problem that the states or cities that choose to raise their minimum wages are not randomly selected. They are usually places where average wages and wages for low-skill workers are already higher. As an extreme example, the minimum wage in various cities near the heart of Silicon Valley (Palo Alto, San Francisco, Berkeley, Santa Clara, Mountain View, Sunnyvale, Los Altos) is already above $15/hour. But in general, wages are also much higher in those areas. Asking whether these higher minimum wages reduced low-skill or low-wage employment in these cities is an interesting research topic, but no sensible person would extrapolate the answers from Silicon Valley to how a $15/hour minimum wage would affect employment in, say, Mississippi, where half of all workers in the state earn less than $15/hour.

Many additional complexities arise. Clemens goes through many of the possibilities in his paper. Here are some of them. 

1) Economists commonly divide workers into the “tradeable” and “nontradeable” sectors. An example of a nontradeable service is working at a coffee shop, where you compete against other coffee shops in the same immediate area, but not against coffee shops in other states or countries. A tradeable job might be in manufacturing, where your output is shipped to other locations, so you do compete directly against producers from other locations.

If you work in a tradeable-sector job and the state or local minimum wage rises, it may cause real problems for the firm, which is competing against outsiders. But many low-skilled jobs are in the nontradeable sector: food, hotels, and others. In those situations, a rise in the minimum wage means higher costs for all the local competing firms–in which case it will be easier to pass those costs along to consumers in the form of higher prices. Of course, if an employer can pass along the higher minimum wage to consumers, any employment effects may be muted.

2) An employer faced with a higher minimum wage might try to offset this change by paying lower benefits (vacation, overtime, health insurance, life insurance, and so on). The employer might also try to get more output from workers by, for example, offering less job flexibility or pushing them harder in the workplace.

3) A higher minimum wage means an increased incentive for employers and workers to break the law and evade the minimum wage. Clemens cites one “analysis of recent minimum wage changes [which] estimates that noncompliance has averaged roughly 14 to 21 cents per $1 of realized wage gain.”

4) An employer faced with a higher minimum wage might, for a time, not have many immediate options for adjustment. But over a timeframe of a year or two, the employer might start figuring out ways to substitute higher-paid labor for the now-pricier minimum wage labor, or look for ways of automating or outsourcing minimum wage jobs. Any study that focuses on the effects of a minimum wage during a relatively small time window will miss these effects. But any study that tries to look at long-run effects of minimum wage changes will find that many other factors are also changing in the long run, so sorting out just the effect of the minimum wage will be tough.

5) A higher minimum wage doesn’t just affect employers, but also affects workers. A higher wage means that workers are likely to work harder and less likely to quit. Thus, a firm that is required to pay a higher minimum wage might recoup a portion of that money from lower costs of worker turnover and training.

There is ongoing research on all of these points. There is some evidence backing them and some evidence against, but the evidence often varies by place, time, occupation, and which comparison group is used. The markets for the supply and demand of labor are complicated places.

I don’t mean to be a whiner about it, but figuring out the effects of a higher minimum wage from the existing evidence is genuinely difficult. But of course, no one weeps for the analytical problems of economists. Most people just want a bottom line on whether a $15/hour minimum wage is good or bad, so that they know whether to treat you as friend or foe–depending on whether you agree with their own predetermined beliefs. I’m not a fan of playing that game, but here are a few thoughts on the overall controversy.

  • It’s worth remembering the old adage that “absence of evidence is not evidence of absence.” That is, just because it’s hard to provide ironclad statistical proof that a minimum wage reduces employment doesn’t show that the effect is zero–it just means that getting strong evidence is hard.
  • Ever since the federal minimum wage was enacted in the 1930s, some states have set a higher minimum wage; the more recent shift is toward cities setting a higher minimum wage than their state. Thus, the effects of raising the federal minimum wage to $15/hour will (mostly) not be felt in places where the minimum wage is already at or near that level: instead, it will be felt in all the other locations.
  • Many minimum wage workers are also part-time workers. Thus, it’s easy to imagine an example where, say, the minimum wage rises 20% but a given worker’s hours are cut by 10%. This is a situation where the minimum wage led to fewer hours worked, but the worker still ends up with higher annual income.
  • To the extent that a higher minimum wage does affect the demand for low-skilled labor, such effects will be less perceptible in a strong or growing economy when employment is generally expanding for other reasons, and more perceptible in a weak or recessionary economy, when fewer firms are looking to hire. 
  • Everyone agrees that a smaller rise in the minimum wage will have smaller effects, and a larger rise in the minimum wage will have larger effects. I know a number of liberal-leaning, Democratic-voting economists who are just fine with the tradeoffs of raising the federal minimum wage to some extent, but who also think that a rise to $15/hour for the national minimum wage (as opposed to the minimum wage in high-wage cities and states) is too much. 
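The part-time arithmetic in the example above is easy to verify: a 20% wage increase combined with a 10% hours cut leaves annual income 8% higher, since 1.20 × 0.90 = 1.08. A quick check, with a hypothetical starting wage and annual hours:

```python
# Checking the arithmetic from the part-time example: a 20% wage increase
# combined with a 10% cut in hours still raises annual income.
# The starting wage and hours are hypothetical.

wage, hours = 10.0, 1000.0
income_before = wage * hours
income_after = (wage * 1.20) * (hours * 0.90)

change = income_after / income_before - 1
print(f"annual income changes by {change:+.0%}")  # → +8%
```

Note that the percentages multiply rather than add, so a 20% wage rise with a 20% hours cut would actually leave income 4% lower (1.20 × 0.80 = 0.96).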

True gluttons for punishment who have read this far may want some recent minimum wage studies to look at. In this case at least, your wish is my command: 

“Wages, Minimum Wages, and Price Pass-Through: The Case of McDonald’s Restaurants,” by Orley Ashenfelter and Štěpán Jurajda (Princeton University Industrial Relations Section, Working Paper #646, January 2021). “We find no association between the adoption of labor-saving touch screen ordering technology and minimum wage hikes. Our data imply that McDonald’s restaurants pass through the higher costs of minimum wage increases in the form of higher prices of the Big Mac sandwich.”

“Myth or Measurement: What Does the New Minimum Wage Research Say about Minimum Wages and Job Loss in the United States?” by David Neumark and Peter Shirley (National Bureau of Economic Research, Working Paper 28388, January 2021). “We explore the question of what conclusions can be drawn from the literature, focusing on the evidence using subnational minimum wage variation within the United States that has dominated the research landscape since the early 1990s. To accomplish this, we assembled the entire set of published studies in this literature and identified the core estimates that support the conclusions from each study, in most cases relying on responses from the researchers who wrote these papers. Our key conclusions are: (i) there is a clear preponderance of negative estimates in the literature; (ii) this evidence is stronger for teens and young adults as well as the less-educated; (iii) the evidence from studies of directly-affected workers points even more strongly to negative employment effects; and (iv) the evidence from studies of low-wage industries is less one-sided.”

“Seeing Beyond the Trees: Using Machine Learning to Estimate the Impact of Minimum Wages on Labor Market Outcomes,” by Doruk Cengiz, Arindrajit Dube, Attila S. Lindner and David Zentler-Munro (National Bureau of Economic Research Working Paper 28399, January 2021). “We apply modern machine learning tools to construct demographically-based treatment groups capturing around 75% of all minimum wage workers—a major improvement over the literature which has focused on fairly narrow subgroups where the policy has a large bite (e.g., teens). By exploiting 172 prominent minimum wages between 1979 and 2019 we find that there is a very clear increase in average wages of workers in these groups following a minimum wage increase, while there is little evidence of employment loss. Furthermore, we find no indication that minimum wage has a negative effect on the unemployment rate, on the labor force participation, or on the labor market transitions.”

“The Budgetary Effects of the Raise the Wage Act of 2021,” Congressional Budget Office (February 2021). “CBO projects that, on net, the Raise the Wage Act of 2021 would reduce employment by increasing amounts over the 2021–2025 period. In 2025, when the minimum wage reached $15 per hour, employment would be reduced by 1.4 million workers (or 0.9 percent), according to CBO’s average estimate. In 2021, most workers who would not have a job because of the higher minimum wage would still be looking for work and hence be categorized as unemployed; by 2025, however, half of the 1.4 million people who would be jobless because of the bill would have dropped out of the labor force, CBO estimates. Young, less educated people would account for a disproportionate share of those reductions in employment.”

Rural Poverty

Rural poverty is often overlooked. In the Spring 2021 issue of the Stanford Social Innovation Review, Robert Atkins, Sarah Allred, and Daniel Hart discuss “Philanthropy’s Rural Blind Spot,” about how philanthropies have typically put much more time and attention into urban poverty than rural poverty. They write:

Most large foundations are located in metropolitan areas and have built relationships with institutions and organizations in those communities. … [M]any grant makers assume that urban centers have higher rates of poverty than rural areas. Moreover, many funders believe that they maximize impact and do more good when their grants go to addressing distress in densely populated areas. The rates of poverty, however, are higher in rural areas than in urban areas. In addition, it would be difficult to demonstrate that a grant going to a metropolitan community to improve high school graduation rates, increase the food security of agricultural workers, or reduce childhood lead poisoning assists a greater number of individuals than if the same grant goes to a nonmetropolitan community. In other words, giving to more densely populated areas does not clearly result in a greater equity return on investment for the grant maker.

The authors point to a resource with which I had not been familiar, the Multidimensional Index of Deep Disadvantage produced by H. Luke Shaefer, Silvia Robles and Jasmine Simington of the University of Michigan, using methods also developed by Kathryn Edin and Tim Nelson at Princeton University. They collect a combination of economic, health, and social mobility data on counties and the 500 largest cities in the United States. You can find an interactive map at the website, or click here for a full list of the 3617 areas. They then rank the areas. In an overview of the results, Shaefer, Edin, and Nelson write:

When we turn the lens of disadvantage from the individual to the community, we find that five geographic clusters of deep disadvantage come into view: The Mississippi Delta, The Cotton Belt, Appalachia, the Texas/Mexico border, and a small cluster of rust belt cities (most notably Flint, Detroit, Gary, and Cleveland). Many Native Nations also score high on our index though are not clustered for historic reasons. …

The communities ranking highest on our index are overwhelmingly rural. Among the 100 most deeply disadvantaged places in the United States according to our index, only 9 are among the 500 largest cities in the United States, which includes those with populations as small as 42,000 residents. In contrast, 19 are rural counties in Mississippi. Many of the rural communities among the top 100 places have only rarely, if ever, been researched. Conversely, Chicago, which has been studied by myriad poverty scholars, doesn’t even appear among the top 300 in our index. Our poverty policies suffer when social science research misses so many of the places with the greatest need. …

How deep is the disadvantage in these places? When we compare the 100 most disadvantaged places in the United States to the 100 most advantaged, we find that the poverty rate and deep poverty are both higher by greater than a factor of six. Life expectancy is shorter by a full 10 years, and the incidence of low infant birthweight is double. In fact, average life expectancy in America’s most disadvantaged places, as identified by our index, is roughly comparable to what is seen in places such as Bangladesh, North Korea, and Mongolia, and infant birth weight outcomes are similar to those in Congo, Uganda, and Botswana.

It should be noted that a list of this sort is not an apples-to-apples comparison, in part because the population sizes of the areas are so very different. Many counties have only a few thousand people, while many cities have hundreds of thousands or more. Thus, the data for a city will average out both better-off and worse-off areas, while a low-population, high-poverty rural county may not have any better-off places.

But the near-invisibility of rural poverty in our national discourse is still striking. For example, when talking about improving education and schooling, what should happen with isolated rural schools rarely makes the list. When talking about how to assure that people have health insurance, the issues related to people who are a long way from a medical facility are often not on the list of topics. When talking about raising the national minimum wage to $15/hour, much of the discussion seems to assume an area relatively dense in population, employers, and jobs, where various job-related adjustments can take place, not a geographically isolated and high-poverty area with few or no major employers. These issues aren’t new. Many of the current high-poverty areas (rural and urban) have been poor for decades.

Robert Shiller on Narrative Economics

Robert J. Shiller (Nobel ’13) delivered the Godley-Tobin Lecture, given annually at the Eastern Economic Association meetings, on the subject of “Animal spirits and viral popular narratives” (Review of Keynesian Economics, January 2021, 9:1, pp. 1-10).

Shiller has been thinking about the intertwining of economics and narrative at least since his presidential address to the American Economic Association back in 2017. He suggests, for example, that the key feature distinguishing humans may be our propensity to organize our thinking into stories, rather than just intelligence per se. Indeed, there are many examples in all walks of life (politics, investing, expectations of family life, careers, reactions to a pandemic) where people will often cleave to their preferred narrative rather than continually question and challenge it with their intelligence. He begins the current essay in this way: 

John Maynard Keynes’s (1936) concept of ‘animal spirits’ or ‘spontaneous optimism’ as a major driving force in business fluctuations was motivated in part by his and his contemporaries’ observations of human reactions to ambiguous situations where probabilities couldn’t be quantified. We can add that in such ambiguous situations there is evidence that people let contagious popular narratives and the emotions they generate influence their economic decisions. These popular narratives are typically remote from factual bases, just contagious. Macroeconomic dynamic models must have a theory that is related to models of the transmission of disease in epidemiology. We need to take the contagion of narratives seriously in economic modeling if we are to improve our understanding of animal spirits and their impact on the economy.

Thus, this lecture emphasizes the parallels between how narratives spread and epidemiology models of how diseases spread:

Mathematical epidemiology has been studying disease phenomena for over a century, and its frameworks can provide an inspiration for improvement in our understanding of economic dynamics. People’s states of mind change through time, because ideas can be contagious, so that they spread from person to person just as diseases do. …

We humans live our lives in a sea of epidemics all at different stages, including epidemics of diseases and epidemics of narratives, some of them growing at the moment, some peaking at the moment, others declining. New mutations of both the diseases and the narratives are constantly appearing and altering behavior. It is no wonder that changes in business conditions are so often surprising, for there is no one who is carefully monitoring the epidemic curves of all these drivers of the economy.

Since the advent of the internet age, the contagion rate of many narratives has increased, with the dominance of social media and with online news and chats. But the basic nature of epidemics has not changed. Even pure person-to-person word-of-mouth spread of epidemics was fast enough to spread important ideas, just as person-to-person contagion was fast enough to spread diseases into wide swaths of population millennia ago.
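Shiller's epidemic analogy can be made concrete with the workhorse SIR (susceptible-infectious-recovered) model, reinterpreted so that being "infected" means currently spreading a narrative. Here is a minimal sketch under made-up parameter values; the contagion and forgetting rates below are illustrative assumptions, not estimates from Shiller's lecture:

```python
# Minimal SIR sketch of narrative contagion: "infected" = currently telling
# the story. beta = contagion rate (how readily the story spreads);
# gamma = recovery rate (how quickly people lose interest).
# Parameter values are illustrative only.

def sir_epidemic(beta=0.3, gamma=0.1, i0=0.001, days=200, dt=1.0):
    """Return the fraction of the population telling the story each day."""
    s, i, r = 1.0 - i0, i0, 0.0
    path = []
    for _ in range(days):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        path.append(i)
    return path

path = sir_epidemic()
peak_day = max(range(len(path)), key=lambda t: path[t])
print(f"Narrative peaks on day {peak_day} at {path[peak_day]:.1%} of the population")
```

Because the contagion rate exceeds the recovery rate here, the narrative first grows, peaks, and then declines, producing exactly the hump-shaped rise and fall Shiller describes for narratives like "supply-side economics."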

As one illustration of the rise and fall of economic-related narratives, Shiller uses “n-grams,” which track how often certain terms appear in news media. Examples of such terms shown in this graph include “supply-side economics,” “welfare dependency,” “welfare fraud,” and “hard-working American.”

Shiller’s theme is that if we want to understand macroeconomic fluctuations, it won’t be enough just to look at patterns of interest rates, trade, or innovation, and it won’t be enough to include factors like real-life pandemics, either. The underlying real factors matter, of course. But the real factors are often translated into narratives, and it is those narratives which then affect economic actions about buying, saving, working, starting a business, and so on. Shiller writes: “As this research continues, there should come a time when there is enough definite knowledge of the waxing and waning of popular narratives that we will begin to see the effects on the aggregate economy more clearly.”

I’ll only add the comment that there can be a tendency to ascribe narratives only to one’s opponents: that is, those with whom I disagree are driven by “narratives,” while those with whom I agree are of course pure at heart and driven only by facts and the best analysis. That inclination would be a misuse of Shiller’s approach. In many aspects of life, enunciating the narratives that drive our own behavior (economic and otherwise) can be hard and discomfiting work.

For some additional background on these topics: 

For a readable introduction to epidemiology models aimed at economists, a useful starting point is the two-paper “Symposium on Economics and Epidemiology” in the Fall 2020 issue of the Journal of Economic Perspectives: “An Economist’s Guide to Epidemiology Models of Infectious Disease,” by Christopher Avery, William Bossert, Adam Clark, Glenn Ellison, and Sara Fisher Ellison; and “Epidemiology’s Time of Need: COVID-19 Calls for Epidemic-Related Economics,” by Eleanor J. Murray.

For those who would like to know more about “animal spirits” in economics, a 1991 article in the Journal of Economic Perspectives by Roger Koppl discusses the use of the term by John Maynard Keynes and then gives a taste of the intellectual history: for example, Keynes apparently got the term from Descartes, and it traces back to the second-century Greek physician Galen.

The Allure and Curse of Mini-Assignments

During the shift to online courses, many teachers and students have had the feeling that they are working harder and accomplishing less. In its own way, this feeling is a tribute to the virtues of in-person education. Betsy Barre offers some hypotheses as to why higher education has a feeling of getting less output from more input in “The Workforce Dilemma” (January 22, 2021, Center for Teaching and Learning, Wake Forest University).

I recommend the short essay as a whole. But the part that resonated most with me discussed how attempts by teachers to use online tools as a way of encouraging and monitoring short-term academic progress can end up making everyone feel crazy. Barre writes:

The most interesting of all six hypotheses, and the one I’ve thought the most about, is that our experience this semester has revealed an unfortunate truth about how teaching and learning took place prior to the pandemic. This theory … suggests that students are experiencing more work because of a fundamental difference between online courses and the typical in-person course. While there may be no difference in how much work is expected of students in these courses, there is often a difference in how much work is required.

Most faculty would agree that students should be spending 30 hours a week on homework in a traditional 15-credit semester, but we also know that the average student taking in-person courses is able to get by on about 15 hours a week. This is not surprising to most faculty, as we know that students aren’t always doing the reading or coming to class prepared. Here and there a course might require the full amount of work, but a student can usually count on some of their courses requiring less.

So what makes online courses so different? In an online course, faculty can see, and students are held accountable for, all expected work. In an in-person class, students can sometimes skip the reading and passively participate in class. But in an online course, they may have to annotate the reading, take a quiz, or contribute to a discussion board after the reading is complete. While this shift would be uncomfortable for students in the case of one course, shifting all of their courses in this direction would, in fact, double their workload and entail a radical reworking of their schedules. …

Mini-assignments are often well-meant. The idea is to keep students involved and on pace, and for certain kinds of classes and for many students I’m sure it works fine. But a steady stream of graded mini-assignments also takes time, organization, and energy for both faculty and students. Barre again:

We’ve also encouraged faculty to follow best practices by breaking up a few large assignments into multiple smaller ones. When this happens across five courses, 10 assignments can suddenly convert to 50. While those 50 assignments may take no more time than the original 10, simply keeping track of when they are due is a new job unto itself. In each of these cases, the cognitive load we are placing on students has increased, adding invisible labor to the time they spend completing the work.

There are some workplaces where every keystroke on the computer can be monitored. Most teachers and students do not aspire to have the learning experience function in this way. But a continual stream of mini-assignments moves higher education closer to that model. 

Interview with Seema Jayachandran: Women and Development, Deforestation, and Other Topics

Douglas Clement and Anjali Nair have collaborated to produce a “Seema Jayachandran interview: On deforestation, corruption, and the roots of gender inequality” (Federal Reserve Bank of Minneapolis, February 12, 2021). Here are a couple of samples:

The U-shaped relationship between economic development and women’s labor force participation

There’s a famous U-shaped relationship in the data between economic development and female labor force participation. … Historically, in richer countries, you’ve seen this U-shape where, initially, there are a lot of women working when most jobs are on the family farm. Then as jobs move to factories, women draw out of the labor force. … But then there’s an uptick where women start to enter the labor market more and not just enter the labor market, but earn more money. There are several reasons why we think that will happen.

One is structural transformation, meaning the economy moves away from jobs that require physical strength like in agriculture or mining towards jobs that require using brains. … For example, the percentage of the economy in services is higher in the U.S. compared to Chad, and service jobs are going to advantage women. So that’s one reason that economic development helps women in the labor market.

The second reason is improvement in household production. Women do the lion’s share of household chores and, as nations develop, they adopt technology that reduces the necessary amount of labor. Chores like cooking and cleaning now use a lot more capital. We use machines like vacuum cleaners, washing machines, or electric stoves rather than having to go fetch wood and cook on a cookstove. This labor-saving technology frees up a lot of women’s time because those chores happen to be disproportionately women’s labor. Some of those technological advances are in infrastructure. Piped water, for instance, where we’re relying on the government or others to build that public good infrastructure. And some is within households; once piped water is available, households invest in a washing machine.

The third reason is fertility. When countries grow richer, women tend to have fewer kids and have the ability to space their fertility. For example, both the smaller family size and the ability to choose when you have children allows women to finish college before having children.  … Less on the radar is that childbearing has also gotten a lot safer over time. There’s some research on the U.S. by Stefania Albanesi and Claudia Olivetti suggesting that reduction in the complications from childbirth are important in thinking about the rise in female labor force participation.

Paying landowners in western Uganda to prevent deforestation

In many developing countries, people are clearing forests to grow some cassava or other crop to feed their family. Obviously, that’s really important to them. You wouldn’t want to ban them from doing that. They’d go hungry! But if we think about it in absolute terms and global terms, the income people are generating by clearing forests is small. If we can encourage them to protect the forest and compensate them for the lost income, then protecting the forest actually makes them better off than clearing it. And because the income they’re forgoing is small in global terms, that could cost a lot less than other ways of reducing carbon emissions. …

This is a truly interdisciplinary project. One of my collaborators is a specialist in remote sensing, which is analyzing satellite data to measure land use and forests. It’s similar to the machine learning that economists use often. But here we use high-resolution satellite imagery, where a single pixel covers 2.4 meters by 2.4 meters of surface area. 

If I showed you one of our images, you could spot every tree with your eye. Of course, there are 300 million pixels in the area we have imagery for, so you don’t want to go and hand-classify all those trees. But we have the algorithms and the techniques to classify all of those pixels into whether there’s a tree or not. We have this imagery for both the control villages where the program wasn’t in place and the treatment villages where it was, where landowners were paid to not cut their trees. So we could see before-and-after images of what happened in both control and treatment villages.

By doing that, we could see that in the control villages over this two-year period, 9 percent of the tree cover that existed at the beginning was gone. That’s a really rapid rate of deforestation. … By comparison, in the villages with this program, the rate of tree loss was cut in half, closer to 4 to 5 percent. There’s still tree loss—not everybody wanted to participate in the program—but the program made a pretty big dent in the problem.

Another thing the high-resolution imagery shows is the pattern of tree-cutting, and that showed that we’ve been underestimating the rate of deforestation in poor countries. On relatively low-resolution satellite imagery, we could see clear-cutting of acres and acres of land. That is an important problem. But recent estimates suggest that, especially in Africa, half of the deforestation is smaller landholders who are cutting four or five trees in a year to pay for a hospital bill, say. That adds up.
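The pixel-counting comparison the interview describes boils down to simple arithmetic on classified imagery. A toy sketch, with a tiny made-up grid standing in for the satellite data (the real project classifies hundreds of millions of pixels):

```python
# Toy sketch of the before-and-after satellite comparison: each array is a
# grid of pixels classified True (tree) or False (no tree). The tree-loss
# rate is the share of initially treed pixels that are gone in the later
# image. These arrays are made-up toy data, not the study's imagery.

def tree_loss_rate(before, after):
    """Fraction of 'tree' pixels in `before` no longer trees in `after`."""
    trees_before = sum(cell for row in before for cell in row)
    lost = sum(b and not a
               for row_b, row_a in zip(before, after)
               for b, a in zip(row_b, row_a))
    return lost / trees_before

before = [[True, True, False],
          [True, True, True]]
after  = [[True, False, False],
          [True, True, False]]   # two of five trees are gone

print(f"Tree loss: {tree_loss_rate(before, after):.0%}")  # Tree loss: 40%
```

Running the same calculation separately on control-village and treatment-village imagery is what produces the 9 percent versus 4-5 percent comparison quoted above.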

The Coin Shortage: Velocity Stories

In high school, “velocity” referred to distance traveled divided by time. In economics, “velocity” refers to the speed with which money circulates. The formula is V = GDP/M: that is, take the size of GDP for a year and divide by a measure of the money supply. Velocity then tells you how many times that money circulated through the economy in that year.
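As a quick worked example of the formula, with round illustrative numbers rather than actual US figures:

```python
# Velocity of money: V = nominal GDP / money stock.
# The dollar amounts below are round illustrative numbers, not US data.

def velocity(nominal_gdp, money_stock):
    return nominal_gdp / money_stock

gdp = 21_000   # nominal GDP, $ billions per year
m = 15_000     # money stock, $ billions

v = velocity(gdp, m)
print(f"Each dollar turned over {v:.1f} times this year")  # 1.4 times
```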

During the pandemic, the velocity of money has slowed way down. One manifestation is the shortage of coins at many retailers. Tim Sablik tells the story in “The COVID-19 pandemic disrupted the supply of many items, including cold hard cash” (Econ Focus: Federal Reserve Bank of Richmond, Fourth Quarter 2020, pp. 26-29). One signal came from the coin laundries. Sablik writes:

“I started getting a few phone calls from members asking, ‘Is it just me, or are more quarters walking out the door than before?’” says Brian Wallace, president of the Coin Laundry Association. Of the roughly 30,000 self-service laundromats in the United States, Wallace says that a little more than half take only quarters as payment to operate washers and dryers. Before the pandemic, some of these coin-operated businesses would take in more quarters each week than they gave out, meaning that most customers brought their own change to the laundromat rather than exchanging bills for quarters. But as the pandemic intensified, many of those business owners who had been used to ending the week with a surplus of quarters suddenly found they had a deficit. They turned to their local bank to purchase more, but the banks had no change to spare either.

In June 2020, the Federal Reserve started rationing the supply of coins. In an absolute sense, there didn’t seem to be an overall shortage of coins. There are about $48 billion of coins in circulation, and that total didn’t fall. Instead, with people paying more bills online and with debit or credit cards, the velocity of circulation for coins dropped, falling by about half.

You may not have been aware, as I was not, that the Fed created a “US Coin Task Force” to get those coins moving again, nor that last October was “Get Coin Moving Month.” However, “[o]ne aquarium in North Carolina shuttered by the pandemic put its employees to work hauling 100 gallons of coins from one of its water fixtures that had served as a wishing well for visitors since 2006.”

Of course, the drop in velocity of money isn’t just coins, but involves the money supply as a whole. The Federal Reserve offers several textbook definitions of the money supply, with differing levels of breadth.

There are several standard measures of the money supply, including the monetary base, M1, and M2.

  • The monetary base: the sum of currency in circulation and reserve balances (deposits held by banks and other depository institutions in their accounts at the Federal Reserve).
  • M1: the sum of currency held by the public and transaction deposits at depository institutions (which are financial institutions that obtain their funds mainly through deposits from the public, such as commercial banks, savings and loan associations, savings banks, and credit unions).
  • M2: M1 plus savings deposits, small-denomination time deposits (those issued in amounts of less than $100,000), and retail money market mutual fund shares.

Here’s one figure showing velocity of M1 over time, and another showing velocity of M2. In both figures, you can see that velocity has been on a downward path, although the path looks different depending on the measure of the money supply. You can also see the abrupt additional fall in velocity when the pandemic recession hit.

There was a time, back in the 1970s, when the velocity of M1 looked fairly steady and predictable, climbing slowly over time. Thus, some monetary economists, most prominently Milton Friedman, argued that the Federal Reserve should just focus on having the money supply grow steadily over time to suit the needs of the economy. But when M1 velocity first flattened out and then started jumping around in the 1980s, it was clear that focusing on M1 was not a good policy target, and when M2 started moving around in the 1990s, it didn’t look like a suitable target either. At present, velocity is not especially interesting as a direct part of Fed policy, but it continues to be interesting for what it tells us about changes in how transactions and payments flow across the economy.

What Gets Counted When Measuring US Tax Progressivity

The “progressivity” of a tax refers to whether those with higher incomes pay a higher share of income in taxes than those with lower incomes. The federal income tax is progressive in this sense.

However, other federal taxes, like the payroll taxes that support Social Security, are regressive rather than progressive, because the Social Security payroll tax applies only to income up to a limit (set at $142,800 in 2021). The justification is that Social Security taxes combine a degree of redistribution with a sense of contributing to one’s own future Social Security benefits. To take it one step further, one justification for the Earned Income Tax Credit is that it serves in part to offset the Social Security payroll taxes paid by lower-earning families and individuals.

So with all this taken into account, how progressive is the federal income tax, and how has the degree of progressivity shifted in recent decades? David Splinter tackles these questions in “U.S. Tax Progressivity and Redistribution” (National Tax Journal, December 2020, 73:4, 1005–1024).

It’s worth emphasizing that any measure of tax progressivity is based on an underlying set of assumptions about what is counted as “income” or as “taxes.” Let me give some examples:

The Earned Income Tax Credit is “refundable.” Traditional tax credits can reduce the taxes you owe down to zero, but a “refundable” credit means that you can qualify for a payment from the government above and beyond any taxes you owe: indeed, the purpose of this tax credit is to provide additional income and an additional incentive to work for low-income workers. This tax credit cost about $70 billion in 2019. But for purposes of categorization, here’s a question: Should these payments from the federal government to low-income individuals be treated as part of the progressivity of the tax code? Or should they be treated as a federal spending program? Of course, treating them as part of the tax code tends to make the tax code look more progressive.
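The mechanical difference between a traditional and a refundable credit can be shown in a few lines. This is a stylized sketch of the mechanism only, with made-up dollar amounts, not the actual EITC phase-in and phase-out rules:

```python
# Traditional vs. refundable tax credit (stylized, not actual EITC rules):
# a traditional credit can only push taxes owed down to zero, while a
# refundable credit can turn into a net payment from the government.

def after_credit(tax_owed, credit, refundable):
    if refundable:
        return tax_owed - credit       # can go negative: a net payment
    return max(tax_owed - credit, 0)   # stops at zero

print(after_credit(500, 2000, refundable=False))  # 0
print(after_credit(500, 2000, refundable=True))   # -1500 (a $1,500 payment)
```

The negative number in the refundable case is exactly the payment whose classification, as tax reduction or as spending, drives the measurement question above.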

Here’s another example: When workers pay taxes for Social Security and Medicare, employers also pay a matching amount. However, a body of research strongly suggests that the amount paid by employers leads to lower wages for workers; in effect, workers “pay” the employer share of the payroll tax in the form of lower take-home pay, even if employers sign the check. (After all, employers care about the total cost of hiring a worker. They don’t care whether that money is paid directly to the worker or whether some of it must be paid to the government.) So when looking at the taxes workers pay, should the employer share of the payroll tax be included?

Here’s another example: Many of us have retirement accounts, where our employer signs the checks for the contributions to those accounts and the total in the account is invested in a way that provides a return over time. Do the employer contributions to retirement get counted as part of annual income? What about any returns earned over time?

Or here’s another example: Say that I own a successful business. There are a variety of ways I can benefit from owning the business other than the salary I receive: for example, the business might buy me a life insurance policy, or pay for a car or other travel expenses, or make donations to charities on my behalf. Are these counted into income?

There are many questions like these, and as a result, measurements of average taxes paid for each income group will vary. Here’s a selection of six recent estimates, as collected by Splinter:

Again, these are just federal tax rates, not including state and local taxes. Notice that the estimates vary, but also that they are broadly similar.

One way to boil down the progressivity of the federal tax code into a single number is to use the Kakwani index. The diagram illustrates how it works. The horizontal axis is a cumulative measure of all individuals; the vertical axis is a cumulative measure of either income received or taxes paid by society as a whole. The dashed 45-degree line shows what complete equality would look like: that is, along that line, the bottom 20 percent of individuals get 20 percent of income and pay 20 percent of taxes, the bottom 40 percent get 40 percent of income and pay 40 percent of taxes, and so on.

The idea is to compare the real-world distribution of income and taxes to this hypothetical line of perfect equality. The lighter solid line shows the distribution of income. Roughly speaking, the bottom 50 percent of individuals received about 20 percent of total income in 2016. The area from the lighter gray line to the 45-degree perfect-equality line measures what is called the “Gini index,” a standard measure of the inequality of the income distribution.

The dark line carries out a similar calculation for share of taxes paid. For example, the figure shows that the bottom 50 percent of the income distribution paid roughly 10 percent of total federal taxes in 2016. If the distribution of taxes paid exactly matched the distribution of income, the tax code would be proportional to income. Because the tax line falls below the income line, federal taxes are, overall, progressive. The area between the income line and the tax line is the Kakwani index for measuring the amount of progressivity.
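The geometry just described translates directly into a calculation. The sketch below uses made-up cumulative shares, not Splinter's data: it computes the Gini index from the income curve, the analogous concentration coefficient for taxes, and takes the difference as the Kakwani index, where a positive value indicates progressivity:

```python
# Kakwani index sketch: the concentration coefficient of taxes paid minus
# the Gini coefficient of income. Each coefficient is twice the area between
# the 45-degree equality line and the cumulative-share curve. The shares
# below are illustrative made-up numbers, not Splinter's estimates.

def concentration_coeff(pop, share):
    """Twice the area between the 45-degree line and the share curve."""
    # Trapezoid rule for the area under the cumulative-share curve.
    area_under_curve = sum((share[k] + share[k - 1]) / 2 * (pop[k] - pop[k - 1])
                           for k in range(1, len(pop)))
    return 1 - 2 * area_under_curve   # 0 = perfect equality

pop          = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]    # cumulative population
income_share = [0.0, 0.05, 0.15, 0.30, 0.55, 1.0]  # cumulative income
tax_share    = [0.0, 0.02, 0.08, 0.20, 0.45, 1.0]  # cumulative taxes paid

gini = concentration_coeff(pop, income_share)
kakwani = concentration_coeff(pop, tax_share) - gini
print(f"Gini: {gini:.2f}, Kakwani: {kakwani:.2f}")  # Gini: 0.38, Kakwani: 0.12
```

Because the tax curve bows below the income curve, the tax concentration coefficient exceeds the Gini index, and the Kakwani index comes out positive, the signature of a progressive tax system.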

How has the Kakwani index shifted over recent decades? Splinter writes:

Between 1979 and 1986, the Kakwani index decreased 34 percent (from 0.14 to 0.10) … Between 1986 and 2016, the Kakwani index increased 120 percent (from 0.10 to 0.21) … For the entire period between 1979 and 2016, the Kakwani index increased 46 percent (from 0.14 to 0.21) …

In short, progressivity of federal taxes fell early in the Reagan administration, rose fairly steadily up to about 2009, and then was more-or-less flat through 2016. 

Splinter is using the Congressional Budget Office estimates of income and taxes in making these calculations. CBO estimates are mainstream, but of course that doesn’t make them beyond question. In particular, a key assumption here is that payments made by the Earned Income Tax Credit are treated as part of the tax system, rather than as a spending program, and that explains a lot of why the progressivity of the tax code increased by this measure. As Splinter writes:

U.S. federal taxes have become more progressive since 1979, largely due to more generous tax credits for lower income individuals. Though top statutory rates fell substantially, this affected few taxpayers and was offset by decreased use of tax shelters, such that high-income average tax rates have been relatively stable. … Over the longer run, earlier decreases suggest a U-shaped tax progressivity curve since WWII, with the minimum occurring in 1986.

For more arguments and details about how to measure income and wealth, a useful starting point is a post from a couple of months ago on “What Should be Included in Income Inequality?” (December 23, 2020).

Judges and Ideology

When judges are going through confirmation hearings, they tend to make comments about how they will act as a neutral umpire, not taking sides and following the law. As one representative example, here’s a comment from the statement of current US Supreme Court Chief Justice John Roberts when he was nominated back in 2005:

I have no agenda, but I do have a commitment. If I am confirmed, I will confront every case with an open mind. I will fully and fairly analyze the legal arguments that are presented. I will be open to the considered views of my colleagues on the bench, and I will decide every case based on the record, according to the rule of law, without fear or favor, to the best of my ability, and I will remember that it’s my job to call balls and strikes, and not to pitch or bat.

I will not here try to peer inside the minds of judges and determine the extent to which such statements are honest or cynical. But I will point out that there is strong evidence that many judicial decisions have a real ideological component, in the sense that it’s easy to find judges who reach systematically different conclusions, even when they have both promised to follow the rule of law without fear or favor. The Winter 2021 issue of the Journal of Economic Perspectives includes two papers on this topic:

Bonica and Sen lay out the many ways that social scientists have used to measure judicial ideology. I’ll mention some of the approaches here. It will be immediately obvious that none of the approaches is bulletproof. But the key point to remember is that when one compares these rather different ways of measuring judicial ideology, you get reasonably similar answers about which judges fall into which categories.

For example, the Supreme Court Database at Washington University in St. Louis classifies all Supreme Court decisions back to 1946 using various rules:

As an example, the liberal position on criminal cases would be the one generally favoring the criminal defendant; in civil rights cases, the liberal position would be the one favoring the rights of minorities or women, while in due process cases, it would be the anti-government side. For economic activity cases—which make up a perhaps surprisingly large share of the Supreme Court’s docket—the liberal position will be the pro-union, anti-business, or pro-consumer stance. For cases involving the exercise of judicial power or issues of federalism, the liberal position would be the one aligned with the exercise of federal power, although this may depend on the specific issues involved. Finally, some decisions are categorized as “indeterminate,” such as a boundary dispute between states.

Another approach looks at the process by which a judge is appointed, which can include the party of the president doing the appointing; for federal judges appointed to district or appeals courts, one might also take into account the party of the US senators from that area. A more sophisticated version of this approach seeks to estimate the ideology of the president or the senators involved, thus recognizing that not all Republicans and Democrats are identical. Yet another approach categorizes judges according to the political campaign contributions they made before being appointed, or according to the contributions made by those whom the judges choose to be law clerks. Another line of research looks at newspaper editorials about Supreme Court judges during their confirmation hearings, and how they match with other measures like the categories above. There is some recent work using text-based analysis to categorize the ideology of judges according to their use of certain terms.

Yet another approach ignores the content of judicial decisions, and instead just looks at voting patterns. An approach called Martin-Quinn scores is based on the idea that the ideology of judges can be positioned along a line. At one extreme, if there were two groups of judges that always agreed with each other but always disagreed with the other group, they would be at opposite ends of the line. If there were some other judges who voted 50:50 with one extreme group or the other, they would be in the middle of the line. Using these kinds of calculations and the voting records for each term, one can even see how a judge may evolve over time away from the extreme and toward the middle, or vice versa.
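The votes-only idea can be illustrated with a crude sketch. This is not the actual Martin-Quinn method, which fits a statistical model to the voting records; it just shows that pairwise agreement rates alone, with no labeling of any decision as liberal or conservative, are enough to order judges along a line. The vote matrix is made up:

```python
# Crude sketch of positioning judges on a line from voting patterns alone.
# Rows are judges, columns are cases; 1/0 record which side each judge took.
# This toy version anchors the two judges who disagree most as the poles and
# places everyone else by relative agreement with each pole. The votes are
# made-up toy data, and this is not the real Martin-Quinn estimator.

from itertools import combinations

votes = {
    "A": [1, 1, 1, 0, 1, 1],
    "B": [1, 1, 1, 0, 1, 0],
    "C": [1, 0, 1, 1, 0, 0],
    "D": [0, 0, 0, 1, 0, 0],
}

def agreement(x, y):
    """Fraction of cases on which two judges voted the same way."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

# Poles: the pair of judges with the lowest agreement rate.
left, right = min(combinations(votes, 2),
                  key=lambda pair: agreement(votes[pair[0]], votes[pair[1]]))

def position(judge):
    """Position in [0, 1]: relative closeness to the 'right' pole."""
    a_left = agreement(votes[judge], votes[left])
    a_right = agreement(votes[judge], votes[right])
    return a_right / (a_left + a_right)

for judge in votes:
    print(judge, round(position(judge), 2))
```

In this toy data, the two judges who always disagree land at 0 and 1, and the judges who split their votes between the camps land in between, which is the intuition behind reading the scores as an ideological spectrum.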
Here’s what the Martin-Quinn scores for Supreme Court judges look like going back to 1946. Blue lines are judges appointed by Democrats; red lines, by Republicans. There is a different score for each judicial term, and the scores can thus evolve over time. The members of the Supreme Court as it stood before the appointment of Amy Coney Barrett last fall are labelled by name. Again, remember that these scores are not based on anyone making decisions about what is “conservative” or “liberal,” but only on similarities in actual voting patterns.

It’s perhaps not a surprise that the red and blue lines tend to be separated. But looking back a few decades, you can also see some overlap in the red and blue lines. That overlap has now gone away.

While research on the Supreme Court gets the most attention, there is also ongoing work looking at lower-level federal courts as well as state and local court systems. For example, a body of academic research points out that some judges are known to be less likely to grant bail or more likely to impose severe sentences. Given that judges are often assigned to cases at random, when the judge is available, this means that there are cases where very similar defendants are treated quite differently by the courts, just based on which judge they randomly got. For justice, this is a bad outcome. For researchers, it can be a useful tool for figuring out whether those who randomly got bail, or got a shorter sentence, have different long-term outcomes in terms of recidivism or other life outcomes than those who randomly did not get bail or ended up with a longer sentence.
At some basic level, it isn’t shocking that judges are human and have ideological differences. Indeed, sports fans will know that even among referees, there are some who are more likely to call penalties, and among baseball umpires, some who are more likely to call strikes. After all, the reason we need judges is that laws and their application are not fixed and indisputable. One might even argue that it’s good to have a distribution of judges with at least somewhat different views, because that’s how the justice system evolves.
That said, is there some kind of judicial reform that might reduce the role of judicial ideology and/or turn down the temperature of judicial confirmation hearings? Hemel talks about a range of proposals, including ideas like a mandatory retirement age or fixed 18-year terms for Supreme Court judges. For various reasons, he’s skeptical that the situation is as historically unique as is sometimes suggested, or that most of the proposals will make much difference.
One of the interesting facts that Hemel points out along the way is that the US Supreme Court has experienced “the fall of the short-term justice.” He writes: “Over the court’s history, 40 justices have served for ten years or less. None of these quick departures occurred in the last half-century. Several factors have contributed to the fall of the short-term justice. Fewer are dying young, and no justice since Fortas in 1969 has been forced to depart in disgrace. The justices also are now less likely to leave the court to pursue political careers in other branches. Contrast this with Charles Evans Hughes, who left the court in 1916 to accept the Republican nomination for president, and James Byrnes, who would serve as US Secretary of State and Governor of South Carolina in his post-judicial life.” We have instead evolved to a situation where justices leave the court only because of infirmity or death.

Hemel offers a different kind of reform that he characterizes as a “thought experiment,” which is at least useful for expanding the set of policy options. The idea is to break the rule that a judge can only be added to the court when another judge leaves. Hemel writes:

Decoupling could be implemented as follows. Each president would have the opportunity to appoint two justices at the beginning of each term, regardless of how many vacancies have occurred or will occur. Those justices would join the bench at the beginning of the next presidential term. For example, President Trump, upon taking office in January 2017, would have had the opportunity to make two appointments. Those appointees—if confirmed—would receive their commissions in January 2021. The retirement or death of a justice would have no effect on the number of appointments the sitting president could make. Justices would continue to serve for life. Decoupling thus shares some similarities with the norm among university faculties, where senior members enjoy life tenure but the departure of one does not automatically and immediately trigger the addition of a new member.

The decoupling proposal would result in an equal allocation of appointments across presidential terms, though that is not its principal advantage. It would create new opportunities for compromise when the White House and Senate are at daggers drawn: Because appointments would come in pairs, a Democratic president could resolve an impasse with a Republican Senate (or vice versa) by appointing one liberal and one conservative. It would significantly reduce the risk that a substantial number of justices would be subject to the loyalty effect, since no more than two justices would ever be appointees of the sitting president (and only in that president’s second term). The loyalty effect could be eliminated entirely by modifying the plan so that justices receive their commission only after the president who appointed them leaves office (that is, if Trump had been reelected in 2020, none of his appointees would join the court until January 2025).

The plan would likely have a modest effect on the size of the court. The mean tenure of justices who have left the court in the last half-century (since 1970) is 26.4 years, though one might expect tenure to be shorter if appointees had to wait four (or eight) years between confirmation and commission. If justices join the court at a slightly faster rate than they depart, the gradual growth in the court’s size would be tolerable. … A larger court would serve the objective sometimes cited by term-limit proponents of reducing the influence of any individual jurist’s idiosyncrasies over the shape of American law. It would also likely lessen the macabre obsession with the health of individual older justices.

Hemel also argues that these changes could be implemented via ordinary legislation. I don’t have a well-developed opinion on this kind of proposal, but I had not heard the proposal before, and it seemed as worthy of consideration as some of the better-known ideas.