In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”
The Economist titled its article on European corporations: “The land that ambition forgot. Europe is now a corporate also-ran. Can it recover its footing?” (June 5, 2021). The article is well worth reading, but here are a couple of snapshots and my own reactions. Notice in particular that these changes are fairly recent. The horizontal axis in these graphs starts only two decades ago. The share of EU companies among the largest in the world has been declining: “In 2000 nearly a third of the combined value of the world’s 1,000 biggest listed firms was in Europe, and a quarter of their profits. In just 20 years those figures have fallen by almost half.”
Here’s the EU share of the global economy, and also the stock market capitalization of EU companies compared to global stock market capitalization. The message here is not just that both shares have declined substantially. Notice also that back around 2000 the EU share of the global economy and the share of the EU in global stock market capitalization were roughly the same, but that is no longer true.
Europe has often been a world leader in drawing up rules and regulations that companies must follow, in areas including digital privacy, environmental protection, use of genetic modification technologies, and so on. However, judging by performance, the EU countries have overall not been an especially friendly place to start or run a company. The European Union itself remains a fractured economic zone, separated by barriers set up by national governments, as well as by language and cultural differences. The Economist writes that big EU companies have in recent decades preferred to expand their sales and operations overseas, rather than in their home base. Companies are social mechanisms both for organizing current and future production, and also for planning and investment needed for future innovations in production methods and new products. Europe has a smaller share of these engines of production.
The NAS report compares the US to 16 “peer countries,” which are other countries with a high level of per capita income and well-developed health care systems. (The 16 countries are Australia, Austria, Canada, Denmark, Finland, France, Germany, Italy, Japan, Norway, Portugal, Spain, Sweden, Switzerland, the Netherlands, and the United Kingdom.) The two panels below compare life expectancy going back to 1950 for females and males in the US (red line) and the average of the peer countries (blue line). The almost-invisible gray lines show each of the 16 peer countries separately.
For US women, life expectancy was slightly above that of the peer group in 1950, but starting around 1980 a divergence began. For US men, life expectancy was similar to the peer group but a divergence also began in the 1980s. For both US men and women, life expectancy seems to have flattened out in the last decade or so. The report also does a breakdown of the same data by racial/ethnic status. In this figure, the red line shows only white US females and males. The dotted line shows non-white Hispanics, and is available only for recent years, but it pretty much overlaps the peer group. The dashed line shows black Americans. There is still a life expectancy gap between white and black Americans, but the gap has generally been declining over time. The levelling out of US life expectancies for the 25-64 age group in the last few decades has been largely a phenomenon affecting white Americans.
Notice that this comparison is not about infant or child mortality rates, nor is it about life expectancy for the elderly. Indeed, life expectancies for US infants, for children under the age of 10, and for adults who have already reached their 80s are higher than those for the peer group of countries. It’s the in-between age group where the difference arises. As one digs into these patterns more closely, here are some of the details that emerge:
The committee identified three categories of causes of death that were the predominant drivers of trends in working-age mortality over the period: (1) drug poisoning and alcohol-induced causes, a category that also includes mortality due to mental and behavioral disorders, most of which are drug- or alcohol-related; (2) suicide; and (3) cardiometabolic diseases. The first two of these categories comprise causes of death for which mortality increased, while the third encompasses some conditions (e.g., hypertensive disease) for which mortality increased and others (e.g., ischemic heart disease) for which the pace of declining mortality slowed. …
[I]ncreasing mortality among U.S. working-age adults is not new. The committee’s analyses confirmed that a long-term trend of stagnation and reversal of declining mortality rates that initially was limited to younger White women and men (aged 25–44) living outside of large central metropolitan areas (seen in women in the 1990s and men in the 2000s), subsequently spread to encompass most racial/ethnic groups and most geographic areas of the country. As a result, by the most recent period of the committee’s analysis (2012–2017), mortality rates were either flat or increasing among most working-age populations. Although this increase began among Whites, Blacks consistently experienced much higher mortality. …
Over the 1990–2017 period, disparities in mortality between large central metropolitan and less-populated areas widened (to the detriment of the latter), and geographic disparities became more pronounced. Mortality rates increased across several regions and states, particularly among younger working-age adults, and most glaringly in central Appalachia, New England, the central United States, and parts of the Southwest and Mountain West. Mortality increases among working-age (particularly younger) women were more widespread across the country, while increases among men were more geographically concentrated.
Regarding socioeconomic status, the committee’s literature review revealed that a large number of studies using different data sources, measures of socioeconomic status, and analytic methods have convincingly documented a substantial widening of disparities in mortality by socioeconomic status among U.S. working-age Whites, particularly women, since the 1990s. Although fewer studies have examined socioeconomic disparities in working-age mortality among non-White populations, those that have done so show a stable but persistent gap in mortality among Black adults that favors those of higher socioeconomic status.
Many of these factors overlap in various ways, and the subject as a whole is not well-understood. A substantial portion of the NAS report is a call for additional research. But if I had to extrapolate from the available data, one pattern that seems common is about parts of the US feeling separated and isolated, either by urban/nonurban status or by socioeconomic status. The specific causes of death contributing to the pattern seem to share the trait that they are potentially worsened by life and economic stress. Although the report is about long-term trends, not the pandemic, it does offer the insight that COVID-19 has added to the disparity of mortality rates for working-age adults. Yes, the elderly accounted for by far the largest share of COVID-19 deaths. But if one looks in percentage terms, the report notes:
Thus, COVID-19 has reinforced and exacerbated existing mortality disparities within the United States, as well as between the United States and its peer countries. The CDC reported that adults aged 25–44 experienced the largest percentage increases in excess deaths during the pandemic (as of October 2020).
What distinguishes “complexity economics”? W. Brian Arthur offers a short, readable overview in “Foundations of complexity economics” (Nature Reviews Physics 3: 136–145, 2021). This is a personal essay, rather than a literature review. For example, Arthur explains how the modern research agenda for complexity economics emerged from work at the Santa Fe Institute in the late 1980s. How is complexity economics different from regular economics?

Complexity economics sees the economy — or the parts of it that interest us — as not necessarily in equilibrium, its decision makers (or agents) as not superrational, the problems they face as not necessarily well-defined and the economy not as a perfectly humming machine but as an ever-changing ecology of beliefs, organizing principles and behaviours.

How does a researcher do economics in this spirit? A common approach is to describe, in mathematical terms, a number of decision-making agents within a certain setting. The agents start off with a range of rules for how they will perceive the situation and how they will make decisions. The rules that any given agent uses can change over time: the agent might learn from experience, or might decide to copy another agent, or the decision-making rule might experience a random change. The researcher can then look at the path of decision-making and outcomes that emerges from this process–a path which will sometimes settle into a relatively stable outcome, but sometimes will not. Arthur writes:

Complexity, the overall subject, as I see it is not a science, rather it is a movement within science … It studies how elements interacting in a system create overall patterns, and how these patterns, in turn, cause the elements to change or adapt in response. The elements might be cells in a cellular automaton, or cars in traffic, or biological cells in an immune system, and they may react to neighbouring cells’ states, or adjacent cars, or concentrations of B and T cells. Whichever the case, complexity asks how individual elements react to the current pattern they mutually create, and what patterns, in turn, result.

As Arthur points out, an increasingly digitized world is likely to offer a number of demonstrations of complexity theory at work.
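To make the style of modeling concrete, here is a minimal illustrative sketch of my own, not anything from Arthur’s article: a tiny “minority game,” in which agents repeatedly choose one of two sides, those in the minority win, and losing agents sometimes discard their decision rule for a fresh one. The payoff rule and the mutation scheme are assumptions for illustration only.

```python
import random

def minority_game(n_agents=51, n_rounds=300, mutate=0.05, seed=1):
    """Agents repeatedly choose side 0 or 1; whoever lands in the minority
    wins that round. Each agent's 'rule' is simply a probability of choosing
    side 1. Losing agents occasionally mutate their rule -- the kind of
    rule-updating process described in the text above."""
    rng = random.Random(seed)
    rules = [rng.random() for _ in range(n_agents)]
    history = []
    for _ in range(n_rounds):
        choices = [1 if rng.random() < p else 0 for p in rules]
        ones = sum(choices)
        minority_side = 1 if ones < n_agents / 2 else 0
        for i, choice in enumerate(choices):
            # agents on the losing (majority) side sometimes try a new rule
            if choice != minority_side and rng.random() < mutate:
                rules[i] = rng.random()
        history.append(ones)
    return history

history = minority_game()
avg = sum(history) / len(history)  # hovers near an even split, never settling
```

Even this stripped-down version displays the hallmark Arthur describes: the population hovers around a rough balance without ever converging to a fixed equilibrium, because each agent’s rule is constantly adapting to the pattern all the rules jointly create.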
Now, under rapid digitization, the economy’s character is changing again and parts of it are becoming autonomous or self-governing. Financial trading systems, logistical systems and online services are already largely autonomous: they may have overall human supervision, but their moment-to-moment actions are automatic, with no central controller. Similarly, the electricity grid is becoming autonomous (loading in one region can automatically self-adjust in response to loading in neighbouring ones); air-traffic control systems are becoming autonomous and independent of human control; and future driverless-traffic systems, in which driverless-traffic flows respond to other driverless-traffic flows, will likely be autonomous. … Besides being autonomous, they are self-organizing, self-configuring, self-healing and self-correcting, so they show a form of artificial intelligence. One can think of these autonomous systems as miniature economies, highly interconnected and highly interactive, in which the agents are software elements ‘in conversation with’ and constantly reacting to the actions of other software elements.
To put it another way, if we want to understand when these kinds of systems are likely to work well, and how they might go off the rails or be gamed, complexity analysis is likely to offer some useful tools.
But what about using complexity theory for economics in particular? As Arthur writes: “A new theoretical framework in a science does not really prove itself unless it explains phenomena that the accepted framework cannot. Can complexity economics make this claim? I believe it can. Consider the Santa Fe artificial stock market model.” For example, there’s a long-standing issue of why stock markets see short-run patterns of boom and bust. Another puzzle of stock markets is why there is so much trading of stocks. Sure, stock traders will disagree about the underlying value of stocks and about the meaning of recent news which affects perceptions of future value. Such disagreements will lead to a modest volume of stock trading, but it’s hard to see how they lead to the extremely high volumes of trading seen in modern markets. John Cochrane phrased this point nicely in a recent interview with Tyler Cowen:
Why is there this immense volume of trading? When was the last time you bought or sold a stock? You don’t do it every 20 milliseconds, do you? I’ll highlight this. If I get my list of the 10 great unsolved puzzles that I hope our grandchildren will have figured out, why does getting the information into asset prices require that the stock be turned over a hundred times? That’s clearly what’s going on. There’s this vast amount of trading, which is based on information or opinion and so forth. I hate to discount it at all just as human folly, but that’s clearly what’s going on, but we don’t have a good model.
Here is Arthur’s description of how complexity economics looks at these stock market puzzles: We set up an ‘artificial’ stock market inside the computer and our ‘investors’ were small, intelligent programs that could differ from one another. Rather than share a self- fulfilling forecasting method, they were required to somehow learn or discover forecasts that work. We allowed our investors to randomly generate their own individual forecasting methods, try out promising ones, discard methods that did not work and periodically generate new methods to replace them. They made bids or offers for a stock based on their currently most accurate methods and the stock price forms from these — ultimately, from our investors’ collective forecasts. We included an adjustable rate-of-exploration parameter to govern how often our artificial investors could explore new methods.
When we ran this computer experiment, we found two regimes, or phases. At low rates of investors trying out new forecasts, the market behaviour collapsed into the standard neoclassical equilibrium (in which forecasts converge to ones that yield price changes that, on average, validate those forecasts). Investors became alike and trading faded away. In this case, the neoclassical outcome holds, with a cloud of random variation around it. But if our investors try out new forecasting methods at a faster and more realistic rate, the system goes through a phase transition. The market develops a rich psychology of different beliefs that change and do not converge over time; a healthy volume of trade emerges; small price bubbles and temporary crashes appear; technical trading emerges; and random periods of volatile trading and quiescence emerge. Phenomena we see in real markets emerge. … I want to emphasize something here: such phenomena as random volatility, technical trading or bubbles and crashes are not ‘departures from rationality’. Outside of equilibrium, ‘rational’ behaviour is not well-defined. These phenomena are the result of economic agents discovering behaviour that works temporarily in situations caused by other agents discovering behaviour that works temporarily. This is neither rational nor irrational, it merely emerges. Other studies find similar regime transitions from equilibrium to complex behaviour in nonequilibrium models. It could be objected that the emergent phenomena we find are small in size: price outcomes in our artificial market diverge from the standard equilibrium outcomes by only 2% or 3%. But — and this is important — the interesting things in real markets happen not with equilibrium behaviour but with departures from equilibrium.
In real markets, after all, that is where the money is made. In other words, the key to understanding the dynamics of stock markets may reside in the idea that investors are continually exploring new methods of investing, which in turn leads to high volumes of trading and in some cases to dysfunctional outcomes. Of course, Arthur offers a variety of other examples, as well.
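The two-regime result can be illustrated with a toy simulation. To be clear, this is not the actual Santa Fe model: the single-parameter forecasting rules, the imitation step, and the use of forecast disagreement as a rough proxy for trading volume are all simplifications of my own. But it shows the mechanism: at a low rate of exploration, forecasts converge and disagreement dies away; at a higher rate, heterogeneous beliefs persist.

```python
import random
import statistics

def run_market(exploration_rate, n_agents=50, n_steps=300, seed=0):
    """Toy artificial market, loosely inspired by Arthur's description.
    Each agent's forecasting 'method' is a single bias parameter b: the
    agent predicts next price = price * (1 + b). The price forms from the
    mean forecast plus a small shock. Agents imitate the currently most
    accurate method, and with probability exploration_rate they discard
    their method and try a new random one. Returns the average forecast
    disagreement (a rough proxy for the volume of trade) over the second
    half of the run."""
    rng = random.Random(seed)
    biases = [rng.gauss(0.0, 0.02) for _ in range(n_agents)]
    price = 100.0
    disagreement = []
    for _ in range(n_steps):
        forecasts = [price * (1 + b) for b in biases]
        new_price = statistics.mean(forecasts) * (1 + rng.gauss(0, 0.001))
        # identify the method with the smallest forecast error this step
        errors = [abs(f - new_price) for f in forecasts]
        best = biases[errors.index(min(errors))]
        for i in range(n_agents):
            r = rng.random()
            if r < exploration_rate:
                biases[i] = rng.gauss(0.0, 0.02)   # explore a new method
            elif r < exploration_rate + 0.2:
                biases[i] = best                   # imitate the best method
        disagreement.append(statistics.pstdev(forecasts) / price)
        price = new_price
    return statistics.mean(disagreement[n_steps // 2:])

low = run_market(exploration_rate=0.001)   # forecasts converge; disagreement fades
high = run_market(exploration_rate=0.3)    # heterogeneous beliefs persist
```

Under these assumptions, the low-exploration run collapses into near-unanimous forecasts (the analogue of Arthur’s neoclassical regime, where trading fades away), while the high-exploration run sustains a persistent spread of beliefs.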
For those who would like more background on complexity economics, one starting point would be the footnotes in Arthur’s article. Another place to start is the essay by J. Barkley Rosser, “On the Complexities of Complex Economic Dynamics,” in the Fall 1999 issue of the Journal of Economic Perspectives (13:4, 169-192). The abstract reads:
Complex economic nonlinear dynamics endogenously do not converge to a point, a limit cycle, or an explosion. Their study developed out of earlier studies of cybernetic, catastrophic, and chaotic systems. Complexity analysis stresses interactions among dispersed agents without a global controller, tangled hierarchies, adaptive learning, evolution, and novelty, and out-of-equilibrium dynamics. Complexity methods include interacting particle systems, self-organized criticality, and evolutionary game theory, to simulate artificial stock markets and other phenomena. Theoretically, bounded rationality replaces rational expectations. Complexity theory influences empirical methods and restructures policy debates.
One approach to the goal of reducing carbon emissions is sometimes called “electrification of everything,” a phrase which is a shorthand for an agenda of using electricity from carbon-free sources–including solar and wind–to replace fossil fuels. The goal is to replace fossil fuels in all their current roles: not just in generating electricity directly, but also in their roles in transportation, heating/cooling of buildings, industrial uses, and so on. Even with the possibilities for energy conservation and recycling taken into account, the “electrification of everything” vision would require a very substantial increase in electricity production in the US and everywhere.
An energy system powered by clean energy technologies differs profoundly from one fuelled by traditional hydrocarbon resources. Building solar photovoltaic (PV) plants, wind farms and electric vehicles (EVs) generally requires more minerals than their fossil fuel-based counterparts. A typical electric car requires six times the mineral inputs of a conventional car, and an onshore wind plant requires nine times more mineral resources than a gas-fired power plant. Since 2010, the average amount of minerals needed for a new unit of power generation capacity has increased by 50% as the share of renewables has risen.
The types of mineral resources used vary by technology. Lithium, nickel, cobalt, manganese and graphite are crucial to battery performance, longevity and energy density. Rare earth elements are essential for permanent magnets that are vital for wind turbines and EV motors. Electricity networks need a huge amount of copper and aluminium, with copper being a cornerstone for all electricity-related technologies. The shift to a clean energy system is set to drive a huge increase in the requirements for these minerals, meaning that the energy sector is emerging as a major force in mineral markets. Until the mid-2010s, the energy sector represented a small part of total demand for most minerals. However, as energy transitions gather pace, clean energy technologies are becoming the fastest-growing segment of demand.
The IEA is careful to say that this rapid growth in demand for a number of minerals doesn’t negate the need to move to cleaner energy, and the report argues that the difficulties of increasing mineral supply are “manageable, but real.” But here is a summary list of some main concerns:
High geographical concentration of production: Production of many energy transition minerals is more concentrated than that of oil or natural gas. For lithium, cobalt and rare earth elements, the world’s top three producing nations control well over three-quarters of global output. In some cases, a single country is responsible for around half of worldwide production. The Democratic Republic of the Congo (DRC) and People’s Republic of China (China) were responsible for some 70% and 60% of global production of cobalt and rare earth elements respectively in 2019. …
Long project development lead times: Our analysis suggests that it has taken on average over 16 years to move mining projects from discovery to first production. …
Declining resource quality: … In recent years, ore quality has continued to fall across a range of commodities. For example, the average copper ore grade in Chile declined by 30% over the past 15 years. Extracting metal content from lower-grade ores requires more energy, exerting upward pressure on production costs, greenhouse gas emissions and waste volumes.
Growing scrutiny of environmental and social performance: Production and processing of mineral resources gives rise to a variety of environmental and social issues that, if poorly managed, can harm local communities and disrupt supply. …
Higher exposure to climate risks: Mining assets are exposed to growing climate risks. Copper and lithium are particularly vulnerable to water stress given their high water requirements. Over 50% of today’s lithium and copper production is concentrated in areas with high water stress levels. Several major producing regions such as Australia, China, and Africa are also subject to extreme heat or flooding, which pose greater challenges in ensuring reliable and sustainable supplies.
The policy agenda here is fairly clear-cut. Put research and development spending into ways of conserving on the use of mineral resources, and on ways of recycling them. Step up the hunt for new sources of key minerals now, and get started sooner than strictly necessary with the planning and permitting. And for supporters of clean energy in high-income countries like the United States, be aware that straitjacket restrictions on mining in high-income countries are likely to push production into lower-income countries where any such restrictions may be considerably looser.
The idea of a “compensating differential” is conceptually straightforward. Imagine two jobs that require equivalent levels of skill. However, one job is unattractive in some way: physically exhausting, dangerous to one’s health, bad smells, overnight hours, and so on. The idea of a compensating differential is that if employers want to fill these less attractive jobs, they will need to pay workers more than those workers would have received in more-attractive jobs. The existence of compensating differentials comes up in a number of broader issues. For example:

1) If you believe in compensating differentials, you are likely to worry less about health and safety regulation of jobs–after all, you believe that workers are being financially compensated for health and safety risks.

2) When discussing gender wage gaps, an issue that often comes up is to compare pay in male-dominated and female-dominated occupations. An argument is sometimes made that male-dominated occupations tend to be more physically dangerous or risky (think construction or law enforcement) or involve distasteful tasks (say, garbage collection). One justification for the pay levels in these male-dominated jobs is that they are in part a compensating differential.

3) When thinking about regulatory actions, it’s common to compare the cost of the regulation to the benefits, which requires estimating the “value of a statistical life.” Here’s one crisp explanation of the idea from Thomas J. Kniesner and W. Kip Viscusi:
Suppose further that … the typical worker in the labor market of interest, say manufacturing, needs to be paid $1,000 more per year to accept a job where there is one more death per 10,000 workers. This means that a group of 10,000 workers would collect $10,000,000 more as a group if one more member of their group were to be killed in the next year. Note that workers do not know who will be fatally injured but rather that there will be an additional (statistical) death among them. Economists call the $10,000,000 of additional wage payments by employers the value of a statistical life.
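The arithmetic behind this definition is simple enough to check directly, using the figures from the quoted example:

```python
# Checking the value-of-a-statistical-life arithmetic in the quoted example.
wage_premium = 1_000    # extra annual pay per worker to accept the riskier job
group_size = 10_000     # workers in this labor market
extra_deaths = 1        # additional statistical deaths per year in the group

total_extra_pay = wage_premium * group_size
value_of_statistical_life = total_extra_pay / extra_deaths
print(f"${value_of_statistical_life:,.0f}")  # $10,000,000
```

The “value of a statistical life” is thus just the total extra pay the group collects per additional statistical death, not a price anyone puts on an identified life.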
Notice that at the center of this calculation is the idea of a compensating differential: in this case, estimating that two jobs are essentially identical, except that one has a higher risk of injury.

4) It’s plausible that workers may sort themselves into jobs based on the preferences of those workers. Thus, workers who end up working outdoors or overnight, for example, may be more likely to have a preference for working outdoors or overnight. Those who work in riskier jobs may be people who place a lower value on such risks. It would seem unwise to assume that workers who end up in different jobs have the same personal preferences about job characteristics: my compensating differential for working in a risky job may be higher than the compensating differential for those who actually have such jobs. It’s also plausible that workers with lower income levels might be more willing to trade off higher risk for somewhat higher income than workers with higher income levels.

5) The idea that high-risk jobs are paid a compensating differential makes the labor market into a kind of health-based lottery, with winners and losers. The compensating differential is based on average levels of risk, but not everyone will have the average outcome. Those who take high-risk jobs, get higher pay, and do not become injured are effectively the winners. Those who take high-risk jobs but do become injured, and in this way suffer a loss of lifetime earnings, are effectively the losers.

6) Precise knowledge about the overall safety of jobs is likely to be very unequally distributed between the employer, who has experience with the outcomes of many different workers, and the employee, who does not have access to similar data.

7) If compensating differentials do not exist–that is, if workers in especially unattractive jobs are not compensated in some way–then it raises questions about how real-world wages are actually determined.
If most workers of a given skill level have a range of comparable outside job options, and act as if they have a range of outside options, then one might expect that an employer could only attract workers for a high-risk job by paying more. But if workers do not act as if they have comparable outside options, then their pay may not be closely linked to the riskiness or other conditions of their employment–and may not be closely linked to their productivity, either.

As you might imagine, the empirical calculation of compensating differentials is a controversial business. Peter Dorman and Les Boden make the case that it’s hard to find persuasive evidence for compensating wage differentials for risky work in their essay “Risk without reward: The myth of wage compensation for hazardous work” (Economic Policy Institute, April 19, 2021). The authors focus on the issue of occupational health and safety. They write:

Although workplaces are much less dangerous now than they were 100 years ago, more than 5,000 people died from work-related injuries in the U.S. in 2018. The U.S. Department of Labor’s Bureau of Labor Statistics (BLS) reports that about 3.5 million people sustained injuries at work in that year. However, studies have shown that the BLS substantially underestimates injury incidence, and that the actual number is most likely in the range of 5-10 million. The vast majority of occupational diseases, including cancer, lung diseases, and coronary heart disease, go unreported. A credible estimate, even before the Covid-19 pandemic, is that 26,000 to 72,000 people die annually from occupational diseases. … The United States stands poorly in international comparisons of work-related fatal injury rates. The U.S. rate is 10% higher than that of its closest rival, Japan, and six times the rate of Great Britain. This difference cannot be explained by differences in industry mix: The U.S. rate for construction is 20% higher, the manufacturing rate 50% higher, and the transportation and storage rate 100% higher than that of the E.U.

I will not try here to disentangle the detailed issues related to the research for estimating compensating wage differentials for risky jobs. Those who do such research are aware of the potential objections and seek to address them. They argue that although any individual study may be suspect, a developed body of research using different data and methods produces believable results. On the other side, Dorman and Boden make the case that such findings should be viewed with a highly skeptical eye. They also point out that during the pandemic, it is far from obvious that the “essential” workers who continued in jobs that involved a higher risk to health received a boost in wages that reflected these risks. They write:

The view of the labor market associated with the freedom-of-contract perspective, which holds that OSH risks are efficiently negotiated between workers and employers, is at odds with nearly everything we know about how labor markets really work. It cannot accommodate the reality of good and bad jobs, workplace authority based on the threat of dismissal, discrimination, and the pervasive role of public regulation in defining what employment entails and what obligations it imposes. It also fails to acknowledge the social and psychological dimensions of work, which are particularly important in understanding how people perceive and respond to risk.
Here are some basic facts as a starting point. The share of published economics research papers that is solo-authored is shown by the red dashed line, measured on the right-hand axis. The top figure shows all research journals; the bottom figure shows the “top five” highly prominent research journals. As you can see, almost all economic research was single-authored in 1950. Now, it’s down to around 20%. The solid blue line shows the average number of authors per research paper. Back in 1950, it was around 1.1 or 1.2–say, out of every five papers, four were single-authored and the fifth one had two authors. Now, it’s up around 2.5 authors/paper–thus, papers with two or three authors are common, and papers with even more authors are not uncommon.
Jones slices and dices this data in various ways. For example, it turns out that there is also a steady trend toward papers with more authors being more heavily cited. Jones defines a “home run” paper as one that ranks among the most-cited papers published in that year.
Teams have a growing impact advantage. In addition, this growing advantage is stronger when one looks at higher thresholds of impact. From the 1950s through the 1970s, a team-authored paper was 1.5 to 1.7 times more likely to become a home-run than a solo-authored paper, with the modest variation depending on the impact threshold. By 2010, the home-run rate for team-authorship was at least 3.0 times larger than for solo-authorship. From the 1980s onward, the team-impact advantage is increasing as the impact threshold rises. By 2010, team-authored papers are 3.0 times more likely to reach the top 10 percent of citations, 3.3 times more likely to reach the top 5 percent of citations, and 4.1 times more likely to reach the top 1 percent of citations than solo-authored papers.
Moreover, the growth of team-based research and its greater success seem to be happening in every subfield of economics research. It’s also happening in other social sciences, and it already happened several decades ago in engineering and the hard sciences.
The great strength of team-based research is that in a world where knowledge has become vastly broader, teams are a way to deploy input from those with different specialties. These combinations of research insights from differing areas are also more likely to become the kinds of innovative papers that are widely cited in the future. Jones offers some striking comparisons here:
To put some empirical content around this conceptual perspective, consider that John Harvard’s collection of approximately 400 books was considered a leading collection of his time, and its bequest in 1638, along with small funds for buildings, helped earn him the naming right to Harvard College (Morrison 1936). One hundred seventy-five years later, Thomas Jefferson’s renowned library of 6,487 books formed the basis for the US Library of Congress. That library’s collection had risen to 55,000 books by 1851 (Cole 1996). Today, the US Library of Congress holds 39 million books (as described in https://www.loc.gov/about/general-information). Looking instead at journal articles, the flow rate of new papers grows at 3–4 percent per year. In 2018, peer-reviewed, English-language journals published three million new papers (Johnson, Watkinson, and Mabe 2018). In total, the Web of Science™ now indexes 53 million articles from science journals and another 9 million articles from social science journals (as described at https://clarivate.com/webofsciencegroup/solutions/web-of-science). In economics alone, the Microsoft Academic Graph counts 30,100 economic journal articles published in the year 2000. This publication rate was twice what it was in 1982 and half what it is today. …
The organizational implication—teamwork—then follows naturally as a means to aggregate expert knowledge. In the history of aviation, for example, the Wright brothers designed, built, and flew the first heavier-than-air aircraft in 1903. This pair of individuals successfully embraced and advanced extant scientific and engineering knowledge. Today, by contrast, the design and manufacture of airplanes calls on a vast store of accumulated knowledge and engages large teams of specialists; today, 30 different engineering specialties are required to design and produce the aircraft’s jet engines alone.
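The growth rates quoted above are mutually consistent, and a quick compounding sketch shows why. All numbers below come from the quoted passage; the only assumption added is treating growth as a constant compound rate.

```python
import math

def doubling_time(annual_growth_rate):
    """Years for a quantity growing at a constant compound rate to double."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# The passage cites 3-4 percent annual growth in new papers, which
# implies the stock of papers doubles roughly every 18 to 23 years.
low, high = doubling_time(0.04), doubling_time(0.03)
print(f"Doubling time at 3-4% growth: {low:.1f} to {high:.1f} years")

# Cross-check with the Microsoft Academic Graph numbers in the passage:
# 30,100 economics articles in 2000, said to be twice the 1982 rate --
# an 18-year doubling, which backs out to about 3.9% annual growth.
implied_1982_to_2000 = (30100 / 15050) ** (1 / 18) - 1
print(f"Implied growth, 1982-2000: {implied_1982_to_2000:.1%}")
```

The same 20-year-ish doubling time also fits the claim that the 2000 publication rate was half of today's.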
Even if the growth in co-authorship is overall beneficial, perhaps even inevitable, it raises a number of concerns and issues. In no particular order:
1) Most of the time, it is individuals who get hired and promoted and given tenure–not teams. Thus, those who do the hiring and promotion and tenuring need to figure out how to attribute credit within a team. Who in the team is more or less deserving of career advancement?
2) The issue of assigning credit is not only difficult in itself, but raises a possibility of bias. There’s some evidence that women economists tend to receive less credit for co-authored work than male economists.
3) If teams are more important, then how teams are formed matters, which raises issues of its own. For an individual researcher, what is the best strategy for knowing when to join a team and when to back away? Research teams in economics are likely to be fluid and shifting from project to project. Academic and personal networks will influence who knows who, and what collaborations are more or less likely to arise, and who is likely to be left out.
4) Along with the teams of named co-authors, team-based work may also be supported by institutions. Colleges and universities with more resources for research assistants, data access, computing power, travel budgets, and sabbaticals will also be able to offer more support for teams. Across the universe of US colleges and universities, there has always been unequal access to such support, but the rising importance of teams could give greater bite to these inequalities.
5) The people who enter PhD programs and eventually become research economists are not typically trained to work as team members. They have passed a lot of exams. But communication and working together in teams may not have played much of a role in their earlier development. (Indeed, a cynic might say that people may have a tendency to become professors because they aren’t especially good at being team players.) There has not traditionally been formal training for research economists in managing even a small team, much less in larger management tasks like handling budgets or overseeing a human resources department: some researchers will find that they can build such skills on their own while others will fail, sometimes egregiously, and the members of their team will suffer as a result.
6) Research papers in economics have tripled in length in the last few decades. Although there are a number of reasons behind this shift, it seems plausible that groups of co-authors–all willing and able to add to the paper in their own way–are part of the underlying dynamic. As an editor at an academic journal myself, I have on occasion wondered if all the co-authors of a paper are taking full responsibility for everything in the paper, or alternatively, if each co-author is focused on their own material, and no author is really taking responsibility for a clear introduction, internal structure, and conclusions.
There’s an often-told story about why economies go through cycles of boom and bust that goes like this. In good economic times, there is lots of lending and borrowing. Indeed, this credit boom helps provide the force that keeps the good times going. But although few people focus on this fact during the good times, the credit boom involves a larger and larger share of somewhat risky loans–loans that are less and less likely to get paid off when a negative shock hits the economy and times turn bad. When that negative shock inevitably hits, the economy moves very rapidly from a credit boom situation, where it’s easy to borrow, to a credit bust situation, where it’s much harder. Lots of firms were counting on new loans to help pay off their own loans, and those new loans aren’t available. Again, the negative situation feeds on itself, with the sharp decline in credit and economic buying power helping to prolong the recession.
I want to emphasize two sets of stylized facts. The first, which has become increasingly well-known and widely accepted in recent years, is that if one looks at quantity data that captures the growth of aggregate credit, then at relatively low frequencies rapid growth in credit tends to portend adverse macroeconomic outcomes, be it a financial crisis or some kind of more modest slowdown in activity. Second, and perhaps less familiar, is that elevated credit-market sentiment also tends to carry negative information about future economic growth, above and beyond that impounded in credit-quantity variables. … One interpretation of this pattern is that when sentiment is high, there is an increased risk of disappointing over-optimistic investors. And when investors are disappointed, this tends to lead to a sharp reversal in credit conditions that corresponds to an inward shift in credit supply, which in turn exerts a contractionary effect on economic activity. So again, the overall picture is that credit booms, especially those associated not just with rapid increases in the quantity of credit, but also with exuberant sentiment—i.e., aggressive pricing of credit risk—tend to end badly.
This process doesn’t operate on a schedule or like clockwork. An economy proceeding into a credit boom becomes increasingly vulnerable. But it typically takes some additional trigger for that vulnerability to turn into recession.
Stein discussed the various models economists have used to look at this process. For example, one set of models is built on the idea that actors in a growing economy can become irrationally exuberant, and start to neglect downside risks. In other models, the lenders and borrowers in a credit boom are strictly rational, but focus only on their own risks. They don’t take into account that their expansion of credit is contributing to greater economic vulnerability for the economy as a whole; these “externalities in leverage” mean that credit will grow faster than the socially optimal amount. Although researchers sweat blood over the differences between these approaches, there’s no reason they can’t both be true.
So how might a government that is aware of this history of credit cycles respond? In one approach, sometimes called “macroprudential regulation,” government would tighten and loosen financial regulations to counterbalance the risks of credit boom and bust. For example, banks could be required to hold more capital as credit levels rise in an economy. Many countries change their regulations about how easy it is to get a home mortgage. Stein writes that “while a number of countries have implemented time-varying loan-to-value or debt-to-income requirements on home mortgage loans, we have not seen anything similar in the USA, and it does not appear that we are likely to anytime in the near future.” But this approach has limitations, too. As Stein points out, “as far as the USA is concerned, regulators appear to have little in the way of operational, time-varying macroprudential tools at their disposal.” In addition, adjusting macroprudential regulations might affect the actions of banks and homebuyers, but the financial system has all sorts of ways of expanding credit that will be much less affected by those kinds of regulations. Stein writes:
[I]t is useful to think about the rapid growth in recent years of the corporate bond market and the leveraged loan market. And bear in mind that some of this growth may be explained by lending to large and medium-sized firms migrating away from the banking sector as capital requirements there have gone up. Leveraged loan issuance in particular has been booming of late; these are loans that are typically structured and syndicated by banks but most often wind up on the balance sheet of other investors, be they collateralized loan obligations (CLOs), pension funds, insurance companies, or mutual funds.
I’ve written in the last few years before the pandemic about expansions in corporate bond markets and in leveraged loan markets (for example, here, here, here and here). So what else might be done? Stein suggests that monetary policy might want to keep an eye on credit cycles, and perhaps lean into them a little. Thus, if the economy was doing well and the Fed was wondering about how soon and how much to raise interest rates, it might act a little more quickly if a credit boom seemed well underway, but otherwise act more slowly. But as Stein points out, the traditional view of central banking has been that “monetary policy should focus on its traditional inflation-employment mandate and should leave matters of financial stability to regulatory tools.” In response to this traditional view, Stein writes:
To be clear, I think this view is almost certainly right in a world where financial regulation is highly effective. However, for the reasons outlined above, I am inclined to be more skeptical with respect to this premise … at least in the current US context. This is of course not to say that we should not make every possible effort to improve our regulatory apparatus so as to mitigate its existing weaknesses. But taking the world as it exists today, I am more pessimistic that we can expect financial regulation to satisfactorily address the booms and busts created by the credit cycle entirely on its own. This would seem to leave open the possibility of a role for monetary policy—albeit a second-best one— in attending to the credit cycle.
Imagine that an entrepreneur who is running a promising business wants to change over from being a privately held company to being a publicly owned company–that is, to get an infusion of money in exchange for becoming accountable to shareholders. How might this be done?
There have traditionally been two main choices. One option is to have an “initial public offering”–that is, to create stock and sell it to the public. The other option is for the entrepreneur to sell the company to an established firm, thus becoming accountable to the shareholders of that firm. But in the last year or so, a new option has emerged called the SPAC, which stands for “special purpose acquisition company.”
The Knowledge@Wharton website recently published “Why SPACs Are Booming” (May 2, 2021), which is a short descriptive overview of a one-hour video presentation called “Understanding SPACs,” in which “Wharton finance professors Nikolai Roussanov and Itamar Drechsler explained how SPACs work and their pros and cons for investors.” Another useful overview is a paper just called “SPACs,” by Minmo Gahng, Jay R. Ritter, and Donghang Zhang (working paper at the SSRN website, last revised March 2, 2021). Let’s run through the basic questions here: how does it work, how many are there, why do it, and should investors be worried?
Here’s a figure from the Wharton presentation showing the SPAC process.
The first step is to form a SPAC. This is sometimes called a “blank check” company. It is a publicly-listed company–that is, it raises money by having its own initial public offering in which it sells shares to investors–but at the start the SPAC doesn’t own anything. The company does not have to identify in advance what it plans to do with its money. Presumably, investors buy stock in such a company based on the reputation of those who started it. As the Wharton write-up explains:
From the time a SPAC lists and raises money through an IPO, it has 18 to 24 months to find a private operating company to merge with. If a SPAC can’t find an acquisition target in the given time, it liquidates and returns the IPO proceeds to investors, who could be private equity funds or the general public.
When the SPAC finds a target company, it often seeks out some additional investors known as PIPEs, for “private investment in public equity.” If the SPAC fails to merge with the target firm, then investors get their money back. If the SPAC does merge with the target firm, then the owners of the target firm get a payoff and that target firm now has a set of stockholders.
SPACs have taken off lately. Here’s a figure from the Gahng, Ritter, and Zhang paper, where the blue dots (left axis) show the number of SPACs and the gray bars (right axis) show the dollar value of the initial public offerings used to form these SPACs. As you can see, SPACs are not brand-new–they have been around for a few years–but the number and volume were gradually rising up through 2019 before taking off in 2020.
Why was the number of SPACs on the rise in 2020? One simple reason is that with stock prices high, more companies are trying to find ways to cash in. Another reason involves the regulation of initial public offerings. Specifically, a firm going through an IPO is only allowed to describe its past historical performance, and is forbidden from making forecasts of future earnings. Obviously, this tends to favor somewhat established firms, and to rule out young start-up companies, especially those with little history and little revenue. For such firms, the time, energy, cost, and regulatory requirements of a traditional IPO can outweigh the benefits. A firm being purchased by a SPAC can make forecasts of future earnings, and the entire process can happen in a couple of months. Similarly, if you are an outside investor who would like to own a diversified portfolio of young start-up companies, hoping that a few of them will hit it big, investing in SPACs gives you the opportunity to do that without having inside connections to venture capitalists, angel investors, or private equity firms. For a firm thinking about being merged into a SPAC, one main disincentive is that the sponsor of the SPAC typically takes 20% of the value of the original firm as its compensation. This does give the sponsor of the SPAC a strong incentive to remain involved and to help shepherd the firm toward growth and profitability. But the target firm is in effect giving up 20% of the value of the firm in exchange for the cash infusion.
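To make the 20 percent “promote” concrete, here is a back-of-the-envelope sketch. The $500 million firm value is a hypothetical number, and the flat 20 percent follows the simplified framing above; real promotes are structured as sponsor shares and vary deal by deal.

```python
def sponsor_promote_cost(firm_value, promote_share=0.20):
    """Value captured by the SPAC sponsor under the simplified framing
    where the sponsor takes a fixed share of the target firm's value."""
    sponsor_take = firm_value * promote_share
    retained_by_owners = firm_value - sponsor_take
    return sponsor_take, retained_by_owners

# Hypothetical target firm worth $500 million at the time of the merger.
take, retained = sponsor_promote_cost(500e6)
print(f"Sponsor promote: ${take/1e6:.0f}M; owners retain ${retained/1e6:.0f}M")
# -> Sponsor promote: $100M; owners retain $400M
```

Seen this way, the SPAC route trades a very large effective fee for speed and the ability to pitch investors on forward-looking forecasts.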
What are the potential problems with the SPAC approach? The obvious issue is that an investor in a SPAC is essentially trusting the SPAC to make a smart decision about which firm to merge with, and at what price, and additionally trusting the SPAC management to keep pushing the firm forward after the merger is completed. When retail investors are looking at promises about what might happen with young firms, and displaying some perhaps irrational exuberance, the potential for overpaying and later disappointment is real.
A less obvious issue is how the IPO for a SPAC is structured for investors. The Wharton write-up explains:
Investors in the IPO of a SPAC typically buy what are called units for $10 each. The unit consists of a common share, which is regular stock, and a derivative called a warrant. Warrants are call options and they allow investors to buy additional shares at specified “exercise” prices. After the merger with the shell company, both the shares and the warrants are listed and traded publicly. If some SPAC investors change their minds and do not want to participate in the merger with the shell company, they could redeem their shares and get back the $10 they paid for each. However, they can retain the warrants.
Yes, you read that correctly. When you buy a “unit” in a SPAC IPO, you can sell back the “unit” at the original purchase price and essentially keep the warrant–that is, the option to purchase stock at a locked-in lower price even if the stock price goes up–for free. The economic justification for this is that it provides an incentive for the SPAC sponsor to negotiate a good deal, because if the deal is perceived to be a bad one, the original money raised by the SPAC could evaporate. The warrants can be viewed as compensation for tying up your funds while the SPAC tries to negotiate a merger with a target firm. But this stock-plus-a-warrant-for-free structure is being criticized within the industry and has come under the eagle eye of regulators, and may not last.
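The share-plus-free-warrant arithmetic can be sketched as follows. The $11.50 exercise price is a common convention in recent SPAC deals, used here purely as an illustrative assumption, and the warrant is valued at its payoff if exercised (real units also sometimes contain only a fraction of a warrant).

```python
def redeeming_unit_payoff(stock_price, unit_price=10.0, strike=11.50):
    """Payoff to an IPO investor who redeems the share for the $10 unit
    price and keeps the warrant, here valued at its exercise payoff."""
    warrant_value = max(stock_price - strike, 0.0)  # call-option payoff
    return unit_price + warrant_value

# Downside is protected: redeem at $10 regardless of the stock price...
print(redeeming_unit_payoff(stock_price=5.0))    # 10.0
# ...while the free warrant keeps the upside if the merged firm does well.
print(redeeming_unit_payoff(stock_price=20.0))   # 18.5
```

This heads-I-win, tails-I-get-my-money-back structure is why the original IPO investors have done so much better than later entrants.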
For investors, the past record of SPACs looks good if you are part of the original IPO–that is, one of the people giving the SPAC a blank check–but not especially good if you are buying in at the “private investment in public equity” stage. The Gahng, Ritter, and Zhang team reports using data from 2010 up through May 2018: “While SPAC IPO investors have earned 9.3% per year, returns for investors in merged companies are more complex. Depending on weighting methods, they have earned -4.0% to -15.6% in the first year on common shares but 15.6% to 44.3% on warrants.”
SPACs seem here to stay, in some form, unless the initial public offering rules are revised in a way that works better for young companies without a clear history of revenue growth. But they have now come under regulatory scrutiny. On April 8, John Coates at the Securities and Exchange Commission made a statement on “SPACs, IPOs and Liability Risk under the Securities Laws.” He began (footnotes omitted):
Over the past six months, the U.S. securities markets have seen an unprecedented surge in the use and popularity of Special Purpose Acquisition Companies (or SPACs). Shareholder advocates – as well as business journalists and legal and banking practitioners, and even SPAC enthusiasts themselves – are sounding alarms about the surge. Concerns include risks from fees, conflicts, and sponsor compensation, from celebrity sponsorship and the potential for retail participation drawn by baseless hype, and the sheer amount of capital pouring into the SPACs, each of which is designed to hunt for a private target to take public. With the unprecedented surge has come unprecedented scrutiny, and new issues with both standard and innovative SPAC structures keep surfacing.
N2O, also known as laughing gas, does not get nearly the attention it deserves, says David Kanter, a nutrient pollution researcher at New York University and vice chair of the International Nitrogen Initiative, an organization focused on nitrogen pollution research and policy making. “It’s a forgotten greenhouse gas,” he says. Yet molecule for molecule, N2O is about 300 times as potent as carbon dioxide at heating the atmosphere. And like CO2, it is long-lived, spending an average of 114 years in the sky before disintegrating. It also depletes the ozone layer. In all, the climate impact of laughing gas is no joke. IPCC scientists have estimated that nitrous oxide comprises roughly 6 percent of greenhouse gas emissions, and about three-quarters of those N2O emissions come from agriculture.
Global human-induced [N2O] emissions, which are dominated by nitrogen additions to croplands, increased by 30% over the past four decades … This increase was mainly responsible for the growth in the atmospheric burden. Our findings point to growing N2O emissions in emerging economies—particularly Brazil, China and India. … The recent growth in N2O emissions exceeds some of the highest projected emission scenarios, underscoring the urgency to mitigate N2O emissions.
Yes, carbon dioxide is by far the most important greenhouse gas, with methane running second, and nitrous oxide third. These figures from the EPA illustrate greenhouse gas emissions for the US and for the world.
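The “300 times as potent” figure is a global warming potential (GWP), and converting a quantity of N2O into CO2-equivalents, the units used in emissions figures like the EPA’s, is a simple multiplication. The emission quantity below is hypothetical; the factor of 300 is the approximate GWP quoted above.

```python
N2O_GWP_100YR = 300  # approximate 100-year global warming potential of N2O

def co2_equivalent(n2o_tonnes, gwp=N2O_GWP_100YR):
    """Convert tonnes of N2O emitted to tonnes of CO2-equivalent."""
    return n2o_tonnes * gwp

# A hypothetical farm emitting 10 tonnes of N2O per year has roughly
# the same 100-year warming impact as 3,000 tonnes of CO2.
print(co2_equivalent(10))  # 3000
```

This multiplier is why a gas that makes up a tiny fraction of emissions by mass can still account for roughly 6 percent of total greenhouse warming.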
But that said, the climate change policy agenda needs to be multidimensional, addressing the issue from many different angles. In addition, with a need to expand agricultural productivity and output in many countries around the world, increased applications of fertilizer seem nearly certain, unless there is a concerted research-based effort to think about alternative approaches.
When economists study taxation, they typically separate two issues: one is the distributional issue of which groups are paying more or less; the other is the ways in which taxes reduce efficient economic incentives for work, savings, investment, innovation, and so on. Today is the deadline for when US individual income tax returns are due with the federal Internal Revenue Service as well as with state-level income tax authorities. According to the research of Stefanie Stantcheva, most of those taxpayers focus almost entirely on the distributional question, not the efficiency question.
Consider the example of tax policy. Is it that people have different perceptions about the economic cost of taxes? Is it that they think differently about the distributional impacts that tax changes will have? Or is it that they have very different views of what’s fair and what’s not? Could the reason be their views on the government—how wasteful or efficient they think the government is? Or is it purely a lack of knowledge about how the tax system works and what inequality is?
I think of these factors as my explanatory or right-hand side variables. I can decompose a person’s policy views into these various components. What I find is that for tax policy, a person’s views on fairness, and who’s going to gain and lose from tax changes completely dominates all other concerns. This is followed by a person’s views of the government. How much do they think the government should be doing, how efficient is it, how wasteful is it, how much do they trust it? Efficiency concerns are actually quite second-order in people’s minds when it comes to tax policy.
These are all correlations. To see what’s actually causal and what could be shifting views, I show people these short ECON courses, which are two- or three-minute-long videos which explain how taxes actually work. The videos take different perspectives. They’re neutral and pedagogical: they don’t tell people what taxes should be or what’s fair or not. They just explain how taxes work from one perspective. For instance, one version focuses only on the distributional impacts of taxes – who gains and who loses. The other version focuses only on the efficiency costs. Then there is the economist treatment, which shows both and emphasizes the trade-off between efficiency and equity. One can replicate this approach for the other policies such as health policy or trade or even climate change, which all have efficiency and equity considerations.
What I find for tax policy confirms the correlations. What shifts people’s views most is to see the distributional impacts of taxes, not at all the efficiency consequences of it. Even if you put it together and emphasize the trade-off, it’s still the distributional considerations that dominate and outweigh the efficiency concerns.
Distributional concerns about taxes matter to me, as well! But even if you generally agree on the idea that taxes should weigh more heavily on those with higher incomes or greater wealth, that alone doesn’t help to distinguish between the different ways this might be done.
For example, one might have higher marginal tax rates on those with higher income levels. One might reduce the value of tax deductions, like deductions for mortgage interest or state and local taxes, that tend to benefit those with high incomes more. One might insist that taxes be paid on currently untaxed fringe benefits, like employer-purchased health insurance, because exempting those benefits from income tax will provide greater benefit to those with higher incomes. One might want to alter rules that let people make tax-free contributions to retirement accounts, on the grounds that reducing taxes in this way will tend to benefit those with higher incomes. One might think about expansion of “refundable” tax provisions that help the working poor, like the Earned Income Tax Credit and the child tax credit. One might alter corporate taxes, on the theory that this would affect shareholders and top managers more than it will affect wages paid to average employees. One might alter the way in which capital gains are taxed, and one might want to distinguish between capital gains on owner-occupied housing, on family businesses, or on financial assets. One might alter the rules that let high-income people pass wealth to future generations. For example, under current rules, when financial assets that have gained in value over the lifetime of the owner are passed through an estate, those previous gains are not taxed. One might want to change other rules on what assets can be passed to the next generation, from other aspects of the estate tax to rules about intergenerational giving, along with rules about using life insurance policies or charitable foundations to pass income between generations.
For those readers who are sunk deepest into distributional thinking, I suspect the honest response to this list is something like: “I’m against anything that would raise my taxes by a single dime, but I’m for anything that would only be paid by high-income, high-wealth individuals, and I don’t care about how it affects their incentives.” Of course, in a US economy where government debt is ascending to unprecedented levels even before we try to address the medium-term projected insolvency of Social Security and Medicare, that response is just an abdication of analysis.