Some Economics of Preventive Health Care

There’s an old dream about preventive health care, which I still hear from time to time. The hope is that by expanding the use of relatively cheap preventive care, our society could prevent some more extreme health conditions and catch and treat others early, in a way that offers a double prize: it could conceivably both improve health and reduce total health care costs. But this happy outcome, while it may hold true in a few cases, is probably the wrong way to think about the economics of preventive medicine.

Joseph P. Newhouse addresses these questions in “An Ounce of Prevention” (Journal of Economic Perspectives, Spring 2021, 35:2, 101-18). By his estimate, only about 20% of preventive measures both improve health and save money. But when you think about it, most medical care doesn’t save money: instead, it costs something for the benefit of improving health. In the same way, a wide array of preventive care can be worth doing because it improves health, even if it does not (on average) save money. Newhouse writes (citations omitted):

Vaccination is a well-known example of a measure that improves health and reduces cost. It is typically inexpensive, causes few adverse events, and can confer immunity for many years. The development of the polio vaccine, for example, was one of the great public health triumphs of the 20th century. In the late 1940s, polio crippled 35,000 Americans annually; because of vaccination, it was eradicated in the United States in 1979. Vaccination also differs from many other preventive measures because of the external benefit it confers on the unvaccinated (“herd immunity”). Another example of a preventive measure that saves money and improves health is a “polypill”—a single pill with several active ingredients for secondary prevention of heart disease versus single prescriptions for various agents.

The remaining 80 percent of preventive measures do not save money. The majority of all preventive measures—about 60 percent of them—provide health benefits at a cost of less than $100,000/QALY (2006 dollars). Another 10 percent of measures cost between $100,000 and $1,000,000 per QALY; those measures with costs near the lower end of this range might pass the common rules of thumb of cost-effectiveness … The remaining 10 percent of preventive measures studied in the literature either worsen expected health or, if they improve it, cost more than $1,000,000 per QALY.

(For the uninitiated, “QALY” stands for “quality-adjusted life year,” which is a way of measuring improvements in health. In this measure, a year in perfect health counts as 1, but gaining a year of impaired health counts as less than 1. For an overview, see “What’s the Value of a QALY?“)
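To make the cost-per-QALY arithmetic concrete, here is a minimal sketch with invented numbers (none of them from Newhouse's paper): a measure's cost-effectiveness ratio is its incremental cost divided by the quality-adjusted life years it gains.

```python
# Illustrative only: all numbers are hypothetical, not from Newhouse's data.

def cost_per_qaly(extra_cost, years_gained, quality_weight):
    """Incremental cost divided by quality-adjusted life years gained."""
    qalys = years_gained * quality_weight
    return extra_cost / qalys

# A hypothetical preventive measure costing $12,000 per person that adds
# two years of life lived at quality weight 0.8 (impaired health):
ratio = cost_per_qaly(extra_cost=12_000, years_gained=2, quality_weight=0.8)
print(f"${ratio:,.0f} per QALY")  # $7,500 per QALY
```

A ratio like this is what gets compared against the rough $100,000/QALY rule of thumb mentioned above.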

Moreover, even for a particular preventive measure, it will often be true that the potential health payoff of screening some people is higher than for others. For example, if one group of people has a genetic predisposition or certain behavioral factors that make certain health conditions more likely, screening that group is more likely to pay off with health gains. It’s quite possible to have situations where universal screening of all ages and groups is not a cost-effective method of improving health, but screening higher-risk groups might make sense.

Newhouse also emphasizes that it’s useful to think about “preventive care” as meaning more than just medical interventions. For example, steps to reduce smoking and consumption of alcohol, or to encourage exercise, or to make sure that babies and small children have good nutrition, can have large payoffs.

In addition, many of the steps used to address chronic health conditions are usefully thought of as “preventive care,” like taking medications for high blood pressure. Indeed, one of the major shifts over the last century or so in US health patterns is that back in 1900, infectious diseases were the major cause of death. Today, after dramatic improvements in vaccinations and public health conditions, chronic diseases are the main causes of death. A “chronic” health condition can be loosely defined as one where, if you take your meds and follow the recommendations about what you consume, you can live pretty much a normal life, but otherwise, you have a good chance of ending up with a sharp decline in health and a costly period of hospitalization. In the US, those with three or more chronic health conditions account for 61% of all health care spending. Thus, preventive measures (both medical and non-medical) that keep chronic conditions from turning into something worse hold considerable promise for reducing health care costs. Here’s a table from Newhouse:

Newhouse also discusses the incentives for innovators to develop methods of preventive care vs. developing new treatments. He argues that clinical trials are often much faster for treatments: for example, think about a firm trying to test whether a treatment extends the life of existing cancer patients vs. a firm trying to test whether a preventive treatment will reduce the long-run risk of a certain cancer occurring in the first place. In addition, a firm thinking about developing a preventive care approach must be concerned that many who are low-risk, or view themselves as low-risk, won’t use the preventive care. However, if a firm develops a treatment for those who already have the health condition, the chances of high demand for the product are much better.

A companion paper to the Newhouse essay in the same issue of the JEP looks at the controversial issue of mammograms: Amanda E. Kowalski, “Mammograms and Mortality: How Has the Evidence Evolved?” (Journal of Economic Perspectives, Spring 2021, 35:2, 119-40). There have been substantial controversies over the years about the recommended ages at which women should start and stop getting regular mammograms. For example, prior to 2009, the US Preventive Services Task Force recommended regular mammographies for women 40 and over. However, the current guidance is regular mammographies for women 50-74, while leaving the decision up to women and their doctors for those outside that age range.
Why not just have universal mammograms for women of all ages? Sure, there’s some cost, but “better safe than sorry” and “more knowledge can only be a good thing,” right? As Kowalski points out, it’s not that simple. She writes (citations omitted):

The rationale for widespread mammography is that early detection of potentially fatal breast cancers enables earlier and more effective treatment. But there is a potential drawback: mammography can detect some early-stage cancers that will never progress to cause symptoms—a phenomenon often referred to as overdiagnosis. In such cases, the emotional, financial, and physical costs of a cancer diagnosis and any subsequent treatments occur without any corresponding health benefit. Because it is hard to tell which women will be harmed by their cancers, there is a tendency to treat all women as if their cancers will be lethal. Even if the initial cancer would have never proven life-threatening, exposure to chemotherapy, radiotherapy, and surgery can potentially lead to new conditions, even to new fatal cancers …

Just to be clear, “overdiagnosis” is not what is known as a false positive–that is, a screening that finds something that isn’t there. Instead, “overdiagnosis” is finding something that is indeed there, but would not have caused a health problem. As Kowalski points out, a standard example is prostate cancer: “autopsy studies showing that almost half of older men die with, but not necessarily of, prostate cancer have been important to prostate cancer screening guidelines since the late 1980s.”
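The distinction between false positives and overdiagnosis can be made concrete with a toy cohort (all numbers below are invented for illustration and are not from Kowalski's paper):

```python
# Hypothetical cohort of 100,000 screened women; every count is invented
# to illustrate the distinction, not taken from any actual study.
screened = 100_000
true_cancers = 800          # cancers actually present and detected
false_positives = 5_000     # screening flags something that isn't there
overdiagnosed = 200         # real cancers that would never cause symptoms

# A false positive is typically resolved by follow-up testing; an
# overdiagnosed cancer is real, so it tends to lead to treatment that
# carries costs and risks but no health benefit.
progressive_cancers = true_cancers - overdiagnosed
print(f"Detected cancers that benefit from treatment: {progressive_cancers}")
print(f"Overdiagnosis share of detected cancers: {overdiagnosed / true_cancers:.0%}")
```

In this invented example, a quarter of detected cancers would never have harmed the patient, yet without a way to tell them apart, all 800 would likely be treated.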
In practice, how big a problem is overdiagnosis from mammograms? There’s some controversy over this evidence, but Kowalski makes the case that if the policy is to screen 100% of women in certain age ranges, the evidence (from randomized control trials done in Canada) is that over the long run, being randomly selected into the mammography group does not lead to an improvement in health on average–and may even be counterproductive. She writes that while most high-income countries still recommend regular mammography for asymptomatic women in their 50s and 60s, skepticism seems to be growing:

Canadian national guidelines “recommend not screening” with mammography for women aged 40 to 49 but “recommend screening with mammography” for women aged 50 to 74. … Many other high-income countries, including Australia, France, Switzerland, and the United Kingdom, do not recommend mammography for women in their 40s, and they also do not recommend against it as Canadian guidelines do. However, the Swiss Medical Board recommended steps to limit screening programs in 2014. In 2016, the French Minister of Health released results of an independent review that recommended that the national screening program end or undergo radical reforms.

Her recommendations are for additional research into understanding the characteristics–other than age–that are likely to make mammography beneficial for a given woman. In addition, when a mammogram does find cancer, it may in some cases be wise to reduce or postpone the use of the most aggressive possible treatments.

Some Economics of James Buchanan

The Fraser Institute has been publishing an “Essential Scholars” series of short books that provide an overview of the work of prominent thinkers, including John Locke, David Hume, and Adam Smith from the past, and Friedrich Hayek, Joseph Schumpeter, and Robert Nozick from more recent times. The books seek to explain some main themes of these writers in straightforward, nontechnical language. In the most recent contribution, Donald J. Boudreaux and Randall G. Holcombe have written The Essential James Buchanan. The website even includes several 2-3 minute cartoon videos, if you need a little help in spotting the main themes.

Buchanan won the Nobel prize in 1986 “for his development of the contractual and constitutional bases for the theory of economic and political decision-making.” Boudreaux and Holcombe argue that for Buchanan, these themes emerge from a perspective in which group decisions–whether by governments or clubs or religious organizations–must always be traced back to what form of agreement was reached by members of the group. In describing Buchanan’s view, they write:

[B]ecause neither the state nor society is a singular and sentient creature, a great deal of analytical and policy confusion is spawned by treating them as such. Collections of individuals cannot be fused or aggregated together into a super-individual about whom economists and political philosophers can usefully theorize in the same ways that they theorize about actual flesh-and-blood individuals. Two or more people might share a common interest and they might—indeed, often do—join forces to pursue that common interest. But two or more people are never akin to a single sentient individual. A collection of individuals, as such, has no preferences of the sort that are had by an actual individual. A collection of individuals, as such, experiences no gains or pains; it reaps no benefits and incurs no costs. A collection of individuals, as such, makes no choices. …


Buchanan called such aggregative thinking the “organismic” notion of collectives—that is, the collective as organism. From the very start, nearly all of Buchanan’s lifetime work was devoted to replacing the organismic approach with the individualistic one—a way of doing economics and political science that insists that choices are made, and costs and benefits are experienced, only by individuals.

Buchanan took this distinction so seriously that, as I’ll discuss below, he proposed renaming the field of economics to highlight it. When thinking about people coming together to take joint actions, whether they are buying and selling in a market, or starting a company, or operating together through government, Buchanan insisted on viewing the process not as actions taken by “the government,” but rather as the outcome of negotiations by groups of individuals. Boudreaux and Holcombe write:

Buchanan’s fiscal-exchange model of government depicts government as an organization through which individuals come together collectively to produce goods and services they cannot easily acquire through market exchange. Just as individuals trade in markets for their mutual benefit, government facilitates the ability of individuals to engage in collective exchange for the benefit of everyone. This fiscal-exchange model is an ideal, of course; Buchanan was well aware of the possibility that those who exercise government power can and often do abuse it for their own benefit at the expense of others. Much of his work was devoted to understanding how government can be constrained in order to keep this abuse to a minimum. When those constraints are effective, collective action through government can further everyone’s well-being. The fiscal-exchange model is based on the idea that taxes are the price citizens pay for government goods and services. And just like prices in the marketplace, the value of the goods and services government supplies should exceed the prices citizens pay, in the form of taxes, for these goods and services. … 

“[W]hen analyzing the groups that individuals form when they come together to pursue collective outcomes, Buchanan insisted that close attention be paid to the details of how these individuals constitute themselves as a group—and most especially, to the decision-making procedures they choose for their group.” Here are some examples.

Buchanan was a big supporter of federalism: that is, the idea that government responsibilities should be divided up into local, state, national, and perhaps other intermediate levels. “Buchanan refers to federalism as ‘an ideal political order’ with several advantages … Federalism offers citizens more choice, because citizens can choose among jurisdictions,” while “governments at the same level in a federal system thus each have stronger incentives to provide a mix and pricing of public goods that is attractive to large numbers of people. …” In addition, “federalism can encourage governments at different levels to police each other.”
One of Buchanan’s main policy concerns was that governments are prone to over-borrowing, because the future generations that would need to repay the debts are not well-represented in current discussions about the extent of borrowing. Boudreaux and Holcombe write:

This ability of current taxpayers to use debt financing to free-ride on the wealth of future generations led Buchanan to worry that government today will both spend excessively and fund too many projects with debt. Future citizen-taxpayers, after all, are not today’s voters. Thus, the interests of these future generations are under-represented in the political process. To reduce the magnitude of this problem, Buchanan endorsed constitutional rules that oblige governments to annually keep their budgets in balance. His fear that the opportunity for debt financing of government projects and programs would be abused was so acute that it led him to endorse a balanced-budget amendment to the US Constitution. His participation in a political effort to secure such an amendment is one of the very few specific, ground-level policy battles that he actively joined.

As one more example, Buchanan wrote an article for the first issue, in Summer 1987, of the Journal of Economic Perspectives, where I work as Managing Editor, as part of a symposium on the Tax Reform Act of 1986 (“Tax Reform as Political Choice,” Journal of Economic Perspectives, 1:1, 29-35). For those unfamiliar with the bill, the general thrust of TRA86 was to broaden the tax base by closing or limiting various tax deductions and exemptions, and then to reduce marginal tax rates in a roughly revenue-neutral manner. This advice to broaden the tax base and reduce marginal tax rates is pretty standard, year in and year out, among mainstream public finance economists. But what made it possible for such legislation to actually be enacted in 1986?


Buchanan suggested that there is a cycle to tax policy. Say that you start off in a situation with a broad tax base and few loopholes. Over time, politicians and special interests will carve out a series of tax breaks. But every time they reduce the base of income that is taxed, they will be forced to raise marginal tax rates as well to garner the same amount of revenue. At some point, Buchanan argued, marginal tax rates have become so high that a countermovement forms. Essentially, the countermovement is willing to give up some tax loopholes of its own, as long as many other parties also need to give up their tax loopholes, in exchange for lower tax rates. As soon as this bargain is enacted into law, as in 1986, the political business of carving out loopholes begins all over again. 
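The mechanical pressure behind this cycle is simple arithmetic, which can be sketched with invented figures (none of them from Buchanan or from TRA86): holding revenue fixed, the required tax rate is revenue divided by the taxable base, so every loophole that narrows the base pushes the rate up.

```python
# Toy arithmetic behind the tax-policy cycle; all figures are invented
# for illustration (think of them as $billions).

def revenue_neutral_rate(revenue_target, tax_base):
    """Flat rate needed to raise a fixed revenue from a given taxable base."""
    return revenue_target / tax_base

revenue = 1_000          # revenue the government wants to raise
broad_base = 5_000       # taxable income with few loopholes
narrowed_base = 2_500    # half the base carved away by exemptions

print(f"Broad base rate needed: {revenue_neutral_rate(revenue, broad_base):.0%}")
print(f"Narrowed base rate needed: {revenue_neutral_rate(revenue, narrowed_base):.0%}")
```

In this invented example, carving away half the base doubles the required rate, which is the rising pressure that Buchanan argued eventually triggers the countermovement.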

Thus, Buchanan did not view public policy as an attempt to reach a higher level of social welfare or a more efficient allocation of resources. These kinds of goals would be what Buchanan disparagingly called “organismic.” Instead, Boudreaux and Holcombe describe Buchanan’s view of the political process in this way:

Economic and political outcomes are compromises among people with legitimate differences in their preferences. These outcomes can never be correct or incorrect in the same way that an answer to the question “What is the speed of light?” is correct or incorrect. The correct answer to the question about the speed of light is not a compromise among different answers offered by different physicists—the speed of light is what it is, objectively, regardless of physicists’ estimates of it. But the “correct” allocation of resources and “correct” level of protection of free speech are indeed nothing more than the compromises that emerge from the economic and political bargaining of many individuals, each with different preferences. In short, said Buchanan, politics is about finding peaceful agreements among people with different preferences on collective outcomes. Politics, unlike science, is not about making “truth judgments.” The challenge is to discover and use the set of rules that best promotes the making of compromises among people with different preferences. Legitimate scientific inquiry and judgment can play a role in assessing how well or poorly some existing or proposed set of rules will serve this goal. Even here, though, Buchanan warned that people’s differences in fundamental values means that there is no universal one “best” set of rules, scientifically discoverable, for all peoples and for all times. In the end, the best set of rules is that which wins the unanimous approval of the people who will live under it.

Notice here that the unanimous approval will not be for the outcomes of decisions by organizations. People will disagree over outcomes. Instead, Buchanan is suggesting that we might agree to a set of rules, and we might be willing to be coerced under those rules. As Boudreaux and Holcombe describe it:

In this situation, individuals might agree to be forced to pay toward financing the [public] good if everyone else is also forced to pay. Everyone could hold the same opinion, saying they do not want to pay unless everyone is forced to pay, but they would all agree to a policy that forces everyone to pay. People could agree to be coerced. The idea that people could agree to be coerced lies at the foundation of the social-contract theory of the state. Even though there is no actual contract, people would agree to give the state the authority to coerce those who violate its mandates, if everyone was bound to the same contract provisions. According to social-contract theory, because people would agree to be coerced for their own benefit, the exercise of such coercion violates no individual’s rights.

Buchanan extended this individual-based contractual view of organizations beyond government, and beyond  market exchange: 

The point is that exchange possibilities are not confined to the simple bilateral exchanges on which economists traditionally focus nearly all of their attention. When this truth is recognized, many familiar features of the real world are seen in a more revealing light. Clubs, homeowners’ associations, business firms, churches, philanthropic organizations—these and other voluntary associations are arrangements in which individuals choose to interact and exchange with each other in ways more complex than simple, one-off, arm’s length, bilateral exchanges. These “complex” exchange relationships are an important reality for economists to study. But they are more than mere subject matter for research. They are also evidence that human beings who are free to creatively devise and experiment with alternative organizational and contractual arrangements have great capacity to do so. Where the conventional economist sees “market failure,” humans on the spot often see opportunities for mutually advantageous exchange.

Buchanan felt so strongly about this position that in a 1964 essay, he suggested renaming the field of economics (“What Should Economists Do?” Southern Economic Journal, 30:3, pp. 213-222). Boudreaux and Holcombe discuss this essay in their Chapter 10; here, I quote from the 1964 essay. Buchanan argued that the definition of economics is much too identified with the idea of choice. He wrote in 1964:

In one sense, the theory of choice presents a paradox. If the utility function of the choosing agent is fully defined in advance, choice becomes purely mechanical. No “decision,” as such, is required; there is no weighing of alternatives. On the other hand, if the utility function is not wholly defined, choice becomes real, and decisions become unpredictable mental events. If I know what I want, a computer can make all of my choices for me. If I do not know what I want, no possible computer can derive my utility function since it does not really exist.

Rather than basing economics on an idea of utility functions that do not actually exist until they are called into being by people’s choices, Buchanan suggested that economics should instead be focused on the principle of voluntary exchange, and the conditions that people agree to in shaping such exchanges. He wrote:
The theory of choice must be removed from its position of eminence in the economist’s thought processes. The theory of choice, of resource allocation, call it what you will, assumes no special role for the economist, as opposed to any other scientist who examines human behavior. Lest you get overly concerned, however, let me hasten to say that most, if not all, of what now passes muster in the theory of choice will remain even in my ideal manual of instructions. I should emphasize that what I am suggesting is not so much a change in the basic content of what we study, but rather a change in the way we approach our material. I want economists to modify their thought processes, to look at the same phenomena through “another window,” to use Nietzsche’s appropriate metaphor. I want them to concentrate on “exchange” rather than on “choice.” 
The very word “economics,” in and of itself, is partially responsible for some of the intellectual confusion. The “economizing” process leads us to think directly in terms of the theory of choice. I think it was Irving Babbit who said that revolutions begin in dictionaries. Should I have my say, I should propose that we cease, forthwith, to talk about “economics” or “political economy,” although the latter is the much superior term. Were it possible to wipe the slate clean, I should recommend that we take up a wholly different term such as “catallactics,” or “symbiotics.” The second of these would, on balance, be preferred. Symbiotics is defined as the study of the association between dissimilar organisms, and the connotation of the term is that the association is mutually beneficial to all parties. This conveys, more or less precisely, the idea that should be central to our discipline. It draws attention to a unique sort of relationship, that which involves the co-operative association of individuals, one with another, even when individual interests are different. It concentrates on Adam Smith’s “invisible hand,” which so few non-economists properly understand. I am uncertain as to what the practitioners of catallactics or symbiotics would be called. “Catallacticologists?” “Catalysts?” “Symbioticians?” “Symbiotes?” I’m open to suggestions.
If you would like some additional background on Buchanan, two starting points are this 1988 article looking at Buchanan’s contributions just after he won the Nobel prize, and this earlier post of mine from just after Buchanan died in early 2013.

The Shrinking Role of European Companies in the Global Economy

The Economist titled its article on European corporations: “The land that ambition forgot. Europe is now a corporate also-ran. Can it recover its footing?” (June 5, 2021). The article is well worth reading, but here are a couple of snapshots and my own reactions. Notice in particular that these changes are fairly recent. The horizontal axis in these graphs starts only two decades ago. 
The share of EU companies among the largest in the world has been declining: “In 2000 nearly a third of the combined value of the world’s 1,000 biggest listed firms was in Europe, and a quarter of their profits. In just 20 years those figures have fallen by almost half.”

Here’s the EU share of the global economy, and also the stock market capitalization of EU companies compared to global stock market capitalization. The message here is not just that both shares have declined substantially. Notice also that back around 2000 the EU share of the global economy and the share of the EU in global stock market capitalization were roughly the same, but that is no longer true. 

Europe has often been a world leader in drawing up rules and regulations that companies must follow, in areas including digital privacy, environmental protection, use of genetic modification technologies, and so on. However, the EU countries overall, judging by performance, have not been an especially friendly place to start or run a company. The European Union itself remains a fractured economic zone, separated by barriers set up by national governments, as well as by language and cultural differences. The Economist writes that big EU companies have in recent decades preferred to expand their sales and operations overseas, rather than in their home base.
Companies are social mechanisms both for organizing current and future production, and also for planning and investment needed for future innovations in production methods and new products. Europe has a smaller share of these engines of production. 

Why Have Mortality Rates Been Rising for US Working-Age Adults?

The mortality rate for “working age” US adults in the 25-64 age group has been rising. This isn’t a pandemic-related issue, but instead something with roots in the data going back several decades. The National Academies of Sciences, Engineering, and Medicine digs into the underlying patterns and potential explanations in “High and Rising Mortality Rates Among Working-Age Adults” (March 2021, a prepublication copy of uncorrected proofs can be downloaded for free). Their evidence and discussion is mainly focused on the period up through 2017. 

The NAS report compares the US to 16 “peer countries,” which are other countries with a high level of per capita income and well-developed health care systems. (The 16 countries are Australia, Austria, Canada, Denmark, Finland, France, Germany, Italy, Japan, Norway, Portugal, Spain, Sweden, Switzerland, the Netherlands, and the United Kingdom.) The two panels below compare life expectancy going back to 1950 for females and males in the US (red line) and the average of the peer countries (blue line). The almost-invisible gray lines show each of the 16 peer countries separately. 

For US women, life expectancy was slightly above that of the peer group in 1950, but starting around 1980 a divergence began. For US men, life expectancy was similar to the peer group but a divergence also began in the 1980s. For both US men and women, life expectancy seems to have flattened out in the last decade or so. 
The report also does a breakdown of the same data by racial/ethnic status. In this figure, the red line shows only white US females and males. The dotted line shows non-white Hispanics, and is available only for recent years, but it pretty much overlaps the peer group. The  dashed line shows black Americans. There is still a life expectancy gap between white and black Americans, but the gap has generally been declining over time. The levelling out of US life expectancies for the 25-64 age group in the last few decades has been largely a phenomenon affecting white Americans. 

Notice that this comparison is not about infant or child mortality rates, nor is it about life expectancy for the elderly. Indeed, life expectancies for US infants, for children under the age of 10, and for adults who have already reached their 80s are higher than for the peer group of countries. It’s the in-between age group where the difference arises.
As one digs into these patterns more closely, here are some of the details that emerge:

The committee identified three categories of causes of death that were the predominant drivers of trends in working-age mortality over the period: (1) drug poisoning and alcohol-induced causes, a category that also includes mortality due to mental and behavioral disorders, most of which are drug- or alcohol-related; (2) suicide; and (3) cardiometabolic diseases. The first two of these categories comprise causes of death for which mortality increased, while the third encompasses some conditions (e.g., hypertensive disease) for which mortality increased and others (e.g., ischemic heart disease) for which the pace of declining mortality slowed. …

[I]ncreasing mortality among U.S. working-age adults is not new. The committee’s analyses confirmed that a long-term trend of stagnation and reversal of declining mortality rates that initially was limited to younger White women and men (aged 25–44) living outside of large central metropolitan areas (seen in women in the 1990s and men in the 2000s), subsequently spread to encompass most racial/ethnic groups and most geographic areas of the country. As a result, by the most recent period of the committee’s analysis (2012–2017), mortality rates were either flat or increasing among most working-age populations. Although this increase began among Whites, Blacks consistently experienced much higher mortality. …

Over the 1990–2017 period, disparities in mortality between large central metropolitan and less-populated areas widened (to the detriment of the latter), and geographic disparities became more pronounced. Mortality rates increased across several regions and states, particularly among younger working-age adults, and most glaringly in central Appalachia, New England, the central United States, and parts of the Southwest and Mountain West. Mortality increases among working-age (particularly younger) women were more widespread across the country, while increases among men were more geographically concentrated.

Regarding socioeconomic status, the committee’s literature review revealed that a large number of studies using different data sources, measures of socioeconomic status, and analytic methods have convincingly documented a substantial widening of disparities in mortality by socioeconomic status among U.S. working-age Whites, particularly women, since the 1990s. Although fewer studies have examined socioeconomic disparities in working-age mortality among non-White populations, those that have done so show a stable but persistent gap in mortality among Black adults that favors those of higher socioeconomic status.

Many of these factors overlap in various ways, and the subject as a whole is not well understood. A substantial portion of the NAS report is a call for additional research. But if I had to extrapolate from the available data, one common pattern involves parts of the US that are separated and isolated, whether by urban/nonurban status or by socioeconomic status. The specific causes of death contributing to the pattern seem to share the trait that they are potentially worsened by life and economic stress.
Although the report is about long-term trends, not the pandemic, it does offer the insight that COVID-19 has widened the disparity in mortality rates for working-age adults. Yes, the elderly accounted for by far the largest share of COVID-19 deaths. But if one looks in percentage terms, the report notes:

Thus, COVID-19 has reinforced and exacerbated existing mortality disparities within the United States, as well as between the United States and its peer countries. The CDC reported that adults aged 25–44 experienced the largest percentage increases in excess deaths during the pandemic (as of October 2020). 

What is Complexity Economics?

What distinguishes “complexity economics”? W. Brian Arthur offers a short, readable overview in “Foundations of complexity economics” (Nature Reviews Physics 3: 136–145, 2021). This is a personal essay rather than a literature review: for example, Arthur explains how the modern research agenda for complexity economics emerged from work at the Santa Fe Institute in the late 1980s.
How is complexity economics different from regular economics? Complexity economics sees the economy — or the parts of it that interest us — as not necessarily in equilibrium, its decision makers (or agents) as not superrational, the problems they face as not necessarily well-defined, and the economy not as a perfectly humming machine but as an ever-changing ecology of beliefs, organizing principles, and behaviours.

How does a researcher do economics in this spirit? A common approach is to describe, in mathematical terms, a number of decision-making agents within a certain setting. The agents start off with a range of rules for how they will perceive the situation and how they will make decisions. The rules that any given agent uses can change over time: the agent might learn from experience, might copy another agent, or might experience a random change in its decision-making rule. The researcher can then look at the path of decision-making and outcomes that emerges from this process — a path that will sometimes settle into a relatively stable outcome, but sometimes will not. Arthur writes:

Complexity, the overall subject, as I see it is not a science, rather it is a movement within science … It studies how elements interacting in a system create overall patterns, and how these patterns, in turn, cause the elements to change or adapt in response. The elements might be cells in a cellular automaton, or cars in traffic, or biological cells in an immune system, and they may react to neighbouring cells’ states, or adjacent cars, or concentrations of B and T cells. Whichever the case, complexity asks how individual elements react to the current pattern they mutually create, and what patterns, in turn, result.

As Arthur points out, an increasingly digitized world is likely to offer a number of demonstrations of complexity theory at work.

Now, under rapid digitization, the economy’s character is changing again and parts of it are becoming autonomous or self-governing. Financial trading systems, logistical systems and online services are already largely autonomous: they may have overall human supervision, but their moment-to-moment actions are automatic, with no central controller. Similarly, the electricity grid is becoming autonomous (loading in one region can automatically self-adjust in response to loading in neighbouring ones); air-traffic control systems are becoming autonomous and independent of human control; and future driverless-traffic systems, in which driverless-traffic flows respond to other driverless-traffic flows, will likely be autonomous. … Besides being autonomous, they are self-organizing, self-configuring, self-healing and self-correcting, so they show a form of artificial intelligence. One can think of these autonomous systems as miniature economies, highly interconnected and highly interactive, in which the agents are software elements ‘in conversation with’ and constantly reacting to the actions of other software elements.

To put it another way, if we want to understand when these kinds of systems are likely to work well, and how they might go off the rails or be gamed, complexity analysis is likely to offer some useful tools. 
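The agent-based recipe described above (agents whose decision rules can be imitated or randomly replaced, with outcomes feeding back into the pattern all the agents jointly create) can be illustrated with a toy simulation. Everything here, from the payoff function to the parameter values, is an invented example for illustration, not a model from Arthur's article:

```python
import random

random.seed(0)

N_AGENTS = 50
STEPS = 200
COPY_PROB = 0.10    # chance per step that an agent imitates a better performer
MUTATE_PROB = 0.02  # chance per step that a rule changes at random

# Each agent's "rule" is reduced to a single number in [0, 1],
# standing in for whatever decision rule the agent follows.
rules = [random.random() for _ in range(N_AGENTS)]

def payoff(own_rule, mean_rule):
    # Payoff depends both on the agent's own rule and on the overall
    # pattern the agents jointly create (here, the population mean).
    return 2.0 * mean_rule - own_rule

for _ in range(STEPS):
    mean_rule = sum(rules) / N_AGENTS
    payoffs = [payoff(r, mean_rule) for r in rules]
    for i in range(N_AGENTS):
        if random.random() < COPY_PROB:
            j = random.randrange(N_AGENTS)
            if payoffs[j] > payoffs[i]:
                rules[i] = rules[j]       # copy a better-performing agent's rule
        if random.random() < MUTATE_PROB:
            rules[i] = random.random()    # random change to the rule

print(f"mean rule after {STEPS} steps: {sum(rules) / N_AGENTS:.2f}")
```

The point is only the loop structure: agents act, compare outcomes, copy, and mutate, and the researcher watches what path the population takes, whether it settles down or keeps churning.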

But what about using complexity theory for economics in particular? As Arthur writes: “A new theoretical framework in a science does not really prove itself unless it explains phenomena that the accepted framework cannot. Can complexity economics make this claim? I believe it can. Consider the Santa Fe artificial stock market model.”
For example, there’s a long-standing issue of why stock markets see short-run patterns of boom and bust. Another puzzle of stock markets is why there is so much trading of stocks. Sure, stock traders will disagree about the underlying value of stocks and about the meaning of recent news that affects perceptions of future value. Such disagreements will lead to a modest volume of stock trading, but it’s hard to see how they lead to the extremely high volumes of trading seen in modern markets. John Cochrane phrased this point nicely in a recent interview with Tyler Cowen:

Why is there this immense volume of trading? When was the last time you bought or sold a stock? You don’t do it every 20 milliseconds, do you? I’ll highlight this. If I get my list of the 10 great unsolved puzzles that I hope our grandchildren will have figured out, why does getting the information into asset prices require that the stock be turned over a hundred times? That’s clearly what’s going on. There’s this vast amount of trading, which is based on information or opinion and so forth. I hate to discount it at all just as human folly, but that’s clearly what’s going on, but we don’t have a good model.

Here is Arthur’s description of how complexity economics looks at these stock market puzzles:

We set up an ‘artificial’ stock market inside the computer and our ‘investors’ were small, intelligent programs that could differ from one another. Rather than share a self-fulfilling forecasting method, they were required to somehow learn or discover forecasts that work. We allowed our investors to randomly generate their own individual forecasting methods, try out promising ones, discard methods that did not work and periodically generate new methods to replace them. They made bids or offers for a stock based on their currently most accurate methods and the stock price forms from these — ultimately, from our investors’ collective forecasts. We included an adjustable rate-of-exploration parameter to govern how often our artificial investors could explore new methods.

When we ran this computer experiment, we found two regimes, or phases. At low rates of investors trying out new forecasts, the market behaviour collapsed into the standard neoclassical equilibrium (in which forecasts converge to ones that yield price changes that, on average, validate those forecasts). Investors became alike and trading faded away. In this case, the neoclassical outcome holds, with a cloud of random variation around it. But if our investors try out new forecasting methods at a faster and more realistic rate, the system goes through a phase transition. The market develops a rich psychology of different beliefs that change and do not converge over time; a healthy volume of trade emerges; small price bubbles and temporary crashes appear; technical trading emerges; and random periods of volatile trading and quiescence emerge. Phenomena we see in real markets emerge. …
I want to emphasize something here: such phenomena as random volatility, technical trading or bubbles and crashes are not ‘departures from rationality’. Outside of equilibrium, ‘rational’ behaviour is not well-defined. These phenomena are the result of economic agents discovering behaviour that works temporarily in situations caused by other agents discovering behaviour that works temporarily. This is neither rational nor irrational, it merely emerges.
Other studies find similar regime transitions from equilibrium to complex behaviour in nonequilibrium models. It could be objected that the emergent phenomena we find are small in size: price outcomes in our artificial market diverge from the standard equilibrium outcomes by only 2% or 3%. But — and this is important — the interesting things in real markets happen not with equilibrium behaviour but with departures from equilibrium. In real markets, after all, that is where the money is made.

In other words, the key to understanding the dynamics of stock markets may reside in the idea that investors are continually exploring new methods of investing, which in turn leads to high volumes of trading and in some cases to dysfunctional outcomes. Of course, Arthur offers a variety of other examples as well.
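For readers who want the flavor of such an experiment, here is a heavily simplified toy version, emphatically not the actual Santa Fe model: agents forecast the next price with simple linear rules, the price moves toward the mean forecast, and an exploration rate controls how often an agent discards its rule for a new random one. All function names and parameter values are invented for this sketch:

```python
import random
import statistics

def run_market(explore_rate, n_agents=30, steps=300, seed=1):
    """Toy market: each agent forecasts next price as a + b * price."""
    rng = random.Random(seed)

    def random_rule():
        # slopes kept below 1 so this toy price process stays stable
        return (rng.uniform(0.0, 50.0), rng.uniform(0.5, 1.0))

    rules = [random_rule() for _ in range(n_agents)]
    price = 100.0
    moves = []
    for _ in range(steps):
        mean_forecast = sum(a + b * price for a, b in rules) / n_agents
        new_price = price + 0.1 * (mean_forecast - price)  # price chases the collective forecast
        moves.append(abs(new_price - price))
        price = new_price
        # exploration: some agents discard their rule for a new random one
        for i in range(n_agents):
            if rng.random() < explore_rate:
                rules[i] = random_rule()
    return statistics.mean(moves)

low = run_market(explore_rate=0.01)
high = run_market(explore_rate=0.30)
print(f"average price move, low exploration:  {low:.3f}")
print(f"average price move, high exploration: {high:.3f}")
```

With little exploration the price settles toward the level implied by the nearly static average rule and movement dies down; frequent exploration keeps that target shifting, a loose echo of the two regimes Arthur describes. The real model also scores forecasting rules by accuracy and replaces only the poorly performing ones; this sketch drops that selection step for brevity.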

For those who would like more background on complexity economics, one starting point would be the footnotes in Arthur’s article. Another place to start is the essay by J. Barkley Rosser, “On the Complexities of Complex Economic Dynamics,” in the Fall 1999 issue of the Journal of Economic Perspectives (13:4, 169-192). The abstract reads: 

Complex economic nonlinear dynamics endogenously do not converge to a point, a limit cycle, or an explosion. Their study developed out of earlier studies of cybernetic, catastrophic, and chaotic systems. Complexity analysis stresses interactions among dispersed agents without a global controller, tangled hierarchies, adaptive learning, evolution, and novelty, and out-of-equilibrium dynamics. Complexity methods include interacting particle systems, self-organized criticality, and evolutionary game theory, to simulate artificial stock markets and other phenomena. Theoretically, bounded rationality replaces rational expectations. Complexity theory influences empirical methods and restructures policy debates.