Interview with Matthew Jackson: Human Networks

David A. Price does an “Interview” with Matthew Jackson, with the subheading “On human networks, the friendship paradox, and the information economics of protest movements” (Econ Focus: Federal Reserve Bank of Richmond, 2021, Q1, pp. 16-20). Here are a few snippets of the conversation, suggestive of the bigger themes.

Homophily

[O]ne key network phenomenon is known among sociologists and economists as homophily. It’s the fact that friendships are overwhelmingly composed of people who are similar to each other. This is a natural phenomenon, but it’s one that tends to fragment our society. When you put this together with other facts about social networks — for instance, their importance in finding jobs — it means many people end up in the same professions as their friends and most people end up in the communities they grew up in.

From an economic perspective, this is very important, because it not only leads to inequality, where getting into certain professions means you almost have to be born into that part of society, it also means that then there’s immobility, because this transfers from one generation to another. It also leads to missed opportunities, so people’s talents aren’t best matched to jobs.

The Friendship Paradox

This concerns another network phenomenon, which is known as the friendship paradox. It refers to the fact that a person’s friends are more popular, on average, than that person. That’s because the people in a network who have the most friends are seen by more people than the people with the fewest friends.

On one level, this is obvious, but it’s something that people tend to overlook. We often think of our friends as sort of a representative sample from the population, but we’re oversampling the people who are really well connected and undersampling the people who are poorly connected. And the more popular people are not necessarily representative of the rest of the population.

So in middle school, for example, people who have more friends tend to have tried alcohol and drugs at higher rates and at earlier ages. And this distorted image is amplified by social media, because students don’t see pictures of other students in the library but do tend to see pictures of friends partying. This distorts their assessment of normal behavior.

There have been instances where universities have been more successful in combating alcohol abuse by simply educating the students on what the actual consumption rates are at the university rather than trying to get them to realize the dangers of alcohol abuse. It’s powerful to tell them, “Look, this is what normal behavior is, and your perceptions are actually distorted. You perceive more of a behavior than is actually going on.”
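
The oversampling that drives the friendship paradox can be checked with a small back-of-envelope computation. The five-person network below is hypothetical, but any network in which popularity varies shows the same gap: averaging popularity over friendships counts the well-connected people more often.

```python
# A toy, hypothetical friendship network: B is the well-connected hub.
friends = {
    "A": ["B"],
    "B": ["A", "C", "D", "E"],
    "C": ["B", "D"],
    "D": ["B", "C"],
    "E": ["B"],
}

# Popularity of each person = number of friends.
degree = {person: len(fs) for person, fs in friends.items()}

# Average popularity of a randomly chosen person.
avg_degree = sum(degree.values()) / len(degree)

# Average popularity of a randomly chosen *friend*: popular people are
# counted once per friendship, so they are oversampled.
avg_friend_degree = (
    sum(degree[f] for fs in friends.values() for f in fs)
    / sum(len(fs) for fs in friends.values())
)

print(avg_degree)         # 2.0
print(avg_friend_degree)  # 2.6: on average, your friends are more popular
```

The gap is an algebraic fact, related to the variance of the popularity distribution, not an artifact of this particular example: it is strict whenever not everyone has the same number of friends.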

Causality in Networks

Establishing causality is extremely hard in a lot of the social sciences when you’re dealing with people who have discretion over with whom they interact. If we’re trying to understand your friend’s influence on you, we have to know whether you chose your friend because they behave like you or whether you’re behaving like them because they influenced you. So to study causation, we often rely on chance things like who’s assigned to be a roommate with whom in college, or to which Army company a new soldier is assigned, or where people are moved under a government program that’s randomly assigning them to cities. When we have these natural experiments that we can take advantage of, we can then begin to understand some of the causal mechanisms inside the network.

Live Protests vs. Social Media

[I]t’s cheap to post something; it’s another thing to actually show up and take action. Getting millions of people to show up at a march is a lot harder than getting them to sign an online petition. That means having large marches and protests can be much more informative about the depth of people’s convictions and how many people feel deeply about a cause.

And it’s informative not only to governments and businesses, but also to the rest of the population who might then be more likely to join along. There are reasons we remember Gandhi’s Salt March against British rule in 1930 or the March on Washington for Jobs and Freedom in 1963. This is not to discount the effects that social media postings and petitions can have, but large human gatherings are incredible signals and can be transformative in unique ways because everybody sees them at the same time together with this strong message that they convey.

If you would like more Jackson, one starting point is his essay in the Fall 2014 issue of the Journal of Economic Perspectives, “Networks in the Understanding of Economic Behaviors.” The abstract reads:

As economists endeavor to build better models of human behavior, they cannot ignore that humans are fundamentally a social species with interaction patterns that shape their behaviors. People’s opinions, which products they buy, whether they invest in education, become criminals, and so forth, are all influenced by friends and acquaintances. Ultimately, the full network of relationships—how dense it is, whether some groups are segregated, who sits in central positions—affects how information spreads and how people behave. Increased availability of data coupled with increased computing power allows us to analyze networks in economic settings in ways not previously possible. In this paper, I describe some of the ways in which networks are helping economists to model and understand behavior. I begin with an example that demonstrates the sorts of things that researchers can miss if they do not account for network patterns of interaction. Next I discuss a taxonomy of network properties and how they impact behaviors. Finally, I discuss the problem of developing tractable models of network formation.

The Great Texas Power Failure of February 2021

In the aftermath of the Texas power failures in February, a number of commenters found confirmation that, amazingly, they had been right about everything all along. Thus, those who were against wind power and renewable energy mandates in general blamed the wind farms. Those who were suspicious of competition in markets for generating electrical power blamed deregulation, although blaming “the market” for what happens in a heavily regulated industry seems peculiar to me. Some critics even blamed Enron, a company that has not existed for years. What actually happened seems simpler, if less reinforcing for various preconceptions: It got really cold.

Michael Giberson provides an overview in “Texas Power Failures: What happened in February 2021 and What Can be Done” (Reason Foundation, April 2021). He describes the weather:

The temperature in Dallas dipped to -2° F, the coldest it had been in Dallas for 70 years. Snow fell on the beaches on the Gulf Coast at Galveston, south of Houston. Temperatures in Austin remained below freezing for six days at a time of year when temperatures usually average in the mid-50s. At Brownsville, near the most southern tip of Texas, February weather typically averages 65° F. High temperatures in Brownsville were in the mid-80s just days before the cold. The temperature in the city did not rise above freezing for nearly 48 hours once the cold settled in. For the first time in history all 254 counties in Texas were under a winter storm warning at the same time. The cold was not unprecedented at any particular location, but it was extreme, widespread, and long lasting in February 2021. …

The cold affected more than the ERCOT power system. Some power systems in Texas not within the ERCOT system also resorted to rolling outages. Natural gas production and distribution froze up. Municipal water mains froze in cities across the South. Ranchers in the Panhandle lost cattle to the cold. Citrus growers in South Texas saw damage to trees that may last for years. Roads were closed due to ice and storms. Failures were not solely an electric power industry concern or a natural gas failure. The cold was simply worse than almost anyone in Texas was prepared for. … Clearly, it was not negligent on ERCOT’s part—and maybe anyone’s part—to fail to anticipate such anomalous temperatures.

The Texas power emergency lasted about four days: at its worst, about 4.5 million people were without power.

Of course, the obvious question is why ERCOT—the ironically named Electric Reliability Council of Texas, which has responsibility for regulating Texas electricity—had not already required larger investments against cold weather. After all, there had been a cold snap back in 2011 that also caused power outages, although it was not as extreme as the February 2021 version. The short answer is that the weather turned out to be colder than ERCOT’s worst-case scenario. Here’s a figure that takes a bit of explaining, but tells much of the story:

The black line shows the actual electricity load. The thin gray line shows the forecasted demand, the load that would have been served if ERCOT had been able to deliver it. In general, electricity demand is typically higher in Texas in the summer (air-conditioning) than in the winter. But electricity demand during the cold snap would have broken the all-time summer records, as well as the winter ones.

But the real problem was on the supply side. The ERCOT “extreme” scenario was that 14 GW of electricity generation would go off-line; actually, 30 GW went off-line. The bottom blue dashed horizontal line shows the 2 GW that ERCOT projected for wind and solar power in its “extreme” scenario. There were a couple of small dips below this level, but a drop in wind power was not the main culprit here.

In retrospect, some of the problem was poor coordination across the energy system. For example, some natural gas pipeline operators failed to submit to their electricity providers the information needed to be treated as “critical load” facilities, which meant their power was cut off and they couldn’t deliver natural gas to generate electricity: “At its worst, as much as 9,000 MW of generation was sidelined by the lack of gas supplies, in part due to power cut offs at gas pipelines.” The drop in electricity produced from natural gas was by far the biggest source of the overall drop in supply.

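
For a rough sense of magnitudes, the GW figures above can be put side by side. The numbers come from the text; treating the gas-related 9,000 MW as a share of the full 30 GW outage is my own back-of-envelope framing.

```python
# Rough magnitudes from the ERCOT figures quoted in the text.
projected_worst_case_gw = 14.0  # ERCOT's "extreme" outage scenario
actual_offline_gw = 30.0        # generation that actually went off-line
gas_sidelined_gw = 9.0          # generation idled for lack of gas supply

print(round(actual_offline_gw / projected_worst_case_gw, 2))  # 2.14
print(round(gas_sidelined_gw / actual_offline_gw, 2))         # 0.3
```

In other words, the actual outage was more than twice ERCOT's worst-case planning scenario, and something like 30 percent of it traced back to the natural gas cut-offs.
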
Other than better coordination of electricity supplies, what else might be done to avoid similar power failures in the future? The key point to remember here is that we are talking about preparation for a very rare event.

One option is to invest more in weatherization. Another is to pay some firms for being ready to provide a certain amount of electricity in an emergency, even if most of the time they are not actually doing so–that is, pay for some extra unused capacity. Another option is to build more connections from ERCOT to electricity grids outside of Texas, which could be very valuable in emergencies even if they aren’t used much of the time. Yet another option would be to encourage Texas electricity users to maintain some of their own battery storage or generating capacity for emergencies.

Hindsight is 20/20, but now that Texas has been warned by experience, the case for some mixture of these actions is strong. Garrett Golding, Anil Kumar and Karel Mertens at the Dallas Federal Reserve offer some estimates in “Cost of Texas’ 2021 Deep Freeze Justifies Weatherization” (Dallas Fed Economics blog, April 15, 2021). In measuring economic losses from the power outage, they write:

The power outages led to widespread damage to homes and businesses, foregone economic activity, contaminated water supplies and the loss of at least 111 lives. Early estimates indicate that the freeze and outage may cost the Texas economy $80 billion–$130 billion in direct and indirect economic loss. These initial calculations come with significant uncertainty. Estimates of insured losses, which are easier to quantify, range from $10 billion to $20 billion.

In terms of what steps might be taken, they note: 

Winterizing standards on new oil and gas wells may offer a targeted and effective approach in the long run. Due to the high initial productivity of shale wells, new wells will eventually make up a large share of overall production. Many companies already implement winterizing measures. With winterizing equipment costing between $20,000 and $50,000 per well, we estimate these measures statewide would total $85 million–$200 million annually. A large and perhaps inexpensive fix would be prioritizing electricity delivery to gas infrastructure. If power plant and pipeline operators improve coordination to identify and constantly monitor the gas infrastructure requiring such prioritization, some of the problems experienced during the freeze could be prevented.

It’s also possible to winterize the wind farms that received so much attention. They write about the possibilities of “upgraded blade coatings, cold-weather lubricants and de-icing drones.”

Overall, the lesson here seems to be to move past the blame game, to accept that the cold weather snap was unprecedented, and now to have Texans pay a little more for electricity to fund these kinds of steps. 

China’s Population: The Coming Decline

Pity China’s statisticians, who seem caught in the middle between the facts of arithmetic and the demands of government. The recent kerfuffle started about a week ago when the Financial Times reported, “China set to report first population decline in five decades” (April 27, 2021). The report was based on “people familiar with the research,” who presumably had some insight into the results of China’s population census that was completed last December, with data scheduled for release in April.

But apparently this census data was too hot to handle. The FT quoted Huang Wenzheng, a fellow at a Beijing-based think tank, who said: “The census results will have a huge impact on how the Chinese people see their country and how various government departments work. They need to be handled very carefully.” A few days later, China’s National Bureau of Statistics released a one-sentence statement on its website: “According to our understanding, in 2020, China’s population continued to grow.” The actual numbers have not yet been released.

I have no inside information about the details of China’s population numbers for 2020. But I do know that demographic patterns can be inexorable. China made decisions a half-century ago, in the early 1970s, about pursuing an aggressive policy of population control, decisions that evolved into the stringently enforced “one child” policy adopted in 1979. A common saying in China was that it takes six adults to raise one child: that is, the two sets of grandparents who had one child each, and whose offspring then produced a single grandchild.

When birthrates have been below replacement levels for a generation or two, population will eventually begin to decline. Japan’s population began shrinking about a decade ago. Russia’s population peaked back in the 1990s. A group of demographers looking at global population forecasts predicted in the Lancet that by 2100: “In 23 countries, including Japan, Thailand, Spain, and Ukraine, populations are expected to decline by 50% or more. Another 34 countries will probably decline by 25–50%, including China, with a forecasted 48.0% decline.”
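
Part of what makes such declines easy to deny is how gradual they are year to year. A back-of-envelope calculation on the Lancet figure for China makes the point; the 80-year horizon (roughly 2020 to 2100) and the constant-rate assumption are mine, not the study's:

```python
# Back-of-envelope: a forecast 48% population decline by 2100 implies
# only a modest constant annual rate of shrinkage over roughly 80 years.
remaining_fraction = 1 - 0.48   # 52% of today's population remains
years = 80

annual_growth = remaining_fraction ** (1 / years) - 1
print(f"{annual_growth:.2%}")   # -0.81% per year
```

A shrinkage of less than one percent in any single year is barely perceptible, even as it compounds into a halving of the population over the century.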

In short, it’s not clear whether China’s population peaked last year, or whether the peak will happen in the next few years, but at some point, the fact of the peak will be undeniable. Indeed, China’s National Bureau of Statistics has already been reporting for several years that the size of China’s working-age population (those aged 15-64) peaked back in 2014. It has been apparent for some years now that China faces a challenge as to whether the country will become old before it becomes rich.

There are multiple ironies here. One is that after decades of pushing hard for population control, China’s government has now apparently decided that a declining population would be an undesirable outcome. Another irony is that China’s repressive and draconian one-child policy may have had relatively little effect. Countries around the world, including in east Asia, have experienced rapidly falling birthrates in recent decades without anything like China’s one-child policy. Several academic studies suggest that if China’s birthrates had just declined in the same way as in other countries that had similar birthrates back in 1970, China’s population would have come out at about the same level.

I know that I am naive and foolish in the ways of public relations. But I learned long ago that when a government is faced with an embarrassing admission, a politically useful approach can be to declare victory and move ahead. Thus, it seems obvious to me that from a public relations point of view, China’s government should declare victory in its long-standing population-control campaign, point to the leveling-off or decline of population as evidence of its great success, and then say that the time has come when such constraints are no longer needed–and in fact that higher birthrates can now be welcomed because of the sacrifices of the past. But attempts to deny that China’s population either has peaked, or is about to peak, will have to fly in the face of both evidence and demographic logic.

Spring 2021 Journal of Economic Perspectives Available Online

I am now in my 35th year as Managing Editor of the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or the entire issue, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Spring 2021 issue, which in the Taylor household is known as issue #136. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.

______________________________________

Symposium on the European Union
“The Resilience of the Euro,” by Philip R. Lane
Over 2014–2019, the euro area charted a substantial post-crisis economic recovery while also reducing macro-financial vulnerabilities. The array of post-crisis institutional reforms has improved the capacity of the euro area to withstand adverse shocks, even if the narrowing of imbalances also came at a high cost (especially in the most indebted member countries). The pandemic has provided a new test: the combination of a common central bank and the enlargement of the common fiscal capacity has provided substantial policy support and fostered a narrowing in risk premia, despite significant differences in levels of public debt and exposures to the pandemic shock. While the resilience of the euro is sure to be tested further in the coming years, the extent of the underlying political backing for the common currency should not be underestimated.
Full-Text Access | Supplementary Materials

“The United States of Europe: A Gravity Model Evaluation of the Four Freedoms,” by Keith Head and Thierry Mayer
One of the pillars of the 1957 Treaty of Rome that ultimately led to the European Union is the commitment to the four freedoms of movement (goods, services, persons, and capital). Over the following decades, as the members expanded in numbers, they also sought to deepen the integration amongst themselves in all four dimensions. This paper estimates the success of these policies based primarily on a gravity framework. Distinct from past evaluations, we augment the traditional equation for international flows with the corresponding intra-national flows, permitting us to distinguish welfare-improving reductions in frictions from Fortress-Europe effects. We complement the gravity approach by measuring the extent of price convergence. We compare both quantity and price assessments of free movement with corresponding estimates for the 50 American states.
Full-Text Access | Supplementary Materials

“Migration and Labor Market Integration in Europe,” by David Dorn and Josef Zweimüller
The European labor market allows for the border-free mobility of workers across 31 countries that cover most of the continent’s population. However, rates of migration across European countries remain considerably lower than interstate migration in the United States, and spatial variation in terms of unemployment or income levels is larger. We document patterns of migration in Europe, which include a sizable migration from east to west in the last twenty years. An analysis of worker-level microdata provides some evidence for an international convergence in wage rates and for modest static gains from migration. We conclude by discussing obstacles to migration that reduce the potential for further labor market integration in Europe.
Full-Text Access | Supplementary Materials

“Fiscal Policy in Europe: Controversies over Rules, Mutual Insurance, and Centralization,” by Florin Bilbiie, Tommaso Monacelli and Roberto Perotti
We discuss the main fiscal policy issues in Europe, focusing on two that are at the core of the current debate. The first is that the government deficit and debt were, from the outset, the key objects of contention in the debate that led to the creation of the Eurozone, and they still are. The second issue is that a currency union implies the loss of a country-specific instrument, a national monetary policy. This puts a higher burden on fiscal policy as a tool to counteract shocks, a burden that might be even heavier now that the European Central Bank has arguably reached the Zero Lower Bound. Two obvious solutions are mutual insurance (or risk-sharing) amongst countries and a centralized stabilization policy. Yet both have been remarkably difficult to come by, especially due to political constraints. We review and discuss the relative merits of several proposals for increased insurance or centralization, or both. We conclude with an early discussion of the implications of the COVID-19 crisis for European fiscal policy reform and an assessment of the current fiscal measures.
Full-Text Access | Supplementary Materials

Symposium on Preventive Medicine

“An Ounce of Prevention,” by Joseph P. Newhouse
I look at prevention through an economic lens and make three main points. First, those advocating preventive measures are often asked how much money a given measure saves. This question is misguided. Rather, preventive measures can be thought of as insurance, with a certain cost in the present that may or may not pay off in the future. In fact, although most medical preventive measures improve expected health, they do not save money. Various lifestyle and early childhood interventions, however, may both save money and improve health. Second, preventive measures, including medical and lifestyle measures, are heterogeneous in their value, both across measures and within measure, across individuals. As a result, generalizations in everyday discourse about the value of prevention can be overly broad. Third, health insurance coverage for medical preventive measures should generally be more extensive than coverage for the treatment of a medical condition, though full coverage of preventive services is not necessarily optimal.
Full-Text Access | Supplementary Materials

“Mammograms and Mortality: How Has the Evidence Evolved?” by Amanda E. Kowalski
Decades of evidence reveal a complicated relationship between mammograms and mortality. Mammograms may detect deadly cancers early, but they may also lead to the diagnosis and potentially fatal treatment of cancers that would never progress to cause symptoms. I provide a brief history of the evidence on mammograms and mortality, focusing on evidence from clinical trials, and I discuss how this evidence informs mammography guidelines. I then explore the evolution of all-cause mortality relative to breast cancer mortality within an influential clinical trial. I conclude with some responses to the evolving evidence.
Full-Text Access | Supplementary Materials

Articles

“LGBTQ Economics,” by M. V. Lee Badgett, Christopher S. Carpenter and Dario Sansone
Public attitudes and policies toward LGBTQ individuals have improved substantially in recent decades. Economists are actively shaping the discourse around these policies and contributing to our understanding of the economic lives of LGBTQ individuals. In this paper, we present the most up-to-date estimates of the size, location, demographic characteristics, and family structures of LGBTQ individuals in the United States. We describe an emerging literature on the effects of legal access to same-sex marriage on family and socioeconomic outcomes. We also summarize what is known about the size, direction, and sources of wage differentials related to variation in sexual orientation and gender identity. We conclude by describing a range of open questions in LGBTQ economics.
Full-Text Access | Supplementary Materials

“The Ways of Corruption in Infrastructure: Lessons from the Odebrecht Case,” by Nicolás Campos, Eduardo Engel, Ronald D. Fischer and Alexander Galetovic
In 2016, the Brazilian construction firm Odebrecht was fined 2.6 billion USD by the US Department of Justice. It was the largest corruption case ever prosecuted under the US Foreign Corrupt Practices Act. Our examination of judicial documents and media reports on this case provides new insights on the workings of corruption in the infrastructure sector. Odebrecht paid bribes for two reasons: to tailor the terms of the auction in its favor, as well as to obtain favorable terms in contract renegotiations. In projects where Odebrecht paid bribes, costs increased by 70.8 percent on average, compared with 5.6 percent for projects with no bribes. We also find that bribes and profits made from bribing were smaller than documented in most previous studies, in the range of one to two percent of the cost of a project.
Full-Text Access | Supplementary Materials

“The Rise of Research Teams: Benefits and Costs in Economics,” by Benjamin F. Jones
Economics research is increasingly performed in teams, and team-authored work has a large and increasing impact advantage. This article considers the benefits and costs of this “rise of teams.” Among its benefits, teamwork allows individuals to aggregate knowledge in productive and novel ways. For example, as knowledge accumulates over time, individuals become narrower in their expertise, and teamwork is a natural organizational approach to aggregating expertise and maintaining one’s reach. But teamwork also brings costs. For example, teamwork divides and obscures credit, which is central to the reward system of science. By clouding credit assignment, teamwork can undermine individual career progression and exacerbate issues of bias. In addressing the rise of teamwork, this paper further considers institutional innovations, especially those inspired by the hard sciences, that can help limit the costs teamwork imposes while realizing the benefits.
Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access | Supplementary Materials

Electrification of Everything: The Transmission Lines Challenge

Reducing carbon emissions will require a number of different intertwined steps, and one of them is commonly referred to as “electrification of everything.” Basically, the more that society can rely for its energy needs on electricity generated from low-carbon or carbon-free methods, the more it can turn away from burning fossil fuels. In turn, this policy agenda will require a vast expansion of high-voltage electricity transmission lines, especially if a substantial share of that electricity is generated from solar and wind. If there is more reliance on intermittent sources of electricity, there is also more need to ship electricity from place to place–more need for what is sometimes called a National Supergrid.

However, the prospect of doubling the number of long-distance electricity transmission lines (or perhaps more than doubling) poses a classic problem of political economy. Under current law, decisions about allowing pathways for high-voltage lines are typically made at the state or county level. Local decision-makers don’t have much incentive to take into account the broad social benefits of a widespread network of electricity lines that cross state and county boundaries. Thus, it has become politically very hard to expand the existing network. An obvious possible answer is to give more power to a federal-level authority to grant permissions. But where this has been done–say, in building and updating natural gas pipelines–the process has often proven to be highly controversial and has not always resulted in pipelines being built.

Liza Reed provides an overview of these issues in “Transmission Stalled: Siting Challenges for Interregional Transmission” (Niskanen Center, April 2021). She writes (footnotes omitted):

The electricity sector is expected to change dramatically to meet decarbonization goals, with some pathways showing demand doubling or more as cars, households, and industry are increasingly electrified. This will require similar expansion in transmission capacity to serve increasing demand. …

Under the current system of planning and permitting, high-voltage interstate transmission lines take eight to ten years on average to complete, if they succeed at all. Four years or more of that timeline is absorbed by the regulatory hurdles, particularly siting the lines and acquiring the permits and land rights to build.

Transmission lines that traverse multiple states must satisfy the requirements of all states along a planned route. The timelines for each state are different, as are the standards each state uses for the evaluation of public convenience and necessity. Some states require a developer to be a recognized utility provider within the state, an arguably anachronistic requirement. In other states the siting process is handled at the county level, placing an even higher regulatory burden on transmission developers. High-voltage transmission lines often provide their highest overall value to the system as a whole, and may only provide modest benefits to a particular state. It can be difficult or impossible for developers of these lines to convince multiple states that the benefits are enough. Oftentimes developers choose not to pursue these projects at all. The national transmission system, which could be the backbone of our electricity system and decarbonization efforts, suffers as a result.

A 2016 review of transmission projects by the Lawrence Berkeley National Lab identified permitting as one of the top four factors affecting transmission projects.  Referring to multi-state siting and permitting, the report notes: “Regardless of which process concludes first, the process that concludes last determines when construction can be completed.”

Reed discusses multiple attempts to extend transmission lines so that electricity (including wind-generated power) can be shipped across states, attempts that have been blocked for years by state and local politics and court decisions. She also points out that incumbent energy firms may not favor expansions of long-distance transmission lines, either, because it raises the level of competition they face from electricity generated in other locations. Local regional planners may also want to support local sources of energy, and thus oppose closer ties to energy generated outside their region.

There have been some attempts to designate “national interest electricity transmission corridors” (NIETCs) where it would be faster and easier to get permission to build additional electricity transmission lines, but these have also been contested and blocked by local authorities and courts.

I’ve got no magic solution here. Local control tends to block the needed national expansion. Moving authority to the federal level, say to the Federal Energy Regulatory Commission (FERC), would inevitably lead to some decisions opposed to local desires; indeed, the reason for putting greater authority at the federal level would be to override local desires in some cases. Reed provides an honest discussion of some problems that have come up in the case of FERC having greater power to speed the permitting process for natural gas pipelines.

Natural gas pipeline infrastructure does not face the same siting challenges. The Natural Gas Act grants siting authority to FERC for interstate natural gas pipelines. The average permitting time is 18 months, less than half of the average interstate transmission permitting time. This single, central authority, in which FERC sites and permits lines and coordinates environmental reviews, is why the United States was able to respond quickly to the shale gas boom. …

When considering reforms for transmission infrastructure, policy makers should consider how expansive FERC siting authority under the Natural Gas Act has disadvantaged private citizens and landowners. In practice, FERC provides limited notice of landowners’ rights, limited notice of applications for natural gas lines, and little meaningful access for impacted landowners. FERC delegates its statutory and constitutional obligations to provide notice to landowners to pipeline companies, and fails to confirm that such notice was actually provided. …  Indeed, FERC establishes ad hoc timelines rather than a fixed time for intervention, and there are examples of FERC providing landowners with at least three inconsistent and contradictory sets of instructions for intervening. This has resulted in landowners being given as little as 13 days to intervene in proceedings whose purpose is to take their property. Though the practice has been recently rejected by the U.S. Court of Appeals for the D.C. Circuit, FERC has a long history of indefinitely delaying landowner rehearings (and thereby delaying landowners’ access to judicial review) by what are colloquially known as “tolling orders,” which prevented landowners from challenging FERC’s decision.

FERC’s record reveals other problems, too. FERC can issue “conditioned certificates” allowing eminent domain, even though the pipeline in question has not, and may never, obtain other required permits. With a FERC certificate in hand, courts currently will grant pipelines so-called quick-take possession of property, whereby a company takes land prior to remuneration, removing an incentive for the company to reimburse landowners on a reasonable timeline. FERC also establishes conditions on how companies construct pipelines and protect the remainder of landowners’ property, but the agency consistently fails to respond to any landowner complaints regarding violations. These practices allow for takings and destruction of private land in absence of oversight and without a fully permitted project. What’s more, if the project never gets built, or a court finds that the certificate was invalid, the pipeline company gets to keep the easements obtained from landowners, including all perpetual land use restrictions, however irrelevant in the absence of a pipeline.

Again, I have no magic solution to balance the competing interests here. But I will say that if you are a strong proponent of solar and wind power, basic consistency requires that you also favor a vast and well-coordinated expansion of long-distance electricity transmission lines, with the associated commitments of physical resources and land, as well as the occasional need to override local interests. As Reed writes:

Recent studies from MIT, Princeton, and NREL [National Renewable Energy Laboratory] demonstrate that interstate lines and interregional coordination are critical to achieving a cost-effective grid. Clear and consistent rules and metrics, which can only come from a single governing agency, would allow transmission developers, utilities, and generators to unlock the clean energy resources available across the nation.

For some other recent posts about the future of US electricity generation and transmission, see:

Amazon and Value Creation: A Bezos Farewell

Jeff Bezos is stepping down from daily management tasks as chief executive officer of Amazon, the company he founded in 1994, although he will continue to be involved in the company as executive chairman of the board. Earlier this month, Bezos wrote his last annual letter to company shareholders. A main focus of the letter is on how Amazon creates “value.”

Of course, for economists one measure of value is the total value of Amazon’s stock, which now stands at about $1.6 trillion (and Bezos owns about one-eighth of that). But his letter focuses on the most recent year. He writes:

Last year, we hired 500,000 employees and now directly employ 1.3 million people around the world. We have more than 200 million Prime members worldwide. More than 1.9 million small and medium-sized businesses sell in our store, and they make up close to 60% of our retail sales. Customers have connected more than 100 million smart home devices to Alexa. Amazon Web Services serves millions of customers and ended 2020 with a $50 billion annualized run rate.

During 2020, Amazon had net income of $21.3 billion. Bezos adds: 

In 2020, employees earned $80 billion, plus another $11 billion to include benefits and various payroll taxes, for a total of $91 billion.

How about third-party sellers? We have an internal team (the Selling Partner Services team) that works to answer that question. They estimate that, in 2020, third-party seller profits from selling on Amazon were between $25 billion and $39 billion, and to be conservative here I’ll go with $25 billion. …

Customers complete 28% of purchases on Amazon in three minutes or less, and half of all purchases are finished in less than 15 minutes. Compare that to the typical shopping trip to a physical store – driving, parking, searching store aisles, waiting in the checkout line, finding your car, and driving home. Research suggests the typical physical store trip takes about an hour. If you assume that a typical Amazon purchase takes 15 minutes and that it saves you a couple of trips to a physical store a week, that’s more than 75 hours a year saved. That’s important. We’re all busy in the early 21st century. So that we can get a dollar figure, let’s value the time savings at $10 per hour, which is conservative. Seventy-five hours multiplied by $10 an hour and subtracting the cost of Prime gives you value creation for each Prime member of about $630. We have 200 million Prime members, for a total in 2020 of $126 billion of value creation. …
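Bezos’s back-of-the-envelope arithmetic can be reproduced in a few lines. This is a sketch of the letter’s own numbers; the $119 annual Prime fee is my assumption (roughly the 2020 US price), since the letter only says “subtracting the cost of Prime”:

```python
# Reproduce Bezos's estimate of Prime time-savings value.
# Assumptions: 2 store trips avoided per week, an hour-long store trip vs.
# a 15-minute Amazon purchase, $10/hour value of time, ~$119/year Prime fee.
trips_saved_per_week = 2
minutes_saved_per_trip = 60 - 15                 # store trip vs. Amazon purchase
hours_saved_per_year = trips_saved_per_week * minutes_saved_per_trip * 52 / 60
value_of_time_per_hour = 10                      # dollars, Bezos's "conservative" figure
prime_fee = 119                                  # dollars/year, assumed 2020 price
value_per_member = hours_saved_per_year * value_of_time_per_hour - prime_fee
total_value = value_per_member * 200_000_000     # 200 million Prime members
print(round(hours_saved_per_year), round(value_per_member), round(total_value / 1e9))
# prints: 78 661 132
```

Carrying full precision gives about 78 hours and $661 per member, or roughly $132 billion in total; Bezos’s rounding down to 75 hours yields his $630 and $126 billion figures.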

AWS [Amazon Web Services] is challenging to estimate because each customer’s workload is so different, but we’ll do it anyway, acknowledging up front that the error bars are high. Direct cost improvements from operating in the cloud versus on premises vary, but a reasonable estimate is 30%. Across AWS’s entire 2020 revenue of $45 billion, that 30% would imply customer value creation of $19 billion (what would have cost them $64 billion on their own cost $45 billion from AWS). The difficult part of this estimation exercise is that the direct cost reduction is the smallest portion of the customer benefit of moving to the cloud. The bigger benefit is the increased speed of software development – something that can significantly improve the customer’s competitiveness and top line. We have no reasonable way of estimating that portion of customer value except to say that it’s almost certainly larger than the direct cost savings. To be conservative here (and remembering we’re really only trying to get ballpark estimates), I’ll say it’s the same and call AWS customer value creation $38 billion in 2020.
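The AWS estimate follows the same pattern; a sketch using only the numbers in the passage above:

```python
# Reproduce the letter's AWS value-creation arithmetic.
aws_revenue = 45e9        # AWS 2020 revenue, from the letter
cost_improvement = 0.30   # Bezos's estimate of direct cloud-vs-on-premises savings
# If AWS prices are 30% below on-premises costs, the same workloads would
# have cost revenue / (1 - 0.30) on customers' own hardware.
on_prem_cost = aws_revenue / (1 - cost_improvement)   # about $64 billion
direct_savings = on_prem_cost - aws_revenue           # about $19 billion
# The letter assumes the development-speed benefit is at least as large as
# the direct savings, and doubles the figure to get total value creation.
total_value = 2 * direct_savings                      # roughly $38-39 billion
print(round(on_prem_cost / 1e9), round(direct_savings / 1e9))
# prints: 64 19
```

Doubling before rounding gives about $39 billion; the letter rounds the savings to $19 billion first and reports $38 billion.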

I’m sure one can tinker with these estimates in a variety of ways, and combining wages paid to employees with time saved by consumers mixes conceptually different categories of “value.” One could also expand the list in various ways: for example, there is value to consumers (especially consumers who may not live close to lots of other retail options) in the extreme variety of products readily available via Amazon.

But my goal here is not to fine-tune the estimates; it is to make a general point worth noticing. The value of Amazon’s profits in a given year is much, much less than the value created by the company in other ways: wages, facilitating sales by third-party firms, time savings for consumers, and so on.

These gains didn’t just happen. Building an interactive website that works at large scale is a monumental task. As one cautionary example among many, think about the issues that arose when trying to build websites for buying health insurance in the aftermath of the Patient Protection and Affordable Care Act of 2010, or think about the computer network problems of the Internal Revenue Service. Yes, it’s plausible that if Bezos had never started Amazon, some other company would have emerged from the dot-com scrum of the late 1990s. However, Bezos led the company that actually did it. Whether you are a fan or detractor of Amazon, the sheer size and scope of what has been built commands attention.

Of course, when a top executive at a big company is writing to shareholders, the emphasis tends to be on the good news. I never want to deify any company. There are lots of tough real-world questions about Amazon: How well does the firm treat its workers? How is the firm using data collected from customers and searches? Has the firm taken advantage of its platform not just to act as a tough competitor, but also to block competition from others? How is Amazon, both domestically and abroad, interacting with the US corporate tax code? What have been the tradeoffs of Amazon’s success for bricks-and-mortar retailers?

But asking reasonable questions is different from being a doomsayer. Especially during the pandemic, Amazon has made my life easier. As one example, I’m a person who has a visceral need for new reading material. The ability during the pandemic to “go to” the local public library online, 24/7, and download books to my Kindle e-reader has saved me money and helped keep me sane.

Nine Principles of Policing from 1829

In the early 19th century, various cities in Scotland and Ireland had established their own police forces. Sir Robert Peel is typically credited with the lead role in bringing a police force to London via the passage of the Metropolitan Police Act of 1829. Indeed, the early London police were often called “peelers.”

Either Peel or the early commissioners of the London police force wrote down nine principles of policing, which have been fairly well known since then to police everywhere. Here are the “9 Policing Principles” as listed at the website of the Law Enforcement Action Partnership:

  1. To prevent crime and disorder, as an alternative to their repression by military force and severity of legal punishment.
  2. To recognize always that the power of the police to fulfill their functions and duties is dependent on public approval of their existence, actions and behavior, and on their ability to secure and maintain public respect.
  3. To recognize always that to secure and maintain the respect and approval of the public means also the securing of the willing cooperation of the public in the task of securing observance of laws.
  4. To recognize always that the extent to which the cooperation of the public can be secured diminishes proportionately the necessity of the use of physical force and compulsion for achieving police objectives.
  5. To seek and preserve public favor, not by pandering to public opinion, but by constantly demonstrating absolute impartial service to law, in complete independence of policy, and without regard to the justice or injustice of the substance of individual laws, by ready offering of individual service and friendship to all members of the public without regard to their wealth or social standing, by ready exercise of courtesy and friendly good humor, and by ready offering of individual sacrifice in protecting and preserving life.
  6. To use physical force only when the exercise of persuasion, advice and warning is found to be insufficient to obtain public cooperation to an extent necessary to secure observance of law or to restore order, and to use only the minimum degree of physical force which is necessary on any particular occasion for achieving a police objective.
  7. To maintain at all times a relationship with the public that gives reality to the historic tradition that the police are the public and that the public are the police, the police being only members of the public who are paid to give full-time attention to duties which are incumbent on every citizen in the interests of community welfare and existence.
  8. To recognize always the need for strict adherence to police-executive functions, and to refrain from even seeming to usurp the powers of the judiciary of avenging individuals or the State, and of authoritatively judging guilt and punishing the guilty.
  9. To recognize always that the test of police efficiency is the absence of crime and disorder, and not the visible evidence of police action in dealing with them.
I am fully aware that it’s not 1829 anymore. But as one looks at the struggles of police forces across the country, it feels like time to restore and revivify the spirit behind a number of these principles.

Stigler’s Economic Theory of Regulation: The Semicentennial

I’ve found that the word “regulation” is a sort of Rorschach test on which many people project their broader political beliefs. Some are deeply suspicious of any proposals that can be characterized as “deregulation,” and predisposed to favor “regulation” even before knowing the details of the proposal. These people tend to begin with a belief that private market actors are almost always pushing up to and beyond the edge of what is good for society, and are thus comfortable with a presumption that government pushback in the form of regulation may help. Indeed, for the first three-quarters of the 20th century, during the rise of US regulatory agencies from near-zero to high prominence, this group was preeminent in how most academics and policy-makers thought about regulation.

On the other side, another group is deeply suspicious of regulations, because they mistrust the ability of government either to diagnose problems with a market economy or to design solutions. Instead, they fear that government regulations often end up either supporting or offering loopholes for politically powerful interest groups. The patron saint of this second group is George J. Stigler, who 50 years ago published “The Theory of Economic Regulation” in the Bell Journal of Economics and Management Science (Spring 1971, 2:1, pp. 3-21, available via JSTOR and other places on the web). Stigler later won the 1982 Nobel prize “for his seminal studies of industrial structures, functioning of markets and causes and effects of public regulation.”

In the 1971 essay, Stigler made the case for what is now called “regulatory capture.” Imagine that the government is thinking about passing a certain set of regulations, and about an ongoing agency to enforce and interpret these regulations. Then ask yourself: Who has the most incentive to spend large amounts of money, time, and attention focused on every twist and turn, every subclause and comma, of these regulations? And to sustain this focus day after day, year after year? Stigler pointed out that politics and behind-the-scenes lobbying will play a big role in this process. Over time, the industries directly affected by regulation will have a strong incentive to play a prominent role in shaping regulations. Stigler writes: “A central thesis of this paper is that, as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit.”

For those who would like a thorough review of the arguments for and against this theory, the Stigler Center for the Study of the Economy and the State at the University of Chicago held a webinar on the occasion of the 50th anniversary of Stigler’s essay this last week, and four hours of video of the discussions are available. Several of the participants have also published short essays available at the link. Here, I’ll offer a brief sketch of the state of the argument.

1) Stigler had a point. 

Stigler’s 1971 essay offers lots of examples in which it seems plausible that regulation was being used by incumbents to stifle competition and thus to improve their own profits. As he points out, the Civil Aeronautics Board, which at the time set prices for all airline flights and decided which flights would be allowed, had “not allowed a single new trunk line to be launched since it was created in 1938.” He cited studies that the Federal Deposit Insurance Corporation had “reduce[d] the rate of entry into commercial banking by 60 percent.” Federal regulation of trucking led to a situation in which the number of licensed carriers was declining over time, despite thousands of annual applications for certificates to license additional truckers. Stigler wrote: “We propose the general hypothesis: every industry or occupation that has enough political power to utilize the state will seek to control entry.”

The deregulation wave of the late 1970s and 1980s affected airlines, trucking and banking. But other examples mentioned by Stigler live on. Government regulations, typically at the state level, require licenses for about one-fourth of all US jobs. A well-developed body of evidence (often looking at what is regulated or unregulated across states) suggests that in many cases, these regulations are less about protecting the public than about limiting competition. It seems likely that the building trade unions have used building codes to hinder new cost-saving technologies, including factory-built homes.  Government regulations in education give large advantages to established colleges and universities, and to the public K-12 schools, while limiting entry of new competition. Rules to limit certain imports of goods from abroad are typically driven by domestic industries concerned about foreign competition.

Indeed, every time a supporter of government regulation bewails how a special interest has invaded the process and caused a desired regulation to be blocked or diluted, they are in effect channeling their inner Stigler-style skepticism about the practical reality of the regulatory process.  Are supporters of added government regulation at all surprised that big health insurance companies lobbied for the Patient Protection and Affordable Care Act of 2010 and benefited after its passage? Or that in the aftermath of new regulations to reduce the risk of government bank bailouts, big banks gained market share at the expense of smaller institutions? 

Whatever the actual shortcomings of the private sector, and whatever the theoretical case for corrective government regulation, Stigler-style skepticism offers a useful corrective about what regulations are actually enacted. As Filippo Maria Lancieri and Luigi Zingales write in their short essay accompanying the Stigler Center symposium: 

The 1887 Interstate Commerce Commission was the first government agency to regulate an important sector of the US economy. By the 1900’s, there were 10 federal agencies, employing 15,000 workers. By 2019, the number of agencies rose to 117, employing 1.4 million workers. The 20th century could easily be labeled the century of regulation. …  Most likely, the 20th century would have also ended as the century of regulation if it were not for George Stigler.

2) Stigler probably overemphasized the role of pure regulatory capture, as opposed to regulatory problems created by poor information or ideological bias. 

In Andrei Shleifer’s keynote address for the Stigler Center symposium, he makes the point that there can be lots of reasons why regulations have mixed or negative effects. It’s not all about regulatory capture by the industry affected. As a recent example, Shleifer points to the waves of rules and regulations that have become prominent in the COVID-19 pandemic. For example, there have been rules about masks, social distancing, and lockdowns. There were regulations about what kinds of COVID-19 tests could be sold or used. There were rules about what constituted adequate testing of vaccines. There were questions over whether to shut down the use of the Johnson & Johnson vaccine over the possibility of a heightened risk of blood clots.

Shleifer’s point is not to argue for or against specific rules, but just to point out that, in general, the differences of opinion on these rules were shaped by available knowledge (and ignorance), by beliefs about what risks were acceptable (or not), and by what messages the public was ready to hear (or not). In these regulations, and probably in many others as well, the key dividing lines are more about issues of knowledge and ideology than about a Stigler-style regulatory capture scenario. Of course, this doesn’t make Stigler’s insights irrelevant, but it does suggest that his “theory of regulation” was focused on only one slice of the issues involved.

3) Stigler’s “theory of regulation” may overemphasize the potential for bad outcomes, to the extent that his 1971 article essentially fails to acknowledge potentially beneficial outcomes of regulation. 

Cass Sunstein makes this argument in his keynote address for the second day of the Stigler Center symposium, and also in a short article written to accompany the event.  He points to a number of specific regulations: for example, rules requiring accessibility to public areas for those with disabilities; or rules specifying the rights that airline passengers have when a flight is overbooked; or the rules that now require rear-view cameras in all motor vehicles; or rules that require the Post Office to collect data on certain packages arriving from overseas as part of the effort to reduce imports of opioids. With these and many other examples in mind, Sunstein writes: 

Stigler offered, but did not adequately defend, the proposition that “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit.” That proposition is false. As a rule, regulation is not acquired by the industry, and it is not designed and operated for its benefit (primarily or otherwise). … Surely there are such examples, but they are not “the general rule.” I conclude that the success and influence of Stigler’s argument owed a great deal, not to its accuracy, but to its iconoclasm, its sense of knowingness, its smarter-than-thou, cooler-than-thou cynicism (appealing to many), its mischieviousness, and its partial (!) truth.

Sunstein is fully aware of the politics behind regulation. But rather than defaulting to the conclusion that all regulations are captured by industry, he suggests instead a focus on why regulators hold the beliefs that they do. When asking whether regulators are right or wrong, Sunstein writes: 

But why, exactly, do they believe such things? There are two main answers. The first involves the information they receive: What do they learn, and from whom do they learn it? In some cases, “the industry” is relevant; in other cases, journalists matter; political parties, public interest groups, think tanks, and academics might matter as well. Some regulators live in echo chambers; others do not. In many cases, we might well be able to speak of “epistemic capture,” which occurs not when regulators are literally pressured (threatened or promised), but when what they believe to be true is only a subset of the truth, or not true at all. The second main answer involves the motivations of the regulators themselves. What do they want to believe, and what do they want to dismiss? …  Understanding what people end up hearing and crediting, and also what they want to hear and credit, would enable us to make real progress in specifying the mechanisms that lead to regulation.

As Lancieri and Zingales wrote in their essay, “Stigler’s enduring legacy was opening the door for the political analysis of regulation.” Perhaps today it seems obvious to everyone that political analysis of regulation is a useful and important task. But it wasn’t obvious to everyone a half century ago.

International Trade and Economic Disruption in Context

Paul Romer (Nobel ’18) offered a pithy aphorism a few years ago: “Everyone wants progress. Nobody wants change.” But of course, the process of economic progress is inevitably lumpy. It doesn’t smoothly affect everyone in the same way. International trade is one part of the process of economic progress, but nobody wants change. Adam S. Posen pushed back against the resulting dynamic in “The Price of Nostalgia: America’s Self-Defeating Economic Retreat” (Foreign Affairs, May/June 2021). He writes:

There is a popular notion that the United States has been sacrificing justice in the name of economic efficiency, and so it is time to correct the imbalance by stepping back from globalization. This is a largely false narrative. The United States has been withdrawing from the world economy for 20 years, and for most of that time, U.S. economic dynamism has been falling, and inequality in the country has risen more than it has in economies that were opening up. Workers are less mobile. Fewer businesses have been started. Corporate power has grown more concentrated. Innovation has slowed. Although many factors have contributed to this decline, it has likely been reinforced by the United States’ retreat from global economic exposure. 

There’s a lot to reflect on in the article, and there are parts of it I would agree with more than others. Here, I want to emphasize two of the points that Posen makes.

One is that there are a wide variety of reasons for economic disruption and job loss: new technology, domestic competition, lousy management, shifts in consumer preferences for goods and services, and many more. For a huge and well-diversified economy like the United States, with its enormous internal market, the disruptions related to international trade are a relatively small part of the picture. Posen writes: 

After much debate, economists have agreed on an upper-bound estimate of the number of U.S. manufacturing jobs that were lost as a result of Chinese competition after 1999: two million, at most, out of a workforce of 150 million. In other words, from 2000 to 2015, the China shock was responsible for displacing roughly 130,000 workers a year. That amounts to a sliver of the average churn in the U.S. labor market, where about 60 million job separations typically take place each year. Although approximately a third of those total job separations are voluntary in an average year, and others are due to individual circumstances, at least 20 million a year are due to business closures, restructurings, or employers moving locations. Think of the flight of jobs from inner cities or the displacement of secretarial and office workers due to technology—losses that, for the workers affected, are no different in terms of local impact and finality than the manufacturing job losses resulting from foreign competition. In other words, for each manufacturing job lost to Chinese competition, there were roughly 150 jobs lost to similar-feeling shocks in other industries. But these displaced workers got less than a hundredth of the public mourning.

An American who loses his job to Chinese competition is no more or less deserving of support than one who loses his job to automation or the relocation of a plant to another state. Many jobs are unsteady. The disproportionate outcry about the effect of Chinese trade ignores the experiences of the many more lower-wage workers who experience ongoing churn, and it forgets the way that previous generations of workers were able to adapt when they lost their jobs to foreign competition.
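The ratio in the first paragraph of the excerpt checks out; a quick sketch of the arithmetic, using only the numbers Posen gives:

```python
# Check Posen's ratio of other involuntary job losses to China-shock losses.
china_shock_jobs = 2_000_000     # upper-bound estimate of losses, 2000-2015
years = 15
china_losses_per_year = china_shock_jobs / years    # roughly 130,000 a year
other_involuntary = 20_000_000   # annual losses from closures, restructurings, moves
ratio = other_involuntary / china_losses_per_year
print(round(ratio))
# prints: 150
```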

The other point is that the rest of the world is going ahead with globalization. Around the rest of the world, export/GDP ratios have generally bounced back after the decline during the Great Recession, but not in the US. Meanwhile, countries within the European Union are extremely open to trade with each other, and becoming more so. The European Union has expanded by 13 countries since 2000, and is signing trade agreements with the rest of the world. China is maintaining high involvement with the rest of the global economy, too.

It’s grimly amusing to me that many Americans who are quite supportive of social democratic policies in Scandinavian and other European countries often don’t seem to agree with those countries when it comes to the importance of open trade. Posen writes:

 Since World War II, the United States has approached international economic integration as something it encouraged others to do. Trade deals were framed as being about foreign countries opening their markets and reforming their economies through competition. For a long time, this narrative was largely true. It had the unfortunate effect domestically, however, of characterizing the United States as open and the rest of the world as protectionist. The competition that U.S. firms faced from abroad was seen as the result of unfair trade. Those perceptions have now outlasted the reality. It is the United States that needs foreign pressure and inspiration.

An Economics with Verbs, Not Just Nouns

W. Brian Arthur re-opens some old questions about the discipline of economics and the role of mathematics with fresh language in “Economics in Nouns and Verbs” (April 5, 2021, preprint at arXiv). Here’s a flavor of his argument:

I will argue that economics, as expressed via algebraic mathematics, deals only in nouns—quantifiable nouns—and does not deal with verbs (actions), and that this has deep consequences for what we see in the economy and how we theorize about it. …

Let me begin by pointing out that economics deals with prices, quantities produced, consumption, rates of interest, rates of exchange, rates of inflation, unemployment levels, trade surpluses, GDP, financial assets, Gini coefficients. These are all nouns. In fact, they are all quantifiable nouns—amounts of things, levels of things, rates of things. Economics as it is formally expressed is about amounts and levels and rates, and little else. This  statement seems simple, trite almost, but it is only obvious when you pause to notice it. Nouns are the water economics swims in.

Of course in the real economy there are actions. Investors, producers, banks, and consumers act and interact incessantly. They trade, explore, forecast, buy, sell, ponder, adapt, invent, bring new products into being, start companies. And these of course are actions—verbs. Parts of economics—economic history, or business reporting—do deal with actions. But in formal discourse about the economy, in the theory we learn and the models we create and the statistics we report, we deal not with verbs but with nouns. If companies are indeed started, economic models reflect this as the number of companies started. If people invest, models reflect this as the amount of investment. If central banks intervene, they reflect this by the quantity of intervention. Formal economics is about nouns and reduces all activities to nouns.

You could say that is its mode of understanding, its vocabulary of expression. Perhaps this is just a curiosity and doesn’t matter. And maybe it’s necessary that to be a science economics needs to deal with quantifiable objects—nouns. But other sciences heavily use verbs and actions. In biology DNA replicates itself, corrects copying errors in its strands, splits apart, and transfers information to RNA to express genes. These are verbs, all. Biology—modern molecular biology, genomics, and proteomics—is based squarely on actions. Indeed biology would be hard to imagine without actions—events triggering events, events inhibiting events. … 

Any economist will have some immediate reactions here, several of which are anticipated by Arthur. 

The reader may object that mathematics in economics does use verbs: agents maximize; they learn and adapt; rank preferences; decide among alternatives; adjust supply to meet demand. But the verbs here are an illusion; algebraic mathematics doesn’t allow them, so they are quickly finessed into noun quantities. We don’t actually see agents maximizing in the theory; we see the necessary conditions of their maximizing expressed as noun-equations. We don’t see them learning; we see some parameter changing in value as they update their beliefs. We don’t see producers deciding levels of production; we see quantities determined via some optimizing rule. We don’t see producers responding to demand via manufacturing actions, we see quantities adjusting. It might appear that dynamics in the economy are an exception—surely they must contain verbs. But expressed in differential equations, they too are just changes in noun quantities. Verbs in equation-based theory require workarounds.

It should be said that Arthur has a point. The equations in economics often have a "black box" component; that is, you can't see what's going on inside. For example, firms in a standard economic model have a "production function," which shows that when certain levels of inputs are used, certain levels of outputs emerge. But on the subject of how production actually works, the equation is silent.

More important, the production function is also silent on how the process of production interacts with new technologies and with the offering of new products to consumers. When it comes to issues like the underlying causes of productivity growth in an economy, or the economic development of low- and middle-income countries, these black-box production functions (at least in their basic versions) don't capture what's actually happening.

So what is Arthur's alternative? As an economic theorist with extensive mathematical training, he suggests that economists open themselves up to an alternative kind of math: the mathematics of algorithms. He writes:

The reason algorithms handle processes well is because each individual instruction, each step, can signal an action or process. Also, and here is where process enters par excellence, they allow if-then conditions. If process R has been executed 100 times, then execute process L; if not, then execute process H. Algorithms can contain processes that call or trigger other processes, inhibit other processes, are nested within processes, indeed create other processes. And so they provide a natural language for processes, much as algebra provides a natural language for noun-quantities. Frequently algorithms include equations, and so sometimes we can think of algorithmic systems as equation-based ones with if-then conditions. As such, algorithmic systems generalize equation-based ones, and they give us a new mode, a new language of expression in economics, although one that may look different from what we’re used to. …

The world revealed here is not one of rational perfection, nor is it mechanistic. If anything it looks distinctly biological. Its agents are constantly acting and reacting within a situation—an “ecology” if you like—brought about by other agents acting and reacting. Algorithmic expression allows novel, unthought of behaviors, novel formations, structural change from within—it allows creation. It gives us a world alive, constantly creating and re-creating itself.

Arthur and others who work in "complexity economics" have been creating and working with these kinds of models for decades. As Arthur writes in this paper, such models can be viewed as "simulations" or "laboratory experiments": given certain starting points and behavioral rules, if you let the algorithm run many times, what kinds of outcomes are more or less likely to emerge?
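To give a concrete flavor of such a "laboratory experiment" (my own sketch, not code from Arthur's essay), here is a Pólya urn, a process closely associated with Arthur's earlier work on increasing returns. The model is a single if-then rule rather than an equation: draw a ball, then add another of the same color. Run the same algorithm many times from the same starting point, and the long-run share of red balls settles at a different value on every run.

```python
import random

def polya_urn(steps, red=1, blue=1, seed=None):
    """Polya urn: draw a ball at random, return it plus one more of the
    same color. Early random draws get reinforced over time -- a simple
    algorithmic model of increasing returns and path dependence."""
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < red / (red + blue):  # an if-then rule, not an equation
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Run the identical algorithm 200 times: each run locks in on a
# different long-run red share, spread widely over (0, 1).
shares = [polya_urn(10_000, seed=s) for s in range(200)]
print(min(shares), max(shares))
```

The point of the exercise is exactly what Arthur describes: the interesting output is not a single equilibrium value but the distribution of outcomes the process can evolve toward.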

All this is fair enough. My own sense is that algorithmic methods can be especially useful in showing how seemingly mild and plausible rules can sometimes lead to unexpected and even disastrous outcomes, and how small changes in initial conditions or in underlying assumptions about behavior can lead to dramatic differences in how outcomes evolve. 
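A tiny example of that sensitivity (again my own illustration, in the spirit of Granovetter's well-known threshold model of collective behavior, not anything from Arthur's essay): suppose each agent joins a protest, bank run, or fad once enough others have already joined. Shifting a single agent's threshold by one can flip the outcome from a full cascade to almost nothing.

```python
def cascade(thresholds):
    """Granovetter-style threshold model: an agent joins once the number
    of agents already participating meets or exceeds its threshold.
    Iterate until the number of participants stops growing."""
    joined = 0
    while True:
        new = sum(1 for t in thresholds if t <= joined)
        if new == joined:
            return joined
        joined = new

# Thresholds 0, 1, 2, ..., 99: each joiner tips the next agent over,
# so all 100 agents end up participating.
print(cascade(list(range(100))))              # prints 100

# Nudge one agent's threshold from 1 to 2 and the chain breaks:
# only the zero-threshold agent ever joins.
print(cascade([0, 2] + list(range(2, 100))))  # prints 1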

But that said, algorithmic models still involve reducing the real world to formal mathematical rules and still require specifying a set of assumptions. It's not obvious that algorithms do a better job of looking inside the "black box" of how production happens, or how it evolves toward higher productivity and output, than conventional economic methods do.
Thus, by the end of the essay, Arthur seems to be backpedaling from grand claims. He writes: "As a means of understanding, algorithmic expression need not replace equation-based expression in economics, but can take its place alongside it as a parallel language." He says: "I do not believe algorithmic expression is a panacea in economics. It can include heterogeneous agents with 'non-rational' behavior that is context dependent, detailed and therefore more realistic. But it does not easily capture the 'humanness' of economic life, its emotionality, its intuitive nature, its personages, its very style. For this we would need other means." What starts off sounding like a frontal assault on the methods of economics ends up as a quiet plea for openness to an expanded set of mathematical tools.