Tips for Academic Writing from Cormac McCarthy

In 2007, Cormac McCarthy won the Pulitzer Prize in Fiction for The Road. Little did I know that he was a fellow editor of academic writing. Van Savage and Pamela Yeh provide the background in "Novelist Cormac McCarthy's tips on how to write a great science paper: The Pulitzer prizewinner shares his advice for pleasing readers, editors and yourself" (Nature, September 26, 2019). They note: "For the past two decades, Cormac McCarthy — whose ten novels include The Road, No Country for Old Men and Blood Meridian — has provided extensive editing to numerous faculty members and postdocs at the Santa Fe Institute (SFI) in New Mexico."

For a sense of the incongruity here, this is the book jacket copy for The Road, via the Pulitzer website: 

A father and his son walk alone through burned America. Nothing moves in the ravaged landscape save the ash on the wind. It is cold enough to crack stones, and when the snow falls it is gray. The sky is dark. Their destination is the coast, although they don't know what, if anything, awaits them there. They have nothing; just a pistol to defend themselves against the lawless bands that stalk the road, the clothes they are wearing, a cart of scavenged food–and each other.

The Road is the profoundly moving story of a journey. It boldly imagines a future in which no hope remains, but in which the father and his son, "each the other's world entire," are sustained by love. Awesome in the totality of its vision, it is an unflinching meditation on the worst and the best that we are capable of: ultimate destructiveness, desperate tenacity, and the tenderness that keeps two people alive in the face of total devastation.

Doesn't sound exactly aligned with the cutting-edge scientific research across many fields on the themes of complexity, adaptation, and emergent properties that are emphasized at the Santa Fe Institute. But for what it's worth (and the Santa Fe Institute does have a healthy group of economists), here are some bits of McCarthy's advice as summarized by Savage and Yeh, with more bullet points at the linked article in Nature.

• Use minimalism to achieve clarity. While you are writing, ask yourself: is it possible to preserve my original message without that punctuation mark, that word, that sentence, that paragraph or that section? Remove extra words or commas whenever you can. … 

• Don’t slow the reader down. Avoid footnotes because they break the flow of thoughts and send your eyes darting back and forth while your hands are turning pages or clicking on links. Try to avoid jargon, buzzwords or overly technical language. And don’t use the same word repeatedly — it’s boring. … 

• And don’t worry too much about readers who want to find a way to argue about every tangential point and list all possible qualifications for every statement. Just enjoy writing. … 

• Inject questions and less-formal language to break up tone and maintain a friendly feeling. Colloquial expressions can be good for this, but they shouldn’t be too narrowly tied to a region. Similarly, use a personal tone because it can help to engage a reader. Impersonal, passive text doesn’t fool anyone into thinking you’re being objective: “Earth is the centre of this Solar System” isn’t any more objective or factual than “We are at the centre of our Solar System.” …

• After all this, send your work to the journal editors. Try not to think about the paper until the reviewers and editors come back with their own perspectives. When this happens, it’s often useful to heed Rudyard Kipling’s advice: “Trust yourself when all men doubt you, but make allowance for their doubting too.” Change text where useful, and where not, politely explain why you’re keeping your original formulation.

• And don’t rant to editors about the Oxford comma, the correct usage of ‘significantly’ or the choice of ‘that’ versus ‘which’. Journals set their own rules for style and sections. You won’t get exceptions.

The Health Costs of Global Air Pollution

The State of Global Air 2019 report notes:

Air pollution (ambient PM2.5, household, and ozone) is estimated to have contributed to about 4.9 million deaths (8.7% of all deaths globally) and 147 million years of healthy life lost (5.9% of all DALYs [disability-adjusted life years] globally) in 2017. The 10 countries with the highest mortality burden attributable to air pollution in 2017 were China (1.2 million), India (1.2 million), Pakistan (128,000), Indonesia (124,000), Bangladesh (123,000), Nigeria (114,000), the United States (108,000), Russia (99,000), Brazil (66,000), and the Philippines (64,000). 

Air pollution ranks fifth among global risk factors for mortality, exceeded only by behavioral and metabolic factors: poor diet, high blood pressure, tobacco exposure, and high blood sugar. It is the leading environmental risk factor, far surpassing other environmental risks that have often been the focus of public health measures in the past, such as unsafe water and lack of sanitation. … Air pollution collectively reduced life expectancy by 1 year and 8 months on average worldwide, a global impact rivaling that of smoking. This means a child born today will die 20 months sooner, on average, than would be expected in the absence of air pollution.

The report is written by the Health Effects Institute, a Boston-based think tank, and the Institute for Health Metrics and Evaluation, an independent health research center based at the University of Washington. The report focuses on three aspects of air pollution, with occasional references to other measures: "fine particle pollution — airborne particulate matter measuring less than 2.5 micrometers in aerodynamic diameter, commonly referred to as PM2.5"; ground-level (tropospheric) ozone; and household air pollution that arises when "people burn solid fuels (such as coal, wood, charcoal, dung, and other forms of biomass, like crop waste) to cook food and to heat and light their homes."

Here's a map showing patterns of particulate pollution around the world. Clearly, the severity of this issue is worst in a band running from Africa across the Middle East to south and east Asia.

When it comes to ozone:

Most ground-level ozone pollution is produced by human activities (for example, industrial processes and transportation) that emit chemical precursors (principally, volatile organic compounds and nitrogen oxides) to the atmosphere, where they react in the presence of sunlight to form ozone. Exposure to ground-level ozone increases a person’s likelihood of dying from respiratory disease, specifically chronic obstructive pulmonary disease. …

The pattern of ozone exposures by level of sociodemographic development differs markedly from the patterns seen with PM2.5 and household air pollution. The more developed regions, like North America, also continue to experience high ozone exposures in the world, despite extensive and successful air quality control for ozone-related emissions in many of these countries.

On the topic of household air pollution:

In 2017, 3.6 billion people (47% of the global population) were exposed to household air pollution from the use of solid fuels for cooking. These exposures were most common in sub-Saharan Africa, South Asia, and East Asia … Figure 7 shows the 13 countries with populations over 50 million in which more than 10% of the population was exposed to household air pollution. Because these countries have such large populations, the number of people exposed can be substantial even if the proportion exposed is low. An estimated 846 million people in India (60% of the population) and 452 million people in China (32% of the population) were exposed to household air pollution in 2017. …

While the contribution of household air pollution to ambient air pollution varies by location and has not been calculated for most countries, one recent global estimate suggested that residential energy use, broadly defined, contributed approximately 21% of global ambient PM2.5 concentrations. Another study estimated that residential energy use contributed approximately 31% of global outdoor air pollution–related mortality. …

The world is making progress in some areas in reducing air pollution, as in China:

A separate analysis of air quality and related health impacts in 74 Chinese cities recently found that annual average PM2.5 concentrations fell by one-third from 2013 to 2017, a significant achievement. The study also showed a 54% reduction in sulfur dioxide concentrations and a 28% drop in carbon monoxide. However, challenges remain. In 2017, … approximately 852,000 deaths were attributable to PM2.5 exposures in China. Ozone exposures have also remained largely untouched by the actions taken in China to date, and the GBD [Global Burden of Disease] project attributed an additional 178,000 chronic respiratory disease–related deaths in China in 2017 to ozone.

In the US, steady progress has been made over several decades in reducing the "criteria" air pollutants: carbon monoxide, lead, nitrogen dioxide, ozone, particulate matter (PM10), particulate matter (PM2.5), and sulfur dioxide. However, there has been some backsliding in the last couple of years. Or when it comes to household air pollution: "Globally, the proportion of households relying on solid fuels for cooking dropped from about 57% in 2005 to 47% in 2017."

But with progress duly noted, we're still talking about estimates of nearly 5 million deaths per year from air pollution, with over 100,000 of those deaths happening in the United States. Moreover, as the report notes: "Air pollution takes its greatest toll on people age 50 and older, who suffer the highest burden from noncommunicable air pollution–related diseases such as heart disease, stroke, lung cancer, diabetes, and COPD [Chronic Obstructive Pulmonary Disease]." An aging society is going to experience greater health costs from pollution.

It is perhaps worth noticing that none of the health costs here are related to climate change, nor do these health costs of air pollution lie a few decades or a century in the future. However, taking steps to reduce these conventional air pollutants will often also have the effect of reducing carbon emissions. Instead of refighting the trench warfare of the climate change policy debates over and over again, perhaps it would make sense to emphasize the value of steps to reduce these immediate health costs, and then accept fewer greenhouse gas emissions as a highly desirable side benefit.

Interview with Emmanuel Farhi: Global Safe Assets and Macro as Aggregated Micro

David A. Price interviews Emmanuel Farhi in Econ Focus (Federal Reserve Bank of Richmond, Second/Third Quarter 2019, pp. 18-23). Here are some tidbits:

On global safe assets

If you look at the world today, it's very much still dollar-centric … The U.S. is really sort of the world banker. As such, it enjoys an exorbitant privilege and it also bears exorbitant duties. Directly or indirectly, it's the pre-eminent supplier of safe and liquid assets to the rest of the world. It's the issuer of the dominant currency of trade invoicing. And it's also the strongest force in global monetary policy as well as the main lender of last resort.

If you think about it, these attributes reinforce each other. The dollar's dominance in trade invoicing makes it more attractive to borrow in dollars, which in turn makes it more desirable to price in dollars. And the U.S. role as a lender of last resort makes it safer to borrow in dollars. That, in turn, increases the responsibility of the U.S. in times of crisis. All these factors consolidate the special position of the U.S.

But I don't think that it's a very sustainable situation. More and more, this hegemonic or central position is becoming too much for the U.S. to bear.

The global safe asset shortage is a manifestation of this limitation. In my view, there's a growing and seemingly insatiable global demand for safe assets. And there is a limited ability to supply them. In fact, the U.S. is the main supplier of safe assets to the rest of the world. As the size of the U.S. economy keeps shrinking as a share of the world economy, so does its ability to keep up with the growing global demand for safe assets. The result is a growing global safe asset shortage. It is responsible for the very low levels of interest rates that we see throughout the globe. And it is a structural destabilizing force for the world economy. …

In my view, the global safe asset shortage echoes the dollar shortage of the late 1960s and early 1970s. At that time, the U.S. was the pre-eminent supplier of reserve assets. The global demand for reserve assets was growing because the rest of the world was growing. And that created a tension, which was diagnosed by Robert Triffin in the early '60s: Either the U.S. would not satisfy this growing global demand for reserve assets, and this lack of liquidity would create global recessionary forces, or the U.S. would accommodate this growing global demand for reserve assets, but then it would have to stretch its capacity and expose itself to the possibility of a confidence crisis and of a run on the dollar. In fact, that is precisely what happened. Eventually, exactly like Triffin had predicted, there was a run on the dollar. It brought down the Bretton Woods system: The dollar was floated and that was the end of the dollar exchange standard.

Today, there is a new Triffin dilemma: Either the U.S. does not accommodate the growing global demand for safe assets, and this worsens the global safe asset shortage and its destabilizing consequences, or the U.S. accommodates the growing global demand for safe assets, but then it has to stretch itself fiscally and financially and thereby expose itself to the possibility of a confidence crisis. …

Basically, I think that the role of the hegemon is becoming too heavy for the U.S. to bear. And it's only a matter of time before powers like China and the eurozone start challenging the global status of the dollar as the world's pre-eminent reserve and invoicing currency. It hasn't happened yet. But you have to take the long view here and think about the next decades, not the next five years. I think that it will happen.

For a readable overview of Farhi's views on global safe assets, a useful start is "The Safe Assets Shortage Conundrum," which he wrote with Ricardo J. Caballero and Pierre-Olivier Gourinchas, in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 29-46).

On some implications for public finance if many economic agents aren't fully rational and don't pay full attention to taxes

There is a basic tenet of public taxation called the dollar-for-dollar principle of Pigouvian taxation. It says that if the consumption of a particular good generates a dollar of negative externality, you should impose a dollar of tax to correct for this exter­nality. For example, if consuming one ton of carbon generates a certain number of dollars of externalities, you should tax it by that many dollars.

But that relies on the assumption that firms and households correctly perceive this tax. If they don't — maybe they aren't paying attention — then you have to relax this principle. For example, if I pay 50 percent attention to the tax, the tax needs to be twice as big. That's a basic tenet of public finance that is modified when you take into account that agents are not rational.

In public finance, there is also a traditional presumption that well-calibrated Pigouvian taxes are better than direct quantity restriction or regulations because they allow people to express the intensity of their preferences. Recognizing that agents are behavioral can lead you to overturn this prescription. It makes it hard to calibrate Pigouvian taxes, and it also makes them less efficient. Cruder and simpler remedies, such as regulations on gas mileage, are more robust and become more attractive.
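Farhi's attention adjustment is simple enough to put in a few lines of code. This is a minimal sketch of the idea, not anything from the interview; the function name and interface are my own.

```python
def corrective_tax(externality_per_unit, attention):
    """Tax per unit needed so the *perceived* tax equals the externality.

    attention is the fraction of the tax that agents actually notice,
    with 0 < attention <= 1. Full attention recovers the classic
    dollar-for-dollar Pigouvian rule.
    """
    if not 0 < attention <= 1:
        raise ValueError("attention must be in (0, 1]")
    return externality_per_unit / attention

# Full attention: a $1 externality calls for a $1 tax.
print(corrective_tax(1.0, 1.0))  # 1.0
# Farhi's example: 50 percent attention doubles the required tax.
print(corrective_tax(1.0, 0.5))  # 2.0
```

Note how inattention makes the required tax blow up as attention falls toward zero, which is one intuition for why crude regulations can beat price instruments when agents barely notice prices.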

Aggregate production functions, the disaggregation problem, and the Cambridge-Cambridge controversy

There's an interesting episode in the history of economic thought. It's called the Cambridge-Cambridge controversy. It pitted Cambridge, Massachusetts — Solow, Samuelson, people like that — against Cambridge, U.K. — Robinson, Sraffa, Pasinetti. The big debate was the use of an aggregate production function.

Bob Solow had just written his important article on the Solow growth model. That's the basic paradigm in economic growth. To represent the possibility frontiers of an economy, he used an aggregate production function. What the Cambridge, U.K., side attacked about this was the idea of one capital stock, one number. They argued that capital was very heterogeneous. You have buildings, you have machines. You're aggregating them up with prices into one capital stock. That's dodgy.

It degenerated into a highly theoretical debate about whether or not it's legitimate to use an aggregate production function and to use the notion of an aggregate capital stock. And the Cambridge, U.K., side won. They showed that it was very problematic to use aggregate production functions. Samuelson conceded that in a beautiful paper constructing a disaggregated model that you could not represent with an aggregate production function and one capital stock.

But it was too exotic and too complicated. It went nowhere. The profession moved on. Today, aggregate production functions are pervasive. They are used everywhere and without much questioning. One of the things David [Baqaee] and I are trying to do is to pick up where the Cambridge-Cambridge controversy left off. You really need to start with a completely disaggregated economy and aggregate it up. …

We have a name for our vision. We call it "macro as explicitly aggregated micro." The idea is you need to start from the very heterogeneous microeconomic environment to do justice to the heterogeneity that you see in the world and aggregate it up to understand macroeconomic phenomena. You can't start from macroeconomic aggregates. You really want to understand the behavior of economic aggregates from the ground up.

For example, you can't just come up with your measure of aggregate TFP [total factor productivity] and study that. You need to derive it from first principles. You need to understand exactly what aggregate TFP is. I talked about aggregate TFP and markups, but the agenda is much broader than that. It bears on the elasticity of substitution between factors: between capital and labor, or between skilled labor, unskilled labor, and capital. It bears on the macroeconomic bias of increasing automation. It bears on the degree of macroeconomic returns to scale underlying endogenous growth. It bears on the gains from trade and the impact of tariffs. In short, it is relevant to the most fundamental concepts in macroeconomics.

For a retrospective recounting of what happened in the Cambridge-Cambridge controversies, a useful starting point is Avi J. Cohen and G. C. Harcourt. 2003. "Retrospectives: Whatever Happened to the Cambridge Capital Theory Controversies?" Journal of Economic Perspectives, 17 (1): 199-214.

Fentanyl and Synthetic Opioids: What's Happening, What's Next

The US had 50,000 opioid-involved overdose deaths in 2019. This is similar to the number of people who died of AIDS at the peak of that crisis in 1995. For comparison, total deaths in car crashes are about 40,000 per year. My dark suspicion is that the opioid crisis gets less national media attention because its worst effects are concentrated in parts of Appalachia, New England, and certain mid-Atlantic states, rather than in big coastal cities.

For a thoughtful overview of the topic, I recommend The Future of Fentanyl and Other Synthetic Opioids, a book by Bryce Pardo, Jirka Taylor, Jonathan P. Caulkins, Beau Kilmer, Peter Reuter, and Bradley D. Stein (RAND Corporation, 2019). Here are some points that caught my eye, but there's a lot more detail in the book.

"Although the media and the public describe an opioid epidemic, it is more accurate to think of it as a series of overlapping and interrelated epidemics of pharmacologically similar substances—the opioid class of drugs. Ciccarone (2017, p. 107) refers to a “triple wave epidemic”: The first wave was prescription opioids, the second wave was heroin, and the third—and ongoing—wave is synthetic opioids, such as fentanyl."

Fentanyl and other synthetic opioids have been around for decades. Indeed, the book describes four previous US episodes in which a localized surge of production was followed by a number of deaths. What makes it different this time around? The answer seems to be production of very cheap and powerful synthetic opioids in China. The report notes (citations and footnotes omitted):

The current wave of overdoses is largely attributable to illicitly manufactured fentanyl. Most of the fentanyl and novel synthetic opioids in U.S. street markets—as well as their precursor chemicals—originate in China, where the regulatory system does not effectively police the country’s expansive pharmaceutical and chemical industries. According to federal law enforcement, synthetic opioids arrive in U.S. markets directly from Chinese manufacturers via the post, private couriers (e.g., UPS, FedEx), cargo, by smugglers from Mexico, or by smugglers from Canada after being pressed into counterfeit prescription pills. … The U.S. Drug Enforcement Administration (DEA) suggests that some portion of fentanyl might be produced in Mexico using precursors from China. …

China’s large and underregulated pharmaceutical and chemical industries create opportunities for anyone with access to the inputs to synthesize fentanyl or manufacture precursors. Mexican DTOs, which have a history of importing methamphetamine precursors from China, are now importing fentanyl precursors. Today, illicit fentanyl is no longer manufactured by a single producer in a clandestine laboratory. … China’s economy, particularly its pharmaceutical and chemical industries, have grown at levels that outpace regulatory oversight, allowing suppliers to avoid regulatory scrutiny and U.S. law enforcement. 

A related issue is that fentanyl and the synthetic opioids coming from China are extremely powerful and, in terms of morphine-equivalent dose (MED), much cheaper than heroin.

Synthetic opioids coming from China are much cheaper than Mexican heroin on a potency-adjusted basis … Recent RAND Corporation research identified multiple Chinese firms that are willing to ship 1 kg of nearly pure fentanyl to the United States for $2,000 to $5,000. In terms of the morphine-equivalent dose (MED; a common method of comparing the strength of different opioids), a 95-percent pure kg of fentanyl at $5,000 would generally equate to less than $100 per MED kg. For comparison, a 50-percent pure kg of Mexican heroin that costs $25,000 when exported to the United States would equate to at least $10,000 per MED kg. Thus, heroin appears to be at least 100 times more expensive than fentanyl in terms of MED at the import level.
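The report's price comparison can be reproduced with back-of-the-envelope arithmetic. The prices and purities below are the report's; the potency multipliers relative to morphine (roughly 100x for fentanyl, roughly 2.5x for heroin) are rules of thumb I am assuming, not figures from the report.

```python
def price_per_med_kg(price_per_kg, purity, potency_vs_morphine):
    """Price per kilogram of morphine-equivalent dose (MED)."""
    return price_per_kg / (purity * potency_vs_morphine)

# Fentanyl: $5,000 for a 95-percent pure kilogram, ~100x morphine.
fentanyl = price_per_med_kg(5_000, 0.95, 100)
# Mexican heroin: $25,000 for a 50-percent pure kilogram, ~2.5x morphine.
heroin = price_per_med_kg(25_000, 0.50, 2.5)

print(round(fentanyl))           # ~53, under the report's $100 figure
print(round(heroin))             # 20000, above the report's $10,000 floor
print(round(heroin / fentanyl))  # ~380, comfortably over the 100x claim
```

Under these assumed potencies the gap is even wider than the report's conservative "at least 100 times" phrasing.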

Given that fentanyl and synthetic opioids are extremely cheap and powerful, transporting them becomes much simpler, even by regular mail. 

Likewise, rising trade and e-commerce originating in China since 2011 facilitate the diffusion of potent synthetic opioids. Drug distribution has been further facilitated by the advent of cryptocurrencies and anonymous browsing software. …

Fentanyl's potency-to-weight ratio makes it ideal for smuggling. A small amount of fentanyl can be easily concealed through traditional conveyances, packed in vehicles or hidden on the person. The supply of minute amounts of fentanyl through mail and private package services is profitable to someone who can redistribute it to local markets; even an ounce of fentanyl can substitute for 1 kg of heroin. … In 2011, postal services of the United States and China entered into an agreement to streamline mail delivery and reduce shipping costs for merchandise originating in China. This “ePacket” service is designed for shipping consumer goods (under 2 kg) from China directly and rapidly to customers ordering items online … In FY 2012, USPS handled about 27 million ePackets from China. This increased to nearly 500 million ePackets by 2017. This figure does not include items from China arriving by cargo or private consignment operators, such as DHL or FedEx. …

For reference, if the total U.S. heroin market was on the order of 45 pure metric tons (45,000 kg; Midgette et al., 2019) before fentanyl and if fentanyl is 25 times more potent than heroin, then it would only take 1,800 1-kg parcels to supply the same amount of MEDs to meet the demand for the entire U.S. heroin market. …
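The report's parcel arithmetic is easy to verify, using only the figures quoted above:

```python
# The report's back-of-the-envelope calculation on replacing the
# entire pre-fentanyl U.S. heroin market with mailed parcels.
heroin_market_kg = 45_000    # ~45 pure metric tons (Midgette et al., 2019)
relative_potency = 25        # fentanyl taken as 25x as potent as heroin
parcel_kg = 1                # a standard 1-kg mailed parcel

parcels_needed = heroin_market_kg / (relative_potency * parcel_kg)
print(int(parcels_needed))   # 1800
```

Fewer than two thousand parcels a year, against hundreds of millions of ePackets, gives a sense of how hopeless interdiction by mail inspection is.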

Today, shipping costs from China are negligible. A 1-kg parcel can be shipped from China to the United States for as little as $10 through the international postal system or for $100 by private consignment operator.

Fentanyl and synthetic opioids present a number of challenges for conventional approaches to drug enforcement. For example, typical approaches assume that users of a certain drug have a demand for it, and enforcement efforts can raise the price to reduce that demand. For those who become addicted, one can then offer treatment, or some advocate for offering zones for safer use of the drug.

But fentanyl and other synthetic opioids are different. Fentanyl is so much cheaper than other opioids that even if additional enforcement were able to drive the price up five-fold or ten-fold, it would still be extraordinarily cheap. It spreads not so much because users have a demand for these products, but because suppliers are cutting costs. The report says: "Indeed, many of fentanyl’s victims did not want or even know that they were using it."

Treatment for prescription opioids or heroin often recognizes that the first round of treatment may not work, but repeated efforts can eventually work for many people. But fentanyl and synthetic opioids are deadly enough that more people are going to die while these cycles of treatment are ongoing, so while such treatment may still be cost-effective, its success rate will be lower. Here's a discussion of the experience of treatment and harm reduction programs in Vancouver, when they collided with fentanyl:

Fentanyl’s challenge to treatment and harm reduction is etched starkly in Vancouver’s death rate. Few cities have embraced treatment and harm reduction more energetically than Vancouver. Before fentanyl, that seemed to have worked well; HIV/AIDS was contained and heroin overdose death rates in British Columbia fell from an average of eight per 100,000 people from 1993 to 1999 to five per 100,000 from 2000 to 2012. However, those policies, programs, and services have been challenged by fentanyl. British Columbia now has one of the highest opioid-related death rates (more than 30 per 100,000 in 2017 and 2018), which is higher than that in all but five U.S. jurisdictions. The rate in Vancouver’s health service delivery area is even higher (55 per 100,000 people). These death rates are high, not only relative to opioid overdose deaths elsewhere but also in absolute terms. It is hard for many people who are not epidemiologists to understand whether death rates of 30 or 55 per 100,000 are large or small, so it might be useful to contrast them with death rates in the United States from homicide (4.8 per 100,000) and traffic crashes (12.3 per 100,000), which are familiar, widely discussed, and often pertain to premature deaths of people.

Of course, those harm-reduction policies could be saving many lives. Presumably, death rates would be higher if not for those efforts. However, in absolute terms, the current approach is failing to cope with fentanyl.

The potential implications for public policy are worth considering. The report discusses one possible scenario in this way: 

Over the long term, it is important to acknowledge that a new era could be coming when synthetic opioids are so cheap and ubiquitous that supply control will become less cost-effective. Falling prices and a pivot to treatment and harm reduction need not be an unhappy scenario for law enforcement. Freeing law enforcement of the obligation to squelch supply across the board could allow it to focus on the most-noxious dealers and organizations and strive to minimize violence and corruption per kilogram delivered, rather than the number of kilograms supplied. In a way, this would let law enforcement focus on public safety, rather than an addiction prevention mission. Also … falling prices might reduce the amount of economic-compulsive crime committed as a means to finance drug purchase.

Previous outbreaks of the use of fentanyl and synthetic opioids going back to the 1980s had a very small number of producers and a limited distribution system, so once law enforcement shut down the producer, it was over. When it comes to trade issues with China, my own preference would be to have considerably less emphasis on tariffs for legal goods, and considerably more emphasis on coordinated efforts to shut down illegal production of synthetic opioids. But collaboration between the US and China is out of favor, and even if it could be done, the knowledge of how to make and sell fentanyl and synthetic opioids is now out there in a globalizing world economy. Stuffing that evil genie back in its bottle will be very difficult.  

At present, the harms of the US opioid crisis are somewhat concentrated geographically:

[T]he ten states with the highest synthetic opioid overdose death rates in 2017 are, in order: West Virginia, Ohio, New Hampshire, Maryland, Massachusetts, Maine, Connecticut, Rhode Island, Delaware, and Kentucky. … Although these ten states constituted 12 percent of the country’s population, they made up 35 percent of the 28,500 fatal overdoses involving synthetic opioids in 2017. Ohio’s share of fatalities alone was almost 12.5 percent, while the state made up about 3.5 percent of the country’s total population.

An alarming implication here is that if synthetic opioids break out of these 10 states and have a similar effect elsewhere, the already awful death toll could multiply.

The Rise of Global Trade in Services

Our mental images of "global trade" are usually about goods: cars and steel, computers and textiles, oil and home appliances, and so on. But in the next few decades, most of the action in terms of increasing global trade is likely to be in services, not goods. More and more of the effects of trade on jobs are going to involve services, too. However, most of us are not used to thinking about countries importing and exporting transportation services, financial services, tourism, construction, health care and education services, and many others across national borders. The 2019 World Trade Report from the World Trade Organization focuses on the theme "The future of services trade." Here are some tidbits from the report (citations and references to figures omitted):

Services now seem to be transforming international trade in similar ways. Although they still only account for one fifth of cross-border trade, they are the fastest growing sector. While the value of goods exports has increased at a modest 1 per cent annually since 2011, the value of commercial services exports has expanded at three times that rate, 3 per cent. The services share of world trade has grown from just 9 per cent in 1970 to over 20 per cent today – and this report forecasts that services could account for up to one-third of world trade by 2040. This would represent a 50 per cent increase in the share of services in global trade in just two decades.

There is a common perception that globalization is slowing down. But if the growing wave of services trade is factored in – and not just the modest increases in merchandise trade – then globalization may be poised to speed up again.

Of course, high-income countries around the world already have most of their GDP in the form of services. But it's not as widely recognized that emerging market economies already have a majority of their output in services, too, or very close to it.

Services already accounted for 76 per cent of GDP in advanced economies in 2015 – up from 61 per cent in 1980 – and this share seems likely to rise. In Japan, for example, services represent 68 per cent of GDP; in New Zealand, 72 per cent; and in the US, almost 80 per cent. 

Emerging economies, too, are becoming more services-based – in some cases, at an even faster pace than advanced ones. Despite emerging as the “world’s factory” in recent decades, China’s economy is shifting dramatically into services. Services now account for over 52 per cent of GDP – a higher share than manufacturing – up from 41 per cent in 2005. In India, services now make up almost 50 per cent of GDP, up from just 30 per cent in 1970. In Brazil, the share of services in GDP is even higher, at 63 per cent. Between 1980 and 2015, the average share of services in GDP across all developing countries grew from 42 to 55 per cent.

Here's a figure showing the main services that are now being traded internationally. International trade in health care and education services is small, so far. The big areas at present are distribution services, financial services, telecom and computer services, transport services, and tourism.

Like most big economic shifts, the rise in services trade is being driven by multiple factors. One is that advances in communications and information technology are making it vastly cheaper to carry out a service in one location and then to deliver it somewhere else. Another is that services are becoming a bigger part of the output of companies that, at first glance, seem focused mainly on goods. For example, car companies produce cars. But they also make a good share of their income providing services like financing, after-sales service of cars already sold, and customizing cars according to the desires of buyers. Another shift is what has been called "premature deindustrialization," referring to the fact that much of the future output growth in manufacturing is likely to come from investments in robotics and automation, while most of the future jobs are likely to be in services. The report notes:

Just because the services sector is playing a bigger role in national economies, this does not mean that the manufacturing sector is shrinking or declining. Many advanced economies are “post-industrial” only in the sense that a shrinking share of the workforce is engaged in manufacturing. Even in the world’s most deindustrialized, services-dominated economies, manufacturing output continues to expand thanks to mechanization and automation, made possible in no small part by advanced services. For example, US manufacturing output tripled between 1970 and 2014 even though its share of employment fell from over 25 per cent to less than 10 per cent. The same pattern of rising industrial output and shrinking employment can be found in Germany, Japan and many other advanced economies. …

This line between manufacturing and services activities, which is already difficult to distinguish clearly, is becoming even more blurred across many industries. Automakers, for example, are now also service providers, routinely offering financing, product customization, and post-sales care. Likewise, on-line retailers are now also manufacturers, producing not only the computer hardware required to access their services, but many of the goods they sell on-line. Meanwhile, new processes, like 3D printing, result in products that are difficult to classify as either goods or services and are instead a hybrid of the two. This creative intertwining of services and manufacturing is one key reason why productivity continues to grow.

There are lots of issues here. For example, will the gains from trade in services end up benefiting mainly big companies, or mainly large urban areas, or will it allow small- and medium-sized firms in smaller cities or rural areas to have greater access to global markets? Many trade agreements about services are going to involve negotiating fairly specific underlying standards: for example, if a foreign insurance company or bank wants to do business in other countries, the solvency and business practices of that company will be a fair topic of investigation. 

The report offers lots of detail on services exports and imports around the world: for example, from advanced and developing countries, involving different kinds of services, as part of global value chains, the involvement of smaller and larger companies, the involvement of female and male workers, and case studies of different aspects of services trade in India, China, Kenya, Mexico, the Philippines, and others. But overall, it seems clear that an ever-larger portion of international trade is going to be arriving electronically, not in a container-shipping compartment at a border stop or a port, and all of us are going to need to wrap our minds around the implications.

Neuromyths about the Brain and Learning

"Neuromyths are false beliefs, often associated with education and learning, that stem from misconceptions or misunderstandings about brain function. Over the past decade, there has been an increasing amount of research worldwide on neuromyths in education." The Online Learning Consortium has published an international report, Neuromyths and evidence-based practices in higher education, by the team of Kristen Betts, Michelle Miller, Tracey Tokuhama-Espinosa, Patricia A. Shewokis, Alida Anderson, Cynthia Borja, Tamara Galoyan, Brian Delaney, John D. Eigenauer, and Sanne Dekker. They draw on previous surveys and information about "neuromyths" to construct their own online survey, which was sent to people in higher education. The response rate was low, as is common with online surveys, so consider yourself warned. But what's interesting to me is to read the "neuromyths" and to consider your own susceptibility to them. More details at the report itself, of course.

Homage to Bill Goffe for spotting this report. 

A Nobel for the Experimental Approach to Global Poverty for Banerjee, Duflo, and Kremer

Several decades ago, the most common ways of thinking about problems of poor people in low-income countries involved ideas like the "poverty trap" and the "dual economy." The "poverty trap" was the idea that low-income countries were close to subsistence, so it was hard for them to save and make the investments that would lead to long-term growth. The "dual economy" idea was that low-income countries had both a traditional and a modern part to their economy, but the traditional part had large numbers of subsistence-level workers. Thus, if or when the modern part of the economy expanded, it could draw on this large pool of subsistence-level workers, and so there was no economic pressure for subsistence wages to rise. In either case, a common policy prescription was that low-income countries needed a big infusion of capital, probably from a source like the World Bank, to jump-start their economies into growth.

These older theories of economic development captured some elements of global poverty, but many of their details and implications have proved unsatisfactory for the modern world. (Here's an essay on "poverty trap" thinking, and another on "dual economy" thinking.) For example, it turns out that low-income countries often do have sufficient saving to make investments in the future. Also, in a globalizing economy, flows of private investment capital, along with remittances sent back home from emigrants, far outstrip official development assistance. Moreover, there have clearly been success stories in which some low-income countries have escaped the poverty trap and the dual economy and moved to rapid growth, including China, India, other nations of east Asia, Botswana, and so on.

Of course, it remains important that low-income countries avoid strangling their own economies with macroeconomic mismanagement, overregulation, or corruption. But a main focus of thinking about economic development shifted from how to funnel more resources to these countries to what kind of assistance would be most effective for the lives of the poor. The 2019 Nobel prize in economics was awarded "for their experimental approach to alleviating global poverty" to Abhijit Banerjee, Esther Duflo, and Michael Kremer. To understand the work, the Nobel committee offers two useful starting points: a "Popular Science" easy-to-read overview called "Research to help the world's poor," and a longer and more detailed "Scientific Background" essay on "Understanding Development and Poverty Alleviation."

In thinking about the power of their research, it's perhaps useful to hearken back to long-ago discussions of basic science experiments. For example, back in 1881 Louis Pasteur wanted to test his vaccine for sheep anthrax. He exposed 50 sheep to anthrax; of those 50, half, chosen at random, had been vaccinated. The vaccinated sheep lived and the others died.
Social scientists have in some cases been able to use randomized trials in the past. As one recent example, the state of Oregon wanted to expand Medicaid coverage back in 2008, but it only had funding to cover an additional 10,000 people. It chose those people through a lottery, and thus set up an experiment about how having health insurance affected the health and finances of the working poor (for discussions of some results, see here and here). In other cases, when certain charter high schools are oversubscribed and use a lottery to choose students, it sets up a random experiment for comparing students who gained admission to those schools with those who did not.
The 2019 laureates took this idea of social science experiments and brought it to issues of poverty and economic development. They went to India and Kenya and low-income countries around the world. They arranged with state and local governments to carry out experiments where, say, 200 villages would be selected, and then 100 of those villages at random would receive a certain policy intervention. Just dealing with the logistics of making this happen–for different interventions, in different places–would deserve a Nobel prize by itself.
Many of the individual experiments focus on quite specific policies. However, as a number of these experimental results accumulate, broader lessons become clear. For example, consider the question of how to improve educational outcomes in low-income countries. Is the problem a lack of textbooks? A lack of lunches? Absent teachers? Low-quality teachers? Irregular student attendance? An overly rigid curriculum? A lack of lights at home that make it hard for students to study? Once you start thinking along these lines, you can think about randomized experiments that address each of these factors and others, separately and in various combinations. From the Nobel committee's "Popular Science Background":

Kremer and his colleagues took a large number of schools that needed considerable support and randomly divided them into different groups. The schools in these groups all received extra resources, but in different forms and at different times. In one study, one group was given more textbooks, while another study examined free school meals. Because chance determined which school got what, there were no average differences between the different groups at the start of the experiment. The researchers could thus credibly link later differences in learning outcomes to the various forms of support. The experiments showed that neither more textbooks nor free school meals made any difference to learning outcomes. If the textbooks had any positive effect, it only applied to the very best pupils. 

Later field experiments have shown that the primary problem in many low-income countries is not a lack of resources. Instead, the biggest problem is that teaching is not sufficiently adapted to the pupils’ needs. In the first of these experiments, Banerjee, Duflo et al. studied remedial tutoring programmes for pupils in two Indian cities. Schools in Mumbai and Vadodara were given access to new teaching assistants who would support children with special needs. These schools were ingeniously and randomly placed in different groups, allowing the researchers to credibly measure the effects of teaching assistants. The experiment clearly showed that help targeting the weakest pupils was an effective measure in the short and medium term.

Such experiments have been done in a wide range of contexts. For example, what about issues of improving health? 

One important issue is whether medicine and healthcare should be charged for and, if so, what they should cost. A field experiment by Kremer and co-author investigated how the demand for deworming pills for parasitic infections was affected by price. They found that 75 per cent of parents gave their children these pills when the medicine was free, compared to 18 per cent when they cost less than a US dollar, which is still heavily subsidised. Subsequently, many similar experiments have found the same thing: poor people are extremely price-sensitive regarding investments in preventive healthcare. …

Low service quality is another explanation why poor families invest so little in preventive measures. One example is that staff at the health centres that are responsible for vaccinations are often absent from work. Banerjee, Duflo et al. investigated whether mobile vaccination clinics – where the care staff were always on site – could fix this problem. Vaccination rates tripled in the villages that were randomly selected to have access to these clinics, at 18 per cent compared to 6 per cent. This increased further, to 39 per cent, if families received a bag of lentils as a bonus when they vaccinated their children. Because the mobile clinic had a high level of fixed costs, the total cost per vaccination actually halved, despite the additional expense of the lentils.

How much do the lives of low-income people change from receiving access to credit? For example, does it change their consumption, or encourage them to start a business? If farmers had access to credit, would they be more likely to invest in fertilizer and expand their output?

As the body of experimental evidence accumulates, it begins to open windows on the lives of people in low-income countries, on issues of how they are actually making decisions and what constraints matter most to them. The old-style approach to development economics of sending money to low-income countries is replaced by policies aimed at specific outcomes: education, health, credit, use of technology. When it's fairly clear what really matters or what really helps, and the policies are expanded broadly, they can still be rolled out over a few years in a randomized way, which allows researchers to compare the effects on those who experience the policies sooner with those who experience them later. This approach to economic development has a deeply evidence-based practicality.

For more on these topics, here are some starting points from articles in the Journal of Economic Perspectives, where I labor in the fields as Managing Editor: 
On the specific research on experimental approaches to poverty, Banerjee and Duflo coauthored Addressing Absence (Winter 2006 issue), about an experiment to provide incentives for teachers in rural schools to improve their attendance, and Giving Credit Where It Is Due (Summer 2010 issue), about experiments related to providing credit and how it affects the lives of poor people. 
I'd also recommend a pair of articles that Banerjee and Duflo wrote for JEP where they focus on the economic lives of those in low-income countries: "the choices they face, the constraints they grapple with, and the challenges they meet." The first paper focuses on the extremely poor, "The Economic Lives of the Poor" (Winter 2007), while the other looks at those who are classified as "middle class" by global standards, "What Is Middle Class about the Middle Classes around the World?" (Spring 2008).

From Kremer, here are a couple of JEP papers focused on development topics not directly related to the experimental agenda: "Pharmaceuticals and the Developing World" (Fall 2002) and "The New Role for the World Bank" (written with Michael A. Clemens, Winter 2016).

Finally, the Fall 2017 issue of JEP had a three-paper symposium on the issues involved in moving from a smaller-scale experiment to a scalable policy. The papers are:

Some Income Tax Data on the Top Incomes

How much income do US taxpayers have at the very top? How much do they pay in taxes? The IRS has just published updated data for 2017 on "Individual Income Tax Rates and Tax Shares." Here, I'll focus on data for 2017 and "returns with Modified Taxable Income," which for 2017 basically means the same thing as returns with taxable income. Here are a couple of tables for 2017 derived from the IRS data.

The first table shows a breakdown for taxpayers from the top .001% to the top 5%. Focusing on the top .001% for a moment, there were 1,433 such taxpayers in 2017. (You'll notice that the number of taxpayers in the top .01%, .1%, and 1% rises by multiples of 10, as one would expect.)

The "Adjusted Gross Income Floor" tells you that to be in the top .001% in 2017, you had to have income of $63.4 million in that year. If you had income of more than $208,000, you were in the top 5%.

The total income for the top .001% was $256 billion. Of that amount, the total federal income tax paid was $61.7 billion. Thus, the average federal income tax rate paid was 24.1% for this group. The top .001% received 2.34% of all gross income, and paid 3.86% of all income taxes.
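The average-rate arithmetic here can be verified directly (a minimal sketch; the dollar amounts are the rounded figures quoted in the paragraph above):

```python
# Sanity check on the top .001% figures quoted above, using the
# rounded dollar amounts from the IRS table discussed in the text.
total_income = 256e9   # total income of the top .001%, in dollars
tax_paid = 61.7e9      # federal income tax paid by that group, in dollars

average_rate = tax_paid / total_income
print(f"Average federal income tax rate: {average_rate:.1%}")  # ~24.1%
```

The same division works for any of the groups in the table: total tax paid over total income gives the group's average (not marginal) federal income tax rate.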

Of course, it's worth remembering that this table is federal income taxes only. It doesn't include state taxes on income, property, or sales. It doesn't include the share of corporate income taxes that end up being paid indirectly (in the form of lower returns) by those who own corporate stock.

Here's a follow-up table showing the same information, but for groups ranging from the top 1% to the top 50%.

Of course, readers can search through these tables for what is of most interest to them. But here are a few quick thoughts of my own.

1) Those at the very tip-top of the income distribution, like the top .001% or the top .01%, pay a slightly lower share of income in federal income taxes than, say, the top 1%. Why? I think it's because those at the very top are often receiving a large share of their annual income in the form of capital gains, which are taxed at a lower rate than regular income.

2) It's useful to remember that many of those at the very tip-top are not there every year. It's not like they fall into poverty the next year, of course. But they are often making a decision about when to turn capital gains into taxable income, and they are people who, along with their well-paid tax lawyers, have some control over the timing of that decision and how the income will be received.

3) The average tax rate shown here is not the marginal tax bracket. The top federal tax bracket is 37% (setting aside issues of payroll taxes for Medicare and how certain phase-outs work as income rises). But that marginal tax rate applies only to an additional dollar of regular income earned. With deductions, credits, exemptions, and capital gains taken into account, the average rate of income tax as a share of total income is lower.

4) The top 50% pays almost all the federal income tax. The last row on the second table shows that the top 50% pays 96.89% of all federal income taxes. The top 1% pays 38.47% of all federal income taxes. Of course, anyone who earns income also owes federal payroll taxes that fund Social Security and Medicare, as well as paying federal excise taxes on gasoline, alcohol, and tobacco, and these taxes aren't included here.

5) This data is about income in 2017. It's not about wealth, which is accumulated over time. Thus, this data is relevant for discussions of changing income tax rates, but not especially relevant for talking about a wealth tax.

6) There's a certain mindset which looks at, say, the $2.3 trillion in total income for the top 1%, and notes that the group is "only" paying $615 billion in federal income taxes, and immediately starts thinking about how the federal government could collect a few hundred billion dollars more from that group, and planning how to spend that money. Or one might focus further up, like the 14,330 in the top .01% who had more than $12.8 million in income in 2017. Total income for this group was $565 billion, and they "only" paid about 25% of it in federal income taxes. Surely they could chip in another $100 billion or so? On average, that's only about $7 million apiece in additional taxes for those in the top .01%. No big deal. Raising taxes on other people is so easy.
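The "$7 million apiece" figure is simple division, and it is easy to check (a quick sketch using the rounded numbers in the paragraph above):

```python
# Spreading a hypothetical extra $100 billion of tax across the
# 14,330 taxpayers in the top .01% (rounded figures from the text).
extra_revenue = 100e9   # hypothetical additional tax collected, in dollars
taxpayers = 14_330      # number of returns in the top .01% in 2017

per_taxpayer = extra_revenue / taxpayers
print(f"Extra tax per top-.01% taxpayer: ${per_taxpayer / 1e6:.1f} million")  # ~$7.0 million
```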

I'm not someone who spends much time weeping about the financial plight of the rich, and I'm not going to start now. It's worth remembering (again) that the numbers here are only for federal income tax, so if you are in a state or city with its own income tax, as well as paying property taxes and the other taxes at various levels of government, the tax bill paid by those with high incomes is probably edging north of 40% of total income in a number of jurisdictions.

But let's set aside the question of whether the very rich can afford somewhat higher federal income taxes (spoiler alert: they can), and focus instead on the total amounts of money available. The numbers here suggest that somewhat higher income taxes at the very top could conceivably bring in a few hundred billion dollars, even after accounting for the ability of those with very high income to alter the timing and form of the income they receive. To put this amount in perspective, the federal budget deficit is now running at about $800 billion per year. To put it another way, it seems implausible to me that plausibly higher taxes limited to those with the highest incomes would raise enough to get the budget deficit down to zero, much less to bridge the existing long-term funding gaps for Social Security or Medicare, or to support grandiose spending programs in the trillions of dollars for other purposes. Raising federal income taxes at the very top may be a useful step, but it's not a magic wand that can pay for every wish list.

Video Clips of Economists Explaining for Intro Econ Classes

I know a number of economics faculty who have been incorporating video clips into their classes. Sometimes it's part of a lecture presentation. Sometimes it's for students to watch before class. For intro students in particular, it can be a useful practice because it gives them a sense that they are being introduced to a universe of economists, not just to one professor and a textbook. The faculty member can also react to the video clip, and in this way offer students some encouragement to react and to comment as well, in a way that they might not feel comfortable doing if they had to confront their own professor directly.

Amanda Bayer and Judy Chevalier have been compiling a list of video clips that may be useful for the standard intro econ class. It's available at the Diversifying Economic Quality ("Div.E.Q") website. Most are in the range of 3-6 minutes, although a few are longer or shorter. The economists are often talking about their own research, but in a way that the evidence can easily be incorporated into an intro presentation.

Here are a few examples grabbed from lectures on micro topics. Kathryn Graddy talks about her work studying the Fulton Fish Market in New York City, and how even in a highly competitive and open environment, buyers sometimes pay different prices. (Graddy also wrote an article on this topic in the Spring 2006 issue of the Journal of Economic Perspectives.)

Petra Moser discusses her work showing that \”copyright protection for 19th century Italian operas led to more and better operas being written, but the evidence also suggests that intellectual property rights may do more harm than good if they are too broad or too long-term.\”

Heidi Williams describes new data and empirical methodologies to study and advance technological change in health care markets.

Kerwin Kofi Charles looks at his empirical research on the extent to which prejudice leads to discrimination in the labor market and how it may affect wages of black workers.

Cecilia Rouse talks about her research on how the change to blind audition procedures for musicians trying out for symphony orchestras led to more women being selected.

In short, the presenters in the video clips are top-quality economists describing their own research, in ways that spark interest among students. In addition, economics has an ongoing issue with attracting women and minorities. This list is heavily tilted toward presentations by economists from those groups, and there's some evidence that when intro students see economists who look more like them, they may feel more comfortable expressing interest in economics moving forward.

Opinions about Semicolons

When you live your life as an editor, you develop strange preoccupations, like the semicolon. Thankfully, Cecilia Watson has removed any temptation I might have had to spend vast amounts of time on this subject by publishing Semicolon: The Past, Present, and Future of a Misunderstood Mark (Ecco, 2019).

If you're the sort of person who enjoys facts and commentary about punctuation, then welcome to our smallish club. For example, you will be able to answer the trivia question: What was the first book to use a semicolon, and who were the publisher, author, and typesetter? The semicolon originated in Venice in 1494, during a time of great innovation in symbols of punctuation. Many swirls and lines and dashes and other symbols of punctuation were invented, and mostly discarded. But apparently, a printer and publisher named Aldus Manutius was the first to combine the comma and colon, and thus to create the semicolon. The book was De Aetna, by Pietro Bembo, a dialogue about climbing Mount Etna. The Bolognese type designer Francesco Griffo created the shape of the semicolon.

I especially enjoyed some of the more grandiose denunciations of the semicolon. Watson's book reminded me of Paul Robinson's essay several decades ago in the New Republic, "The Philosophy of Punctuation: Against the semicolon; for the period" (April 26, 1980). Robinson wrote:

Semicolons are pretentious and overactive. These days one seems to come across them in every other sentence. “These days” is alarmist, since half a century ago the German poet Christian Morgenstern wrote a brilliant parody, “Im Reich der Interpunktionen,” in which imperialistic semicolons are put to rout by an “antisemikolonbund” of periods and commas. Nonetheless, if the undergraduate essays I see are representative we are in the midst of an epidemic of semicolons. I suspect that the semicolon is so popular because it is the first fancy punctuation mark students learn of, and they assume that its frequent appearance will lend their writing a properly scholarly cast. Alas, they are only too right. But I doubt that they use semicolons in their letters. At least I hope they don’t.

More than half of the semicolons one sees, I would estimate, should be periods, and probably another quarter should be commas. Far too often, semicolons, like colons, are used to gloss over an imprecise thought. They place two clauses in some kind of relation to one another, but relieve the writer of saying exactly what that relation is. Even the simple conjunction “and,” for which they are often a substitute, has more content, since it suggests compatibility or logical continuity. (“And,” incidentally, is among the most abused words in the language. It is forever being exploited as a kind of neutral vocalization connecting two things that have no connection whatever.)

In exasperation I have tried to confine my own use of the semicolon to demarking sequences that contain internal commas and therefore might otherwise be confusing. I recognize that my reaction is extreme. But the semicolon has become so hateful to me that I feel almost morally compromised when I use it.

Or if you prefer a pithier comment on the semicolon, here's one from Kurt Vonnegut's 2005 book, A Man Without A Country:

Here is a lesson in creative writing. First Rule: Do not use semicolons. They are transvestite hermaphrodites representing absolutely nothing. All they do is show you've been to college.

June Casagrande puts the problem in more prosaic terms ("A Word, Please: Writers who use semicolons aren't thinking about the reader," Los Angeles Times, July 23, 2015):

Here’s a fun thing you can do with your writing: Take any two simple, clear sentences and use a semicolon to mush them into one. For example, imagine you have a paragraph with just two sentences. “The alarm went off. Joe hit the snooze.” Through the magic of semicolons, you can make that just one sentence: “The alarm went off; Joe hit the snooze.” Isn’t that a great idea?

This works just as well for long sentences that you want to mush into super-long ones: “On a stormy morning in January of 2015, the alarm in Joe Jacobson’s swanky Santa Monica condo went off, ushering in the morning with an ugly screech; Joe, a hung-over stockbroker deeply immersed in a dark, disturbing dream about the woman who’d broken his heart, reached for the clock and pounded the snooze button with the force of a jackhammer.”

When you understand how semicolons work, you see that any pair of sentences can be made one. Then, when you’re done, those longer Frankenstein sentences can themselves be mushed together, and so on and so on, until every paragraph you write is just one long sentence! Neat, huh? …

I’ll kill the facetiousness here and just be blunt: Semicolons are trouble. … They’re favored by writers who are so proud they know how to use semicolons that they’ll happily shortchange readers to show off their knowledge. They’re also a popular crutch among writers who don’t know how to manage all the information they want to convey, so they use semicolons to cobble it all into a single monstrous sentence. … 

So just about any time you have two sentences next to each other, you could make the case for using a semicolon to fashion them into one longer sentence. A lot of writers do. They do so not because they believe the results will be better for the reader. They do so because they forgot the reader. They saw an opportunity to put their punctuation savvy on proud display and forgot that, as every professional writer knows, short sentences are more digestible. That’s why, to me, semicolons cause more trouble than they’re worth.

Of course, the fact that a punctuation mark or a word can be misused doesn't mean that it can't be well-used. For example, Herman Melville's Moby Dick is perhaps the literary champion of semicolon use. Watson makes this case at some length, concluding:

Moby Dick … was … around 210,000 words, but had 4,000 semicolons. That's one for every 52 words. The semicolons are Moby-Dick's joints, allowing the novel the freedom of movement it needed to tour such a large and disparate collection of themes.

She also points out that Martin Luther King Jr.'s Letter from a Birmingham Jail makes exquisite use of the semicolon, as a way of linking together and drawing out a painful meditation in a way that forces the reader to follow along without a full stop for breath. (For example, see the paragraph that starts, "We have waited for more than 340 years for our constitutional and God-given rights.")

So yes, the semicolon can require care in handling. But it offers connectedness, continuation, and flexibility in situations when a period would create too definite and firm a break, while a comma isn't enough of a pause. Watson writes:

The semicolon represents a way to slow down, to stop, and to think; it measures time more meditatively than the catchall dash, and it can't be chucked thoughtlessly into just any sentence in place of just any other mark. … Semicoloned sentences cannot be dashed off.

The short book also offers an excuse to roam through other rules of grammar, like whether to split infinitives. Watson tends to be in favor of good writing, but against rules. Me, I'm in favor of good writing, but I'm also in favor of knowing the rules, in part so that you can know when it makes sense to break them.