Marijuana Policy: Choosing Between Disastrous or Unpalatable

Slowly and with considerable uncertainty, the United States is altering its marijuana laws. Mark A. R. Kleiman offers an overview of the state of play and the likely tradeoffs in "The Public-Health Case for Legalizing Marijuana" (National Affairs, Spring 2019). He writes:

John Kenneth Galbraith once said that politics consists in choosing between the disastrous and the unpalatable. The case of cannabis, an illicit market with sales of almost $50 billion per year, and half a million annual arrests, is fairly disastrous and unlikely to get better. The unpalatable solution is clear: Congress should proceed at once to legalize the sale of cannabis — at least in states that choose to make it legal under state law — for recreational as well as "medical" use. …

First, as a practical matter, cannabis prohibition is no longer enforceable. The black market is too large to successfully repress. The choice we now face is not whether to make cannabis available, but whether its production and use should be legal and overt or illegal and at least somewhat covert. Second, because cannabis is compact and therefore easy to smuggle, a state-by-state solution is unworkable in the long run. States with tighter restrictions or higher taxes on marijuana will be flooded with products from states with looser restrictions and lower taxes. The serious question is not whether to legalize cannabis, but how.

Kleiman offers an overview of the legal status of marijuana, and also makes some key points about the evolution of the market.

Marijuana is a cheap high, even at the current illegal price, and legalization is likely to make it cheaper.

Cannabis, even as an illegal drug, is a remarkably cost-effective intoxicant, far cheaper than alcohol. For example, in New York City, where cannabis is still illegal, a gram of fairly high-potency material (say, 15% THC by weight) goes for about $10. A user can therefore obtain 150 milligrams of THC for $10, paying about 7 cents per milligram. Getting stoned generally requires around 10 milligrams of THC to reach the user's bloodstream, but the smoking process isn't very efficient; about half the THC in the plant gets burned up in the smoking process or is exhaled before it has been absorbed by the lungs. So a user would need about 20 milligrams of THC in plant material to get stoned, or a little less than $1.50 worth. For a user without an established tolerance, intoxication typically lasts about three hours. That works out to about 50 cents per stoned hour. … So it costs a typical man drinking beer about $4 to get drunk — typically for a couple of hours — and staying drunk costs an additional $1 per hour. That's at least double the price per hour stoned offered by the illicit cannabis market.
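For readers who want to trace the arithmetic, here is the calculation in the quoted passage laid out step by step. This is just a small sketch using the numbers Kleiman gives, nothing more:

```python
# Back-of-envelope using only the figures quoted above (Kleiman, National Affairs, Spring 2019).
price_per_gram = 10.0            # dollars, illicit NYC price for ~15% THC material
thc_per_gram_mg = 1000 * 0.15    # 150 mg of THC in one gram of 15%-potency material

cost_per_mg = price_per_gram / thc_per_gram_mg            # ~$0.067 per milligram
mg_absorbed_to_get_stoned = 10                             # mg that must reach the bloodstream
smoking_efficiency = 0.5                                   # roughly half the THC is lost in smoking
mg_in_plant_needed = mg_absorbed_to_get_stoned / smoking_efficiency   # ~20 mg of plant THC

cost_to_get_stoned = mg_in_plant_needed * cost_per_mg      # a little under $1.50
hours_intoxicated = 3
print(f"cost per stoned hour: ${cost_to_get_stoned / hours_intoxicated:.2f}")  # roughly 45-50 cents
```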

For a number of users, marijuana use has adverse health effects.

Over the past quarter-century, the population of "current" (past-month) users has more than doubled (to 22 million) and the fraction of those users who report daily or near-daily use has more than tripled (to about 35%). Those daily or near-daily users account for about 80% of the total cannabis consumed. Between a third and a half of them report the symptoms of Cannabis Use Disorder: They're using more, or more frequently, than they intend to; they've tried to cut back or quit and failed; cannabis use is interfering with their other interests and responsibilities; and it's causing conflict with people they care about. … Frequent users report using about 1.5 grams (equivalent to three or four joints) per day of use. With increasing prevalence, increasing frequency, and increasing potency, the total amount of THC consumed has likely increased about sixfold since the early 1970s.

A "state's rights" approach isn't likely to work well for marijuana.

Cannabis is simply too easy to smuggle across state lines. If cannabis is cheap anywhere, it will be available and fairly cheap everywhere. The same would be true if states were to adopt starkly different tax or regulatory policies, as these would likely generate large price differences in their respective legal cannabis markets.  …

Even a very small difference would be more than enough to support a large illicit market, as the state and local taxation of tobacco has proven. New York State has fairly heavy tobacco taxes, and New York City adds a substantial local tax. Virginia, by contrast, taxes tobacco much more lightly. The result is that a pack of cigarettes that retails for under $5 in Virginia sells for $13 in New York City — a difference of $8 per pack. Due to this price gap in the legal tobacco market, more than half of all cigarettes sold in New York City are contraband: mostly genuine brand-name products purchased in bulk in Virginia and driven 250 miles to New York. There, they are resold for about $9 per pack by many of the same retailers who sell full-priced, legal cigarettes — mostly convenience stores in low-income neighborhoods. … 

The same would be true for product regulation: If Massachusetts allows the sale of the solid concentrates used for the dangerous practice of "dabbing" (flash-vaporizing a hefty chunk of concentrate with a blowtorch in order to inhale a huge dose all at once), then for New York to try to forbid it would be a virtual invitation to smuggle. The states with the lowest taxes and the loosest regulations would wind up effectively dictating policy to the rest of the country.

What might be some general directions for federal-level marijuana legislation?

What would a public-health-friendly legalization program look like? The goals of such a policy would be the elimination or near-elimination of the illicit market and its replacement with a licit market delivering product of certified purity and known chemical composition, while minimizing the growth in heavy or hazardous use and use by minors. Its means would include taxation or minimum unit pricing (to prevent the otherwise inevitable collapse of cannabis prices); product regulation; and limits on marketing to prevent the cannabis industry from promoting the misuse of its product the way alcohol sellers encourage heavy drinking. … Retail sales clerks — so-called "bud-tenders," now paid the minimum wage plus a sales commission, and thus given strong incentives to encourage overconsumption — could also be licensed, required to have extensive training in pharmacology and in preventing and recognizing Cannabis Use Disorder, and bound to a fiduciary duty to give advice in the interests of the consumer rather than with the goal of maximizing sales. … Consumers could also be required, before being allowed to purchase cannabis, to pass a simple test showing they're aware of the risks and of basic precautions. More radically, they could be required to establish for themselves (and the stores could be required to enforce) a weekly or monthly purchase quota, as a nudge toward temperance. … All of this will have to be done in the face of fierce opposition from the for-profit cannabis industry, if there is one.

For a previous post on the evolution of marijuana laws and markets, see "Canada Legalizes Marijuana: What's Up in Colorado and Oregon?" (October 22, 2018).

The "Right" and "Wrong" Kind of Artificial Intelligence for Labor Markets

Sometimes technology replaces existing jobs. Sometimes it creates new jobs. Sometimes it does both at the same time. This raises an intriguing question: Do we need to view the effects of technology on jobs as a sort of tornado blowing through the labor market? Or could we come to understand why some technologies have bigger effects on creating jobs, or supplementing existing jobs, than on replacing jobs, and maybe even give greater encouragement to those kinds of technologies?

Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb tackle the issue of how artificial intelligence technologies can have differing effects on jobs in "Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction" (Journal of Economic Perspectives, Spring 2019, 33 (2): 31-50). Perhaps someday "artificial intelligence" will be indistinguishable from human intelligence. But the authors argue that at present, most of the developments in AI are really about "machine learning," which involves using computing power to make more accurate predictions from data. They write (citations omitted):

The majority of recent achievements in artificial intelligence are the result of advances in machine learning, a branch of computational statistics. … Machine learning does not represent an increase in artificial general intelligence of the kind that could substitute machines for all aspects of human cognition, but rather one particular aspect of intelligence: prediction. We define prediction in the statistical sense of using existing data to fill in missing information. As deep-learning pioneer Geoffrey Hinton said, “Take any old problem where you have to predict something and you have a lot of data, and deep learning is probably going to make it work better than the existing techniques.”

The authors are using "prediction" in a very broad sense: "As an input into decision-making under uncertainty, prediction is essential to many occupations, including service industries: teachers decide how to educate students, managers decide who to recruit and reward, and janitors decide how to deal with a given mess." Here are a few examples from their paper, some fairly well-known, others less so.
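To make the "prediction as filling in missing information" framing concrete, here is a minimal illustrative sketch, not drawn from the paper itself: fit a predictor on the rows where an outcome is observed, then use it to fill in the rows where the outcome is missing. The data here are made up.

```python
import numpy as np

# Hypothetical data: 100 observations, with the outcome missing for the last 20.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                                   # observed features
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)  # true outcome
y[80:] = np.nan                                                  # the "missing information"

observed = ~np.isnan(y)
coef, *_ = np.linalg.lstsq(X[observed], y[observed], rcond=None)  # learn from existing data
y_filled = np.where(observed, y, X @ coef)                        # predict (fill in) what is missing
print(y_filled[80:85])
```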

AI and Brain Surgery

For example, ODS Medical developed a way of transforming brain surgery for cancer patients. Previously, a surgeon would remove a tumor and surrounding tissue based on previous imaging (say, an MRI scan). However, to be certain all cancerous tissue is removed, surgeons frequently end up removing more brain matter than necessary. The ODS Medical device, which resembles a connected pen-like camera, uses artificial intelligence to predict whether an area of brain tissue has cancer cells or not. Thus, while the operation is taking place, the surgeon can obtain an immediate recommendation as to whether a particular area should be removed. By predicting with more than 90 percent accuracy whether a cell is cancerous, the device enables the surgeon to reduce both type I errors (removing noncancerous tissue) and type II errors (leaving cancerous tissue). The effect is to augment the labor of brain surgeons. Put simply, given a prediction, human decision-makers can in some cases make more nuanced and improved choices. 

AI and Tax Law

Blue J Legal’s artificial intelligence scans tax law and decisions to provide firms with predictions of their tax liability. As one example, tax law is often ambiguous on how income should be classified. At one extreme, if someone trades securities multiple times per day and holds securities for a short time period, then the profits are likely to be classified as business income. In contrast, if trades are rare and assets are held for decades, then profits are likely to be classified by the courts as capital gains. Currently, a lawyer who takes on a case collects the specific facts, conducts research on past judicial decisions in similar cases, and makes predictions about the case at hand. Blue J Legal uses machine learning to predict the outcome of new fact scenarios in tax and employment law cases. In addition to a prediction, the software provides a “case finder” that identifies the most relevant cases that help generate the prediction.

AI and Office Cleaning

A&K Robotics takes existing, human-operated cleaning devices, retrofits them with sensors and a motor, and then trains a machine learning-based model using human operator data so the machine can eventually be operated autonomously. Artificial intelligence enables prediction of the correct path for the cleaning robot to take and also can adjust for unexpected surprises that appear in that path. Given these predictions, it is possible to prespecify what the cleaning robot should do in a wide range of predicted scenarios, and so the decisions and actions can be automated. If successful, the human operators will no longer be necessary. The company emphasizes how this will increase workplace productivity, reduce workplace injuries, and reduce costs.

AI and Bail Decisions

Judges make decisions about whether to grant bail and thus to allow the temporary release of an accused person awaiting trial, sometimes on the condition that a sum of money is lodged to guarantee their appearance in court. Kleinberg, Lakkaraju, Leskovec, Ludwig, and Mullainathan (2018) study the predictions that inform this decision … Judges will continue to weigh the relative costs of errors, and in fact the US legal system requires human judges to decide. But artificial intelligence could enhance the productivity of judges. The main social gains here may not be in hours saved for judges as a group, but rather from the improvement in prediction accuracy. Police arrest more than 10 million people per year in the United States. Based on AIs trained on a large historical dataset to predict decisions and outcomes, the authors report simulations that show enhanced prediction quality could enable crime reductions up to 24.7 percent with no change in jailing rates or jailing rate reductions up to 41.9 percent with no increase in crime rates. In other words, if judicial output were measured in a quality-adjusted way, output and hence labor productivity could rise significantly. 
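The flavor of that result can be illustrated with a toy simulation. To be clear, this is not the authors' method or data; the risk distribution and the noise levels below are invented. The only point is that, holding the jailing rate fixed, a more accurate risk predictor detains a riskier mix of defendants and so produces less crime among those released.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_risk = rng.beta(2, 5, size=n)            # hypothetical re-offense probability if released
would_offend = rng.random(n) < true_risk      # realized outcome if released

def crimes_per_capita(predicted_risk, jail_share):
    """Detain the top `jail_share` of defendants ranked by predicted risk; count crimes by the rest."""
    cutoff = np.quantile(predicted_risk, 1 - jail_share)
    released = predicted_risk < cutoff
    return would_offend[released].sum() / n

noisy_judge = true_risk + rng.normal(scale=0.25, size=n)   # a weak predictor
machine     = true_risk + rng.normal(scale=0.05, size=n)   # a stronger predictor

jail_share = 0.3
print("weak predictor  :", crimes_per_capita(noisy_judge, jail_share))
print("strong predictor:", crimes_per_capita(machine, jail_share))
# The stronger predictor yields fewer crimes at the same jailing rate --
# the direction (not the magnitude) of the Kleinberg et al. finding.
```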

AI and Drug Discovery

A company called Atomwise uses artificial intelligence to enhance the drug discovery process. Traditionally, identifying molecules that could most efficiently bind with proteins for a given therapeutic target was largely based on educated guesses and, given the number of potential combinations, it was highly inefficient. Downstream experiments to test whether a molecule could be of use in a treatment often had to deal with a number of poor-quality candidate molecules. Atomwise automates the task of predicting which molecules have the most potential for exploration. Their software classifies foundational building blocks of organic chemistry and predicts the outcomes of real-world physical experiments. This makes the decision of which molecules to test more efficient. This increased efficiency, specifically enabling lower cost and higher accuracy decisions on which molecules to test, increases the returns to the downstream lab testing procedure that is conducted by humans. As a consequence, the demand for labor to conduct such testing is likely to increase. Furthermore, higher yield due to better prediction of which chemicals might work increases the number of humans needed in the downstream tasks of bringing these chemicals to market. In other words, automated prediction in drug discovery is leading to increased use of already-existing complementary tasks, performed by humans in downstream occupations.

Some of these examples fit the mental model that robots driven by AI are going to replace human workers. Others suggest that AI will make existing workers more productive. It has become common, when looking at the effects of technology on labor markets, to focus on the idea that a given job has a bunch of tasks. If a new technology replaces most or all of the tasks of a certain job, that job may be eliminated. If the technology creates the need for a bunch of new tasks, brand-new job categories may be created. Or often, a new technology may just cause a job to evolve, by replacing some tasks and creating a need for other tasks to be carried out.

These differing pathways suggest that it might be possible to differentiate, at least to some extent, between uses of artificial intelligence that are especially likely to be efficiency-enhancing for existing workers and job-creating for others, and uses of artificial intelligence that are more likely to be job-replacing in a way that saves a little money for employers but doesn't have large efficiency gains.

For example, an article in Axios described a discussion with James Manyika, director of the McKinsey Global Institute. Manyika notes that in doing AI research: "If your goal is human-level capability, you're increasing the probability that you're doing substitutive work … If you were trying to solve this as an economic problem, you'd want to develop AI algorithms or machines that are as different from humans as possible." Manyika suggests a few examples of AI-based research that are less likely to replace human workers, because they don't mimic human capabilities: "augmented reality," "AI systems that can predict how proteins are folded, or how to route trucks better," and "robots that can see around corners, or register sounds outside our hearing range."

Daron Acemoglu and Pascual Restrepo tackle this question in a short nontechnical essay, "The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand" (IZA Discussion Paper No. 12292, April 2019).

"Most AI researchers and economists studying its consequences view it as a way of automating yet more tasks. No doubt, AI has this capability, and most of its applications to date have been of this mold: e.g., image recognition, speech recognition, translation, accounting, recommendation systems, and customer support. But we do not need to accept this as the primary way that AI can be and indeed ought to be used. …

It is possible that the ecosystem around the most creative clusters in the United States, such as Silicon Valley, excessively rewards automation and pays insufficient attention to other uses of frontier technologies. This may be partly because of the values and interests of leading researchers (consider for example the ethos of companies like Tesla that have ceaselessly tried to automate everything). It is also partly because the prevailing business model and vision of the large tech companies, which are the source of most of the resources going into AI, have focused on automation and removing the (fallible) human element from the production process. …

All in all, even though we currently lack definitive evidence that research and corporate resources today are being directed towards the "wrong" kind of AI, the market for innovation gives no compelling reason to expect an efficient balance between different types of AI. If at this critical juncture insufficient attention is devoted to inventing and creating demand for, rather than just replacing, labor, that would be the "wrong" kind of AI from the social and economic point of view.

As one example, Acemoglu and Restrepo point out that individualized classroom teaching, enabled by AI, will not eliminate the need for teachers, and may even increase it. As they write: "Educational applications of AI would necessitate new, more flexible skills from teachers (beyond what is available and what is being invested in now), and they would need additional resources to hire more teachers to work with these new AI technologies (after all, that is the point of the new technology, to create new tasks and additional demand for teachers)." AI-enabled tools could go well beyond feeding students multiple-choice questions with continually adjusting levels of difficulty, and provide a kind of feedback that is just different from what any classroom teacher can provide.

Some Snapshots of the Global Energy Situation

"Global primary energy grew by 2.9% in 2018 – the fastest growth seen since 2010. This occurred despite a backdrop of modest GDP growth and strengthening energy prices. At the same time, carbon emissions from energy use grew by 2.0%, again the fastest expansion for many years, with emissions increasing by around 0.6 gigatonnes. That’s roughly equivalent to the carbon emissions associated with increasing the number of passenger cars on the planet by a third." Spencer Dale offers these and other insights in his introduction to the 2019 BP Statistical Review of World Energy. It's one of those books of charts and tables I try to check each year just to keep my personal perceptions of economic patterns connected to actual statistics. Here are a few figures that jumped out at me.

One main driver of the rise in world energy use is economic growth in emerging market countries. The horizontal axis of this figure shows average energy use per person. The vertical axis shows the cumulative share of total world population. One line shows the pattern for 1978, while the other shows four decades later in 2018.
From the caption under the figure: "In 2018, 81% of the global population lived in countries where average energy demand per capita was less than 100 GJ/head, two percentage points more than 20 years ago. However, the share of the global population consuming less than 75 GJ/head declined from 76% in 1998 to 57% last year. Average energy demand per capita in China increased from 17 GJ/head in 1978 to 97 GJ/head in 2018." The figure is constructed from national-level data on average energy consumption. Thus, the big jump in the 2018 line at around 100 GJ/head is the population of China. Overall, the shift from the 1978 line to the 2018 line shows how energy consumption is rising in emerging market economies.
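As the caption suggests, the curve is built from national averages: sort countries by energy use per head and accumulate their population shares. Here is a small sketch of that construction using made-up numbers (not BP's data); the large vertical jump comes from whichever populous country sits at that consumption level, which in the actual figure is China at about 100 GJ/head.

```python
# Hypothetical (country, population in millions, energy use in GJ per head) -- not BP's data.
countries = [("A", 1400, 97), ("B", 1350, 25), ("C", 330, 290), ("D", 500, 140), ("E", 1200, 15)]

countries.sort(key=lambda c: c[2])                 # order countries by energy use per head
total_pop = sum(pop for _, pop, _ in countries)

cumulative = 0
for name, pop, gj_per_head in countries:
    cumulative += pop                              # each country adds its population share
    print(f"{gj_per_head:>4} GJ/head  ->  {cumulative / total_pop:5.1%} of world population")
```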
The sources of global energy consumption are also shifting. Oil consumption as shown by the green line is falling as a share of global energy consumption. (Just to be clear, total consumption of oil-produced energy rose in 2018, but it's rising more slowly than overall energy consumption, so the share of the total declined.) Coal remains pretty much the same share of global energy consumption as it has been for the last 30 years. Natural gas has risen. Hydro power is about the same. Nuclear energy is about the same as the last 20 years. Renewables like wind and solar are up, but still only about 5% of total energy consumption.
High oil prices, reducing the quantity demanded, are part of the economic picture as to why the share of energy produced by oil has declined. The figure shows oil prices back to the Pennsylvania oil boom of the 1860s, with the light-green line showing prices adjusted for inflation. Oil prices have been volatile since the 1970s, but they seem at present to be rising to the middle of their range over the last 4-5 decades. 
What about renewable energy and carbon emissions? From Spencer Dale's overview:

Renewable energy appears to be coming of age, but to repeat a point I made last year, despite the increasing penetration of renewable power, the fuel mix in the global power system remains depressingly flat, with the shares of both non-fossil fuels (36%) and coal (38%) in 2018 unchanged from their levels 20 years ago. This persistence in the fuel mix highlights a point that the International Energy Agency (IEA) and others have stressed recently; namely that a shift towards greater electrification helps as a pathway to a lower carbon energy system only if it goes hand-in-hand with a decarbonization of the power sector. Electrification without decarbonizing power is of little use. … On the supply side, the growth in power generation was led by renewable energy, which grew by 14.5%, contributing around a third of the growth; followed by coal (3.0%) and natural gas (3.9%). China continued to lead the way in renewables growth, accounting for 45% of the global growth in renewable power generation, more than the entire OECD combined.

Here's a figure showing how electricity is generated around the world. Coal still leads the way, by far. Natural gas is on the rise, while oil is dropping. "Renewables," a category led by wind but also including solar and smaller sources like geothermal and biomass, is on the rise, but still under 10%.

Finally, here's a table showing carbon emissions in 2018. This is a trimmed-down version of a bigger table in the text. It mainly shows carbon emissions by region. (CIS is "Commonwealth of Independent States," which refers to the remnants of what was once the Soviet Union.) Notice that the Asia-Pacific region accounts for half of all global carbon emissions, with China alone accounting for more than one-quarter of global carbon emissions. Also, the average annual growth rate of carbon emissions was negative for North America and for Europe from 2007-2017, but rising during that time frame in Asia Pacific, as well as Africa, the Middle East, and South/Central America. As I've written before, a meaningful approach to limiting or reducing global carbon emissions will need to include North America and Europe, but our participation won't be nearly enough.

Where Will America Find Caregivers as its Elderly Population Rises?

As we look ahead two or three decades into the future, we know several demographic facts with an extremely high degree of confidence. We know that the number of elderly people in the population will be rising, and as a result, the demand for long-term care services will rise substantially. We also know that the birthrate has been falling, and so this generation of the eventually-will-be-elderly has had fewer children than the previous generation.

Put these two demographic facts together with a current social pattern: a large share of the care received by elderly adults with disabilities has been unpaid care provided by their children. But that arrangement will not be sustainable, at least not in the same way, moving forward. Three recent essays written for the Peter G. Peterson Foundation as part of its "US 2050: Research Projects" lay out some dimensions of the problem.

For example, here's an overview of the coming patterns from Stipica Mudrazija in "Work-Related Opportunity Costs of Providing Unpaid Family Care" (citations and references to tables omitted):

Currently, there are almost 13 million caregivers aged 20-64 providing care to 10 million older adults with limitations in daily activities. In addition to the adult children of care recipients (71%), unpaid working-age caregivers to this population include spouses (5%), other family members (17%), and nonrelatives (7%). Overall, these caregivers account for 6.7 percent of the population aged 20-64, but the provision of caregiving is highly unequally  distributed by age as the majority of caregivers are aged 50-64, and adults in this age group are more than three times as likely to be caregivers than those aged 20-49. Accounting for future population aging and trends in physical disability and adjusting for compositional changes of the future population, the number of caregivers needed to keep the current prevalence of unpaid caregiving constant would have to almost double. This implies that the proportion of unpaid family caregivers to older adults would have to increase by more than a half to 6.1 percent for adults aged 20-49 and 19.2 percent for those aged 50-64. 

Thus, one potential future is that about one-fifth of adults from ages 50-64 become unpaid caregivers to the elderly. Of course, this pathway has tradeoffs. Caregivers typically spend less time in paid work: 

Using data from the National Study of Caregiving, the author finds that caregivers are about 9 percentage points less likely to be employed than those that do not provide care. In addition, employed caregivers work 2.1 fewer hours per week than their non-caregiver peers.  The current annual work-related opportunity cost of unpaid care in the United States is about $67 billion, but these costs will more than double by 2050. 
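As a rough sense of where a number like $67 billion comes from, here is a back-of-the-envelope version of that kind of calculation. This is my own illustration, not Mudrazija's method; the wage, hours, and employment-rate figures are assumptions.

```python
caregivers = 13e6              # unpaid working-age caregivers (from the essay)
employment_gap = 0.09          # 9 percentage points less likely to be employed (from the essay)
fewer_hours_per_week = 2.1     # for caregivers who do work (from the essay)

# Assumed, purely illustrative values -- not from the paper:
assumed_wage = 25.0            # dollars per hour
assumed_hours_per_year = 1800
assumed_employment_rate = 0.65 # share of caregivers who are employed

lost_jobs = caregivers * employment_gap * assumed_wage * assumed_hours_per_year
lost_hours = caregivers * assumed_employment_rate * fewer_hours_per_week * 50 * assumed_wage
# Comes out in the same general ballpark as the essay's roughly $67 billion.
print(f"rough opportunity cost: ${(lost_jobs + lost_hours) / 1e9:.0f} billion per year")
```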

Mudrazija points out that in the past, analyses have suggested "that the economic benefits of unpaid family care in terms of savings to government programs outweigh work opportunity costs." But this pattern seems to be shifting: "Therefore, future discussions of the role of unpaid family care should recognize that this is a finite and increasingly expensive resource."

Gal Wettstein and Alice Zulkarnain focus on the question, "Will Fewer Children Boost Demand for Formal Caregiving?" They note: "Today, 25 percent of all caregivers of elderly are adult children. However, while the parents of the Baby Boom generation had three children per household on average, the Boomers themselves only have two." People with fewer children are more likely to end up in nursing homes, probably in part because they lack access to unpaid care and support from children. "The authors estimate that, among people over age 50, having one fewer child increases the probability of having spent a night in a nursing home in the last two years by 1.7 percentage points—a magnitude comparable to the effect of having poor self-reported health, or of being ten years older."

Put these factors together, and the demand for paid care for the elderly is likely to skyrocket: "They extrapolate this finding to 2050, and estimate that the decline in fertility of the Baby Boom generation will increase formal care demand per person by an extra 8.6 percent. Combined with the expected tripling of the population over age 85, the authors estimate that formal care demand will increase by about 326 percent relative to the current formal care demand."
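The combined figure is roughly what you get by multiplying the two effects together (a simple reading of the quoted numbers, not necessarily the authors' exact procedure):

```python
population_factor = 3.0    # expected tripling of the over-85 population (from the summary)
per_person_factor = 1.086  # 8.6% more formal care demanded per person (from the summary)

# About 3.26, i.e. roughly 326% of current formal care demand.
print(population_factor * per_person_factor)
```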

Kristin Butcher and Tara Watson raise the issue of "Immigration and Tomorrow’s Elderly." They find:

"[A]lthough the majority of the population age 80 and up has some type of disability or difficulty, fewer than 10 percent of individuals in their 80s live in an institution. This suggests either that they are getting help that keeps them out of institutions, or that there is an unmet need for such help. The authors identify eight key occupations that may help elderly individuals age in place, such as nursing aides and housekeepers, and predict that these occupations as a share of the overall workforce will increase from 8.4 percent to 12 percent in 2050. Further, they find that immigrants are disproportionately represented in these occupations. Assuming that the ratios of immigrants to total number of workers are fixed within occupations, the authors estimate that 42 million foreign-born workers would be required to maintain current immigrant representation in these fields. This is significantly more than the 30 million immigrants that are projected to be working in the U.S. in 2050."

To put their point in my own words, the US has been leaning on its immigrant workforce to provide paid caregiving to the elderly. As the number of elderly who need paid caregiving rises sharply, if unpaid care doesn't double in quantity to fill the gap, higher levels of immigration are one way of increasing the workforce of caregivers.

For some previous posts on the coming challenges of providing long-term care, along with some international perspective, see:

Also, some readers may be interested in digging further into the "US 2050: Research Projects" from the Peterson Foundation. There are 31 papers on topics including population trends, early investments in children, employment and adult workers, caregiving (the focus of this post), retirement, and politics.

The Global Paper Industry: Still on the Rise

Paper is an old industry, dating back to 100 BC in China. For several decades now, there have been predictions that paper would decline, as businesses converted to the "paperless office" and as people moved to reading online rather than on dead trees. How is that transition going? The short answer is "only OK." For a longer answer, the Environmental Paper Network offers a review in The State of the Global Paper Industry, subtitled "Shifting Seas: New Challenges and Opportunities for Forests, People and the Climate" (April 2018).

The report notes (footnotes omitted):

Paper use increases year on year and has quadrupled over the past 50 years. In 2014, global paper production hit 400 million tonnes per year for the first time … More than half of this paper is consumed in China (106 million tonnes), the USA (71 million tonnes), and Japan (27 million tonnes), with a further quarter in Europe (92 million tonnes). The entire continent of Africa accounts for just 2% of global paper use, consuming a mere 8 million tonnes per year. Oceania and Latin America between them account for around 8%.

Here's a figure and a table showing total paper consumption by region over time. Notice that paper consumption in North America has been falling. Paper consumption is near-zero in Africa and not much higher in Latin America and Oceania. It's rising fast in Asia, which is in large part a China effect.

Here's a figure showing per capita consumption of paper, with North America still leading the way.
Why has the demise of paper been so slow to arrive? As shown in the table above, one main reason is the growth of paper production in China. The report notes: "China alone, with its rapid build-up of capacity over the past two decades, has taken over as the leading paper producer, providing more than 25% of the world’s paper. The USA, long the global leader in paper production, moved to second place in 2009." This pattern raises the possibility that paper consumption is also likely to rise substantially if and when economic growth proceeds in other parts of the world.
The other reason is that most consumption of paper products isn't about newspapers, reports, and other reading material. It's packaging. Here's a pie chart showing the breakdown of uses of paper in a recent year, and a figure showing the change over time.

A main concern for the Environmental Paper Network is that paper production is often an environmentally dirty industry. The report notes: 

The pulp and paper manufacturing industry is one of the world’s biggest polluters and must evolve to employ best available technologies and new innovations to clean up its act. The sector is not only the fifth largest consumer of energy, accounting for 4% of all the world’s energy use, but the process of paper uses more water to produce one ton of product than perhaps any other industry. On average 10 litres of water are required to make one A4 sheet of paper – in some cases, it’s as high as 20 litres. The chemically intensive nature of the paper pulping and bleaching process is far from clean. The toxic chemicals used often end up being discharged as effluent into waterways where they pollute rivers, harm eco-systems, bio-accumulate and eventually enter the food chain. Besides carbon emissions, pulp and paper mills also release air pollutants in the form of fine particulate matter (PM2.5), nitrogen and sulphur oxides which can also affect public health. While the industry has made some progress in recent years to operate more sustainably, it has been slow to adopt advances in technology that can deliver higher energy savings and water reductions whilst promoting less toxic production methods.

The report is also careful to note that reductions in paper use don't always provide environmental benefits. Using paper towels in a public restroom is probably more environmentally friendly than a hot-air dryer. Even the move from paper to digital communication can be tricky to convert into environmental pluses and minuses. The report notes: "Life-cycle assessments of some commodities, for example of books, have compared the energy or climate change costs of paper and electronic alternatives, drawing conclusions about how many e-books need to be read on an e-reader before the unit energy costs are less than the paper option. Few of such studies adequately address the full life-cycle impacts of digital devices, including all the minerals used in their production and post-disposal impacts."

Thus, the challenge in thinking about environmental effects of the paper industry is to focus on uses where the social benefits of paper are relatively low. The report uses the economics language of "utility" to discuss this subject, although the term isn't quite used in the textbook economics sense:

Some paper applications have considerable social benefits, and therefore high utility. Other applications have either no social benefits, a highly limited lifespan or much more durable alternatives (or more than one of these). They are therefore deemed to be low utility. In surveys of opinion of the utility of different paper applications, the results have assigned high utility to such items as legal papers, passports, money, medical records, toilet paper and books, and low utility to unread magazines, unwanted direct mail (junk mail), excessive packaging and throwaway cups. Reducing use in paper applications that are high volume and low utility can make a big impact, while not causing disadvantage. Excessive packaging, therefore, is an example of a good place to look for efficiencies. Reducing use of paper napkins, on the other hand, being low utility but also relatively low volume, will make less impact, while reducing the use of books, which are fairly high volume but also high utility, could be unpopular and limit the sharing of information by people that have no access to digital devices.

Measuring overall recycling in an economy is hard, and the statistics aren't great. But here's one set of estimates on the extent of paper recycling.

Paper has its uses, and some of those uses seem likely to persist and even to rise with global economic growth. The challenge, as usual, is to strike a cleaner balance between economic benefits and environmental costs. 

Interview with William “Sandy” Darity Jr.: Inequality, Race, Stratification, and More

Douglas Clement interviews William "Sandy" Darity Jr. in The Region magazine from the Federal Reserve Bank of Minneapolis (June 3, 2019). As the subtitle reads: "His recent focus has been on reparations for African Americans, but his scholarship spans decades and ranges from imperialism to psychology, from “price-specie flow” to rational expectations." Here are a few points that caught my eye, but the entire interview is worth reading.

Wealth Inequality by Race

The racial wealth gap is customarily measured at the median for households to bypass the problems that are created by large outliers. At the median, when we’re taking the middle households, the most recent data from the Survey of Consumer Finances (SCF) for 2016, I believe, places the white household median at $171,000 and the black household median at $17,600. So, essentially, at the median, blacks have 1 cent in wealth to every 10 cents held by whites. [The SCF 2016] probably has the most conservative estimate of the gap. If, for example, you use the Survey of Income and Program Participation from 2014, which I believe is the most recent year that it’s been taken, the ratio is closer to 1 cent for blacks per 13 cents for whites at the median.

In work that our research group has done for the National Asset Scorecard for Communities of Color, we attempted to get data about individual metropolitan areas throughout the United States, where it might be possible to look at the wealth position of very specific national origin groups. All of our cities have much lower estimates of black median wealth than the national statistics. The number of cities that we’ve studied is substantial but hardly comprehensive. It’s been Boston, Los Angeles, Washington, D.C., Miami, Tulsa, and Baltimore. …  That’s what we found. $8 is the median [wealth of black households in Boston]. In Miami, it’s $11. I’m not sure how the national statistics get as high as $17 thousand; it’s not really consistent with what we’re finding. So I’m just not sure. There’s something odd. We consistently found, across all of these cities, much, much lower estimates of median black household wealth than you see in the national data. …

I’m absolutely convinced that the primary factor determining household wealth is the transmission of resources across generations. The conventional view of how you accumulate wealth is through fastidious and deliberate acts of personal saving. I would argue that the capacity to engage in some significant amount of personal saving is really contingent on already having a significant endowment, an endowment that’s independent of what you generate through your own labor.

That being the case, I think that there’s actually some superb research that’s recently come out that supports the importance of what I’d like to call intergenerational transmission effects, rather than intergenerational transfers. I think these effects go beyond inheritances and gifts. I think it includes the sheer economic security that young people can experience being in homes where there is this cushion of wealth. It provides a lack of stress and a greater sense of what your possibilities are in life. … The sociologists Pfeffer and Killewald have done very, very powerful work on the relationship between grandparents’ and parents’ wealth and the wealth of the youngest generation when it’s of adult age. The connection or the correspondence between which households have higher levels of wealth across three generations is pretty strong.

Then there’s the work of two economists who are with the Fed, Feiveson and Sabelhaus. Their work shows that at least 26 percent of the net worth of a person in the current generation is determined by their parents’ wealth. At least 26 percent. And that’s their lower bound. …  And if your family’s wealthy enough, you come out of college or university without any educational debt. That can be a springboard to making it easier for you to accumulate your own level of wealth.

What's a "baby bond"?

Baby bonds are not really a bond. They’re really a trust account for each newborn infant. It would be different from other types of programs like seed accounts or child savings accounts because no contribution would be expected from parents, whether they’re rich or poor. The amount of the trust account would vary with the wealth position of the child’s family. It would vary on a graduated basis, so we wouldn’t have any kinds of notch effects. That’s basically the idea.

In most of the versions of the proposal that we’ve advanced, we’ve said the federal government is essentially providing a publicly funded trust account to every newborn child, so it’s a birthright endowment. We would guarantee a 1 percent real rate of interest until the account can be accessed by the child when they reach young adulthood. There’s some debate among us about what that young adulthood date should be.
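To get a sense of scale for the guaranteed 1 percent real return, here is a quick compounding calculation. The starting endowment below is purely hypothetical; under the proposal the amount would be graduated by family wealth.

```python
endowment = 20_000        # hypothetical initial trust amount, in today's dollars (illustrative only)
real_rate = 0.01          # guaranteed 1 percent real return (from the proposal)
years = 18                # one possible "young adulthood" access age

value = endowment * (1 + real_rate) ** years
print(f"real value at access: ${value:,.0f}")   # roughly $23,900 in today's dollars
```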

What is "stratification economics"?

Stratification economics is an approach that emphasizes relative position rather than absolute position. What’s relevant to relative position are two considerations: one, a person’s perception of how the social group or groups to which they belong have standing vis-à-vis other groups that could be conceived of as being rival groups.

Now, it’s an interesting issue as to who constitutes groups that are viewed as rivalrous or oppositional in some sense. But the first thing that individuals value is a superior position for the groups with which they identify. The second thing that they value is a superior position relative to other members of their own group. … There are two sets of comparisons that are going on: an across-group comparison and a within-group comparison. This kind of frame as the cornerstone for the analysis comes out of, in part, the old work of Thorstein Veblen and also out of research on happiness. The latter increasingly shows that people have a greater degree of happiness if they think that they’re better off than whoever constitutes their comparison group rather than simply being better off; so it’s comparative position that comes into play.

Conventional economics doesn’t start with an analysis that’s anchored on relative position, as opposed to absolute position; so I think that’s the fundamental shift in stratification economics. But also important to stratification economics is the notion that people have group affiliations or group identifications. People feel like they’re part of a team. There can be varying degrees of attachment but, in some sense, people think of themselves as being part of a team, and they want their team to win. That’s somewhat different from conventional economics.

Japan: The Challenges of Aging, Slow Growth, and Government Debt

Japan is the third-largest economy in the world, behind the US and China. Its experience seems to foretell some of the key issues facing other high-income economies, like slow productivity growth, rapid aging, and rising government deficits. But in the last few years, it also seems to have recovered to at least a moderate rate of economic growth. What are some of the main patterns and lessons in Japan? For background, I'll draw on the work of the OECD, which just published one of its "Economic Surveys" of Japan in April 2019.

Back in the 1980s, a number of popular books and reports published in the US anointed Japan as the future leader of the global economy. A standard claim was that the disorganized competitive market forces of the US economy were unable to keep up with the government-directed cooperative ventures of Japan's economy. Then in the early 1990s, Japan's economy experienced a meltdown in stock and housing prices, and its economy entered a period of near-zero growth. Here's a figure comparing Japan's growth in per capita terms to that of the rest of the OECD countries. The left-hand set of bars shows that when it comes to per capita output, Japan's growth was lagging well behind and is now catching up. The right-hand set of bars shows how this pattern is linked to an aging population. If one looks only at Japan's output relative to its working-age population, it wasn't all that far behind from 1997-2012, and has actually been ahead of average OECD growth since 2012.

Japan is facing a declining population and workforce, and the share of the population that is elderly is on the rise. This rising share of elderly has been driving up government spending on pensions and health care, and together with attempts to stimulate its economy through government spending (much of it on infrastructure), Japan has run up an enormous government debt. In the last few years, it has been aggressively using the Bank of Japan to buy and hold its government debt. Meanwhile, productivity growth has been stagnant. Let's say just a bit more about these patterns.

Here's a figure showing Japan's total population, broken down by age group. The OECD writes: "With Japan’s population projected to fall by one-fifth to around 100 million by 2050, many parts of the country are facing depopulation. Efficiency would be increased by expanding the joint provision of local public services, including health and long-term care and infrastructure, across jurisdictions and developing compact cities."

Here's the change in total population and working-age population from 2000-2018. The working-age population is dropping fast in Japan, and is roughly flat in Germany and Italy. Although it's rising in the other countries, the aging of population in these other countries is coming, too.

The combination of slow growth and a declining population has meant ongoing declines in the price of real estate in Japan for most of the last three decades, before stabilizing in the last few years.

Here's a figure showing Japan's population age 65 and older as a share of the working-age population aged 20-64. The bar shows the level in 2017; the arrow shows where it's headed by 2050. Many high-income countries are getting older, but Japan is an extreme case. The OECD writes: "Half of the children born in Japan in 2007 are expected to live to the age of 107, which has major implications for the labour market. The number of elderly is projected to rise from 50% of the working-age population in 2015 to 79% by 2050 …"

Supporting the elderly and attempting to stimulate the economy have led to very high levels of government debt in Japan. The OECD writes: "Twenty-seven consecutive years of budget deficits have driven gross government debt to 226% of GDP in 2018, the highest ever recorded in the OECD area. The government projects that population ageing will boost spending on health and long-term care by 4.7% of GDP by 2060. Measures to ensure the sustainability of Japan’s social insurance programmes, as spending rises and the number of working-age persons falls from 2.0 per elderly to 1.3 by 2050, is a priority."

Japan has traditionally had a high savings rate, and in the past, the common pattern was that Japan's government debt was mostly funded by the high savings levels of its citizens. However, in the last few years the Bank of Japan has become much more aggressive than other countries in its "quantitative easing," where the central bank essentially prints money to buy government debt.
All of this is happening against a backdrop of relatively low labor productivity in Japan. This figure compares Japan to countries in the upper half of the OECD nations–that is, those countries that have higher income levels. A common pattern is that the labor input in Japan is higher than in the comparison group, because labor force participation and hours worked in Japan are high. However, the productivity of labor in Japan has been well below the comparison group. A shrinking labor force and lagging productivity are not a recipe for success.
So what needs to be done in Japan? Clearly, a main approach has been to try to jump-start the economy with large fiscal deficits and aggressively loose monetary policy. While this seems likely to continue, the OECD warns that it's not a strategy that can be pursued forever. Ultimately, an economy needs to have the output of its workforce expand, and in Japan's case this will have to happen while the number of people in the workforce is falling.
One set of approaches is to get more work from the existing workforce. The OECD notes that as life expectancies head toward 100 years and higher, the traditional patterns of retirement need to change. The report says: 

More than 80% of [Japan's] firms continue to set mandatory retirement at age 60, even though life expectancy at that age is 26 years, up from 17 in 1970. While workers can continue until age 65, most are re-hired as non-regular workers at significantly lower wages and in jobs that make less use of their skills. The right of firms to set a mandatory retirement age should be abolished to allow more workers to continue their careers, while fully utilising their skills. An end to mandatory retirement requires shifting away from seniority-based wage systems by giving more weight to job category and performance. In addition, the pension eligibility age should be raised above 65, as healthy life expectancy has reached 75. Lengthening careers in the era of 100-year life spans also requires lifelong learning and job-related training to avoid the decline in skill levels among older workers. An end to mandatory retirement would increase firms’ incentives to increase such investment in older workers, which is currently low in Japan. Finally, longer working lives would also be facilitated by better work-life balance for all workers by strictly enforcing the new 360-hour annual limit on overtime hours, imposing adequate penalties on firms that exceed it and introducing a mandatory minimum period of rest between periods of work.

The share of Japanese women in the labor force has risen in recent years, with a push from expanded child care programs. But Japan has long had a "dual-track" economy, with one set of workers who have regular work, good pay and benefits, and a career path, and a second track of irregular work, low pay, and little chance for advancement. Women in Japan have often ended up in this second track. The OECD writes:

The employment rate for women has risen sharply over the past five years, from 60.7% in 2012 to 69.6% in 2018, well above the 60.1% OECD average (Table 9). However, half of the new workers are non-regular workers. The working lives of women are interrupted and shortened by the burden of providing care for family members, leaving them under-represented in managerial positions and on boards of directors. … Removing barriers to women requires policies to: i) improve work-life balance by strictly enforcing the new 360-hour annual limit on overtime; ii) further reduce waiting lists for childcare; and iii) attack discrimination, which tends to exclude women from fast-track career paths. Breaking down labour market dualism is also essential, as women account for two-thirds of non-regular workers, who are paid substantially less.

Of course, pushing back retirement ages and expanding the existing workforce would also help to improve Japan\’s long-run budget picture. But the OECD report emphasizes that other efforts like cost-sharing in health care, means-testing of benefits for the elderly, and various kinds of cost-cutting will also be needed. 

How to get more productivity from Japan's workers? This issue has been at the heart of Japan's long-run problems for decades. Of course, Japan's economy has a number of well-known world-class companies at the highest level of global competitiveness. But it also has lots of small and medium enterprises with much lower productivity. The OECD writes: "Despite a high level of public support for SMEs [small and medium enterprises], productivity in large firms was 2.5 times higher than in SMEs in FY 2017 in manufacturing, a large gap by international standards …" Japan's service industries lag well behind their international peers in productivity, as well.

Subsidizing small and medium enterprises, as long as they remain small, is not a long-run path to higher productivity. Instead, the shrinking Japanese workforce offers a chance for these inefficient firms to be combined, reorganized, managed better, and exposed to greater competition. Many of these companies seem to be in a quirky situation where they complain that they don't have enough capacity to produce–but they aren't taking the steps and making the investments to push for higher productivity of their existing workforce. The OECD report talks a lot about reforms to corporate governance, so that Japan's companies would do less sitting on their piles of cash and more looking for growth and efficiency opportunities. But spreading a more productivity-based mindset across all the companies of Japan, not just the world leaders, isn't an easy task.

Japan has other issues beyond aging, budgets, and productivity. For example, Japan seems likely to bear costs from rising trade disputes involving China and other countries around the world, even if it often isn't directly involved in the complaints. But the success Japan has in addressing its challenges, for better or for worse, will shape how other high-income countries like the US view similar policy choices in the decades ahead.
For some additional perspective on Japan's economy, Tanweer Akram has written "The Japanese Economy: Stagnation, Recovery, and Challenges," in the Journal of Economic Issues (June 2019, pp. 403-410).

Some Snapshots of the Economic Well-Being of U.S. Households

For the last six years, the Federal Reserve has been doing an annual Survey of Household Economics and Decisionmaking, which is designed to be nationally representative of the 18-and-older US population. The most recent survey was carried out in October and November 2018, and the Federal Reserve published the results in "Report on the Economic Well-Being of U.S. Households in 2018" in May 2019. Here, I'll offer a few tables from the report that especially caught my eye. What was interesting to me is how the survey answers conveyed a sense both that the US economy is going pretty well and that many people feel dissatisfied.

For example, "Three-quarters of adults in 2018 indicate they are either 'living comfortably' (34 percent) or 'doing okay' financially (41 percent), similar to the rate in 2017. The rest are either 'just getting by' (18 percent) or 'finding it difficult to get by' (7 percent)." When people are asked how their financial situation compares to their parents at the same age, a majority says "better," with blacks and Hispanics more likely to say "better" than whites.

However, even with unemployment rates as low as they have been for a half-century, lots of people would like to work more than they currently are. "Two in 10 adults are working but say they want to work more hours." In other cases, those who are not working may be limited by health, family responsibilities, or an inability to find a suitable job.

It's also plausible that when some people with irregular or temporary jobs are saying that they want "more" work, what they are talking about is a job that is more secure and predictable in its hours. About one-third of respondents report that they have done "gig work" in the previous month.

In this survey, gig work covers personal service activities, such as child care, house cleaning, or ride-sharing, as well as goods-related activities, such as selling goods online or renting out property. This definition of gig work includes both online and offline activities, underscoring the fact that most of these activities predate the internet. Many adults who engage in gig work use it to supplement their income, but some rely on it for their main source of income. Finally, these gig activities are often done occasionally and do not take much time, and thus may not fit neatly in a standard concept of what is considered to be “work.”

A substantial share of people are not current on paying their bills--often because of an unpaid credit card balance.

Even without an unexpected expense, 17 percent of adults expected to forgo payment on some of their bills in the month of the survey. Most frequently, this involves not paying, or making a partial payment on, a credit card bill. Four in 10 of those who are not able to pay all their bills (7 percent of all adults) say that their rent, mortgage, or utility bills will be left at least partially unpaid. Another 12 percent of adults would be unable to pay their current month’s bills if they also had an unexpected $400 expense that they had to pay.

The survey also asks about the \”unbanked\” and the \”underbanked\”:

Six percent of adults do not have a checking, savings, or money market account (often referred to as the “unbanked”). Two-fifths of unbanked adults used some form of alternative financial service during 2018—such as a money order, check cashing service, pawn shop loan, auto title loan, payday loan, paycheck advance, or tax refund advance. In addition, 16 percent of adults are “underbanked”: they have a bank account but also used an alternative financial service product. The remaining 77 percent of adults are fully banked, with a bank account and no use of alternative financial products.

Lots of people don\’t feel that they are on track for retirement.

There\’s lots more in the report, with breakdowns by income, education, student loans, housing, neighborhoods, and so on. But again, the overall sense is that most people feel fairly good about their situation, yet also carry a list of economic worries and insecurities: not enough work, being up against the edge on regular bills, and a sense that they aren\’t prepared for retirement. Of course, some of these worries are just the stuff of life. Some of them might be addressed by helping people learn to manage their money better. But the stresses of these uncertainties and insecurities are also real and meaningful in the lives of many people, and those people are likely to respond to politicians who acknowledge these concerns and offer promises to address them.

The US-Chinese Trade War: Why Now? At What Cost?

As a person who attempts to avoid rhetorical excess, except in my personal life, I\’ve hesitated to refer to the US-China trade disputes as a \”trade war.\” But it\’s gone past being a skirmish, a tussle, or a melee. It\’s gone beyond a battle, too. For those trying to gain an overall perspective, a useful starting point is Trade War: The Clash of Economic Systems Threatening Global Prosperity, a readable e-book of 11 essays plus an introduction, edited by Meredith Crowley (VoxEU.org, CEPR Press, May 2019, available with free registration).

Here, I want to pass along some of the arguments as to why the US-China trade war has erupted, and what the costs are likely to be. For economists looking at trade issues, the Trump presidency is certainly part of what\’s happening, but it is also operating against a particular economic and institutional backdrop that is worth noticing. Thus, here are four reasons that have helped lead to the US-China trade war.

1) The \”China shock\” of 2001

China entered the World Trade Organization in 2001. As Justin R. Pierce and Peter K. Schott point out in their essay, \”The costs of US trade liberalisation with China have been acute for some workers,\” the US also adopted \”permanent normal trade relations\” with China at the tail end of the Clinton administration in October 2000. Before this change, tariffs on imports from China were low, but these low tariffs needed to be explicitly re-approved by the president on an annual basis, and Congress had the power to override the president\’s decision. With these changes, the low tariff rates on imports from China were locked in.

An extraordinary rise in China\’s exports and trade surplus followed, a rise that was not anticipated either by China (in its official five-year plans) or by the US. Here are a couple of figures based on World Bank data to illustrate. The first one shows that China\’s exports of goods and services as a share of GDP had been rising in the 1980s and 1990s, but then absolutely took off in the early 2000s, rising from about 20% of China\’s GDP in 2001 to 34% of China\’s GDP in 2006. Not coincidentally, China\’s trade surplus had been about 1.3% of GDP in 2001, but spiked up to 10% of China\’s GDP by 2006.

As Pierce and Schott emphasize, the China shock hit manufacturing jobs in certain parts of the US especially hard. They write:

The sharp drop in US manufacturing employment after 2000 differs markedly from the more gradual decline in manufacturing employment that occurred during the prior two decades. Indeed, in the 21 years following the peak of US manufacturing employment in 1979 to just before PNTR [permanent normal trade relations], US manufacturing employment fell by 2.3 million (or 12%). In the next four years, from 2000 to 2003, it fell by 2.9 million (or 17%) – a decline that is roughly as large as that experienced in the four years following the onset of the Great Recession.

The figures above also show that the \”China shock\” diminished dramatically about a decade ago, around the end of the Great Recession. By a few years ago, China\’s exports and trade surplus had already fallen back to where they were around 2001. But the legacy of that shock has lived on in the communities most affected.

2) The Dominance of the US Economy Has Declined

The United States, together with the countries of western Europe, has long been at the forefront of the push for reducing global barriers to trade. However, the primary source of economic growth in the world economy in the 21st century, and looking ahead for the next several decades, is the \”emerging market\” economies. In \”Understanding trade wars,\” Aaditya Mattoo and Robert W. Staiger write that when a single economy dominates the global economy, it can be in the interest of that large economy to have a rules-based trading system for all countries. But if that large economy loses its dominance, it may prefer to shift to a \”power-based\” system of negotiating tariffs with specific trading partners. They write:

In 1947, the US was the unquestioned hegemon of the world economy and played a central role in the creation of the GATT (Irwin et al. 2011). Below we describe how it can be in the enlightened self-interest of a sufficiently dominant hegemon to provide support for a rules-based system that limits its ability to exercise power; but as the dominance of the hegemon wanes, this support can erode, precipitating the collapse of the rules-based system until another sufficiently dominant hegemon rises to take its place.

One aspect of this insight is that when China was a much smaller economy, several decades back, it didn\’t matter all that much to the overall US economy whether China\’s exports were up or down, or whether some US technology was ending up in the hands of Chinese firms. Having rules to govern the world trading system mattered more. But now that China\’s economy is close to that of the United States in size (or larger, depending on which exchange rate is used for the comparison), the US cares a lot more about these specific issues with China, and the advantages of overall rules matter less.

3) The World Trade Organization Rules Seemed Too Weak 

Part of the reason for tariffs and a trade war is that the dispute resolution procedures in the World Trade Organization seemed so weak. Chad Bown discusses this issue in \”The 2018 trade war and the end of dispute settlement as we knew it\”: \”The idea that WTO dispute settlement was not well-positioned to tackle a suite of Chinese policies whose economic effect was to act against the spirit – if not the legal letter – of WTO rules …\”

For example, the WTO does have rules against countries using clear-cut industrial subsidies to boost their trade surplus. But if a country is providing subsidies through a mixture of cheap credit from state-owned banks and tweaks in its tax code that have the effect of favoring certain industries, it\’s not clear that the WTO dispute process works well. Luca Rubini digs deeper into the problems of how to measure subsidies and how to have rules about them in \”The never-ending story: The puzzle of subsidies.\”

Similarly, the WTO has rules against stealing intellectual property. But if a country has requirements for joint ventures between foreign and domestic firms, and antitrust laws just happen to be enforced more against foreign firms, and the net effect of these policies is a high level of pressure for technology transfer to domestic firms, then it\’s not clear that an appeal to the WTO will work.

In addition, the US was in the process of arguing that the WTO appeals process was too strong, and that it was handing down unfair decisions against the US. This made it difficult to simultaneously argue that the WTO appeals process should be strengthened to address trade issues with China.

4) The Trump Administration Stops Dancing Around Tariffs, and Starts Dancing With Them

Politicians of both parties have been dancing around tariffs and protectionism for several decades now. For example, if one goes back to about 2010, or the anti-globalization protests of the late 1990s, or the arguments over NAFTA in the early 1990s, it\’s easy to find politicians of both parties who were happy to express very grave reservations about trade, but who would eventually sign on to a compromise and amendment-laden bill reducing barriers to trade.

The Trump administration stopped equivocating about protectionism, and embraced it. President Trump has stated plainly his views that tariffs are good, that trade wars are easy to win, and that if no trade occurs, the US economy wins big. The president\’s trade advisers have said that countries are unlikely to retaliate against US tariffs, but then have had to support subsidies for industries like agriculture that suffered such retaliation.

What are the costs of the US-China trade war? It\’s obviously hard to evaluate costs when the trade war is still going on, and perhaps still escalating. Ralph Ossa, in his essay \”The costs of a trade war,\” writes that a fully escalated trade war would reduce GDP by about 2% in the US, China, and the EU, and by considerably more in many smaller economies (like Mexico, Canada, or Switzerland).

Other essays point out that the longer-term disruptions of production and trade can be quite important. After all, even if the US and China signed an agreement next week or next month that settled all their disagreements and brought tariffs back to 2017 levels, companies around the world have now been put on notice that global supply chains are at risk. For years or decades into the future, they will be less willing to rely on flows of supplies, products, and innovation across international borders. These global supply chains were built up over decades, and disrupting or shutting them down will not be a costless process. Several of the papers in this book take on the topic of trade barriers in an era of global value chains, including the papers by Emily J. Blanchard and by Yi Huang, Chen Lin, Sibo Liu, and Heiwai Tang.

The results of the rise in US tariffs have been utterly predictable: higher prices for consumers of products like steel and washing machines, and relatively few jobs saved at high cost. The imposition of tariffs has been followed by higher US trade deficits. But ultimately, a full-fledged trade war is about more than some tariff hikes. At this point, we aren\’t just arguing over details like whether China\’s rules for technology transfer are unfair (which they are).

Instead, as a society we are grappling with bigger questions about whether the US would be better off if it created considerably more separation between itself and the rest of the global economy. And we are arguing over whether the world economy and political system are better off when international economic linkages are rising or falling. I don\’t expect that the Trump administration will learn any lessons here, although the costs of its trade policies to US consumers and firms may at some point force it to back down. However, I\’m intrigued to see whether anti-Trump forces will respond to future advocates of tariffs and trade wars in the same way they have responded to Trump, or whether they only oppose tariffs and protectionism when they are initiated by the Trump administration.

_________________
Here\’s a full Table of Contents for the book:

Introduction
Meredith A. Crowley

Part 1: The origins of the trade conflict

1 The costs of US trade liberalisation with China have been acute for some workers
Justin R. Pierce and Peter K. Schott

2 The 2018 trade war and the end of dispute settlement as we knew it
Chad P. Bown

3 Understanding trade wars
Aaditya Mattoo and Robert W. Staiger

Part 2: The costs of trade wars

4 The costs of a trade war
Ralph Ossa

5 How exporters respond to tariff changes
Doireann Fitzgerald

6 Trade wars in the GVC era
Emily J. Blanchard

7 Supply chain linkages and financial markets: Evaluating the costs of the US-China trade war
Yi Huang, Chen Lin, Sibo Liu and Heiwai Tang

Part 3: The challenges for the world trading system

8 Misdirection and the trade war malediction of 2018: Scaling the US-China bilateral tariff hikes
Simon J. Evenett and Johannes Fritz

9 The never-ending story: The puzzle of subsidies
Luca Rubini

10 The policy uncertainty aftershocks of trade wars and trade tensions
Kyle Handley and Nuno Limao

11 China\’s rise and the growing doubts over trade multilateralism
Mark Wu

Pareidolia: When Correlations are Truly Meaningless

\”Pareidolia\” refers to the common human tendency to look at random outcomes and try to impose patterns on them. For example, we all know in the logical part of our brains that there are roughly a kajillion different variables in the world, and so if we look through the possibilities, we will have a 100% chance of finding some variables that are highly correlated with each other. These correlations will be a matter of pure chance, and they carry no meaning. But when my own brain, and perhaps yours, sees one of these correlations, I can feel my thoughts start searching for a story to explain what looks to my eyes like a connected pattern.

Here are some examples from Tyler Vigen\’s website, drawn from his 2015 book Spurious Correlations.

Eye-balling these kinds of figures gives you a sense of why these correlations arise. For example, if a figure has both a right-hand and a left-hand axis, you can set the scales so that the starting points and the ending points of the two lines are close to each other, and then the intermediate stretches of the lines will tend to look fairly similar as well. If the comparison involves a certain statistic in a certain state (divorces in Maine, fishing accidents in Kentucky), your statistical antennae should be warning you that by the time you look through a large group of family or health statistics for each of the 50 states, there\’s a reasonable chance of finding whatever pattern you are looking for just by random chance. If you limit the search to relatively short stretches of data, like a decade or so, and plug in your computer to sort through the possibilities, finding meaningless correlations isn\’t going to be hard.
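To make the point concrete, here is a minimal sketch in Python. The data are made up (not Vigen\’s actual series), and the choice of ten annual observations and 5,000 candidate series is arbitrary; the point is only that a brute-force search over many unrelated variables reliably turns up an impressive-looking correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 10          # a short stretch of data, e.g. one decade of annual observations
n_candidates = 5000   # the pile of unrelated series to search through

# One target series and many candidate series, all pure noise with no real relationship.
target = rng.normal(size=n_years)
candidates = rng.normal(size=(n_candidates, n_years))

# Correlation of each candidate series with the target.
corrs = np.array([np.corrcoef(target, c)[0, 1] for c in candidates])

best = np.argmax(np.abs(corrs))
print(f"Strongest |correlation| found by search: {abs(corrs[best]):.2f}")
# With thousands of candidates and only ten data points each, the best match
# usually comes out around 0.9 or higher, even though every series is random noise.
```

The mechanism is the same one behind the divorces-in-Maine style of chart: the search space is huge relative to the amount of data, so a near-perfect-looking match is close to guaranteed.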

Of course, at the more serious level of academic research, these types of issues can still arise. Imagine that a researcher is trying to look at the effects of a particular large-scale program. The researcher has lots of data to divide people up into groups: by age, work status, family status, geographic location, education, health, race/ethnicity, gender, religion, and more. The researcher also has lots of possible outcomes for these people: income, marriage or divorce, childbearing, health, employment, retirement, and others. If a researcher looks at all the possible subcategories, it will inevitably be true that this program will seem to have major effects in a certain group: for example, the program may be correlated with a big change in the divorce behavior of white people in the 35-54 age bracket with low levels of religious observance in the state of New York.  But if you (or your computer program) scanned through literally thousands of subgroups and possible effects to find this specific correlation, it\’s fair to assume that the correlation is just as meaningless as any of the examples presented by Vigen. 
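Here is a similar sketch, again with made-up data, in which the hypothetical \”program\” has, by construction, zero effect on anyone; the 1,000 subgroups and 20,000 people are arbitrary illustrative numbers. Scanning the subgroups still produces a healthy crop of apparently significant findings purely by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people = 20000
n_subgroups = 1000    # e.g. combinations of age bracket, state, education, and so on

# Assign each person to a random subgroup and, at random, to treatment or control.
subgroup = rng.integers(0, n_subgroups, size=n_people)
treated = rng.integers(0, 2, size=n_people).astype(bool)

# The outcome is pure noise: the "program" has zero true effect for everyone.
outcome = rng.normal(size=n_people)

false_positives = 0
for g in range(n_subgroups):
    in_group = subgroup == g
    treated_outcomes = outcome[in_group & treated]
    control_outcomes = outcome[in_group & ~treated]
    if len(treated_outcomes) > 1 and len(control_outcomes) > 1:
        _, p_value = stats.ttest_ind(treated_outcomes, control_outcomes)
        if p_value < 0.05:
            false_positives += 1

print(f"Subgroups showing a 'significant' program effect at p < 0.05: {false_positives}")
# Roughly 5% of the 1,000 subgroups -- around 50 of them -- clear the usual
# significance threshold even though the true effect is zero everywhere.
```

This is why careful studies either specify in advance which subgroups they will examine or adjust their significance thresholds for the number of comparisons being made.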
Classes in statistics emphasize that \”correlation doesn\’t mean causation.\” The lesson here is even stronger. Correlation doesn\’t necessarily mean anything at all.