What’s Driving the Long-Run Deficit Forecasts?

The headline finding from The 2016 Long-Term Budget Outlook, just published by the Congressional Budget Office, is that the ratio of federal debt to GDP is projected to rise from its current level of 75% in 2016 to 141% in 2046–which would be the highest level ever for the US economy.

As a starting point, the long-run pattern of federal debt-to-GDP looks like this when looking back over US history and then projecting forward 30 years. Previous peaks for federal debt include World War II, World War I, the Civil War, and the Revolutionary War, as well as rises in debt incurred during the 1930s and the 1980s. But the CBO projections suggest that US borrowing isn’t on a sustainable path.

What’s driving these estimates? Essentially, the CBO estimate is a status quo projection. It’s based on current laws, combined with existing trends for population (like the aging of the population) and a few other estimates (like interest rates and health care costs). Of course, the report also includes how the estimates would be affected by changes in laws and economic parameters. But for the moment, let’s just focus on the central estimates for spending and taxes, which look like this:

As an overall statement, the CBO projects a large rise in the debt-to-GDP ratio because under current law government spending is projected to rise over time as a share of GDP, while taxes are not. In the major categories of federal spending shown in the top panel, the two categories with the biggest projected rise over the next few decades are major health care programs and net interest payments. The tax projections are, again, a status quo forecast of not much change over time, although individual income taxes rise a bit because (under current law) some taxpayers will be bumped into higher tax brackets over time and because in 2020, taxpayers who are receiving high-cost health insurance from employers are scheduled to start owing some income tax on some of the value of that insurance.

Of course, projections like these are mutable. As Ebenezer Scrooge says to the Spirit of Christmas Future, before he looks at his own gravestone: “'[A]nswer me one question. Are these the shadows of the things that Will be, or are they shadows of things that May be, only? … Men’s courses will foreshadow certain ends, to which, if persevered in, they must lead,' said Scrooge. 'But if the courses be departed from, the ends will change. Say it is thus with what you show me!'”

Net interest payments are essentially determined by two factors: how much the federal government has borrowed, and what interest rate it needs to pay. The CBO estimate is based on the (real, 10-year) interest rate that the federal government needs to pay hovering at 2%–more-or-less its current level. If interest rates were to keep falling so that the applicable rate was 1% or less, or to rise to 3% or more, the debt forecast would move considerably.
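To get a feel for how sensitive a 30-year debt projection is to the interest-rate assumption, here is a minimal back-of-the-envelope sketch of standard debt dynamics (my own illustration, not the CBO's model): next year's debt-to-GDP ratio equals this year's ratio scaled by (1 + r)/(1 + g), plus the primary deficit as a share of GDP. The starting ratio, growth rate, and primary-deficit figures below are illustrative assumptions, not CBO numbers.

```python
# Back-of-the-envelope debt dynamics (illustrative only, not the CBO model):
#   d_{t+1} = d_t * (1 + r) / (1 + g) + primary_deficit
# where d is debt/GDP, r is the real interest rate on federal debt,
# g is real GDP growth, and primary_deficit excludes interest payments.

def project_debt(d0, r, g, primary_deficit, years):
    """Project the debt-to-GDP ratio forward under constant assumptions."""
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + primary_deficit
    return d

# Assumed inputs: start at 75% of GDP, 2% real growth,
# a primary deficit of 2% of GDP, 30-year horizon.
for r in (0.01, 0.02, 0.03):
    final = project_debt(d0=0.75, r=r, g=0.02, primary_deficit=0.02, years=30)
    print(f"real interest rate {r:.0%}: debt-to-GDP after 30 years ≈ {final:.0%}")
```

Even in this toy version, moving the assumed real rate from 1% to 3% shifts the end-of-horizon ratio by tens of percentage points of GDP, which is the sense in which the forecast moves considerably.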

The level of health-care spending, on the other hand, is at least to some extent determined by the size of the government subsidies for health care through Medicare, Medicaid, the Children’s Health Insurance Program, the "marketplace" health insurance exchanges, and other methods. For example, a previous CBO study found that the federal subsidies to the "marketplace" health insurance exchanges will be about $110 billion this year. The share of Medicare spending which is covered by either payroll taxes of workers or premiums paid by the elderly keeps falling, so an ever-larger share of the cost of Medicare is covered by general funds.

If health care spending weren’t projected to keep rising, then federal borrowing wouldn’t climb as much, and interest costs wouldn’t be as much of a problem, either. In that sense, health care spending is at the heart of the distressing forecasts for where federal borrowing is headed in the long term. It’s not novel to say, but still worth pointing out, that higher health care spending is already crowding out other government spending at both the state and federal level. It would be a lot easier to contemplate lasting boosts in spending on education or a cleaner environment or anti-poverty programs if not for the looming specter of rising health care costs.

But in addition, it’s useful to think about what the CBO budget forecasts leave out. Past sharp rises in the debt-to-GDP ratio have often been associated with war, or with the aftermath of the Great Depression or the Great Recession. History suggests a reasonable chance that the next 30 years will bring one or both of these. The future may also bring other public priorities, like dealing with what is likely to be a very large expansion in the population of elderly needing long-term care, or rebuilding America’s 20th century transportation, energy, and communication infrastructure for the 21st century.

The overhanging shadow of rising health care costs influences other policy choices, too. Given that health care spending is already projected to drive federal borrowing to unprecedented levels, further expansions of government health care spending seem less appropriate. If raising taxes mainly just funnels more money to government health care spending, it will be even less attractive. Given the projections of federal borrowing rising to unprecedented levels, state and federal legislators will be especially tempted by regulatory policies that don’t impose a direct budgetary cost. The economic and political tradeoffs of high government health care spending are already with us, and are only going to bind more tightly over time.

Financial Stability Reform: Lots of Activity, Not Enough Progress

There has been lots of sound and fury about improving financial regulation in the seven years since the Great Recession ended in 2009. But have the necessary changes been made? In "Financial Regulatory Reform After the Crisis: An Assessment," a paper written for the 2016 European Central Bank Forum on Central Banking held at the end of June, Darrell Duffie basically says "not yet."

Duffie argues that there are four core elements of financial-stability regulation: "1. Making financial institutions more resilient. 2. Ending “too-big-to-fail.” 3. Making derivatives markets safer. 4. Transforming shadow banking." He writes: "At this point, only the first of these core elements of the reform, 'making financial institutions more resilient,' can be scored a clear success, although even here much more work remains to be done."

On the first goal of making financial institutions more resilient:

"These resiliency reforms, particularly bank capital regulations, have caused some reduction in secondary market liquidity. While bid-ask spreads and most other standard liquidity metrics suggest that markets are about as liquid for small trades as they have been for a long time, liquidity is worse for block-sized trade demands. As a tradeoff for significantly greater financial stability, this is a cost well worth bearing. Meanwhile, markets are continuing to slowly adapt to the reduction of balance-sheet space being made available for market making by bank-affiliated dealers. Even more stringent minimum requirements for capital relative to risk-weighted assets would, in my view, offer additional net social benefits. I will suggest here, however, that the regulation known as the Leverage Ratio has caused a distortionary reduction in the incentives of banks to intermediate markets for safe assets, especially the government securities repo market, without apparent financial stability benefits."

On the second goal of ending "too-big-to-fail":

"At the threat of failure of a systemically important financial firm, a regulator is supposed to be able to administratively restructure the parent firm’s liabilities so as to allow the key operating subsidiaries to continue providing services to the economy without significant or damaging interruption. For this to be successful, three key necessary conditions are: (i) the parent firm has enough general unsecured liabilities (not including critical operating liabilities such as deposits) that cancelling these “bail-in” liabilities, or converting them to equity, would leave an adequately capitalized firm, (ii) the failure-resolution process does not trigger the early termination of financial contracts on which the firm and its counterparties rely for stability, and (iii) decisive action by regulators. … [T]he proposed single-point-of-entry method for the failure resolution of systemic financial firms is not yet ready for safe and successful deployment. A key success here, though, is that creditors of banks do appear to have gotten the message that in the future, their claims are much less likely to be bailed out."

On the issue of making derivatives markets safer:

"Derivatives reforms have forced huge amounts of swaps into central counterparties (CCPs), a major success in terms of collateralization and transparency in the swap market. As a result, however, CCPs are now themselves too big to fail. Effective operating plans and procedures for the failure resolution of CCPs have yet to be proposed. While the failure of a large CCP seems a remote possibility, this remoteness is difficult to verify because there is also no generally accepted regulatory framework for conducting CCP stress tests. This represents an undue lack of transparency. Reform of derivatives markets financial-stability regulation has mostly bypassed the market for foreign-exchange derivatives involving the delivery of one currency for another, a huge and systemically important class. Data repositories for the swaps market have not come close to meeting their intended purposes. Here especially, the opportunities of time afforded by the impetus of a severe crisis have not been used well."

On the issue of transforming shadow banking:

"A financial-stability transformation of shadow banking is hampered by the complexity of non-bank financial intermediation and by the patchwork quilt of prudential regulatory coverage of the non-bank financial sector. … The Financial Stability Board (2015) sets out five classes of shadow-banking entities: 1. Entities susceptible to runs, such as certain mutual funds, credit hedge funds, and real-estate funds. 2. Non-bank lenders dependent on short-term funding, such as finance companies, leasing companies, factoring companies, and consumer credit companies. 3. Market intermediaries dependent on short-term funding or on the secured funding of client assets, such as broker-dealers. 4. Companies facilitating credit creation, such as credit insurance companies, financial guarantors, and monoline insurers. 5. Securitisation-based intermediaries. … While progress has been made, the infrastructure of the United States securities financing markets is still not safe and sound. The biggest risk is that of a firesale of securities in the event of the inability of a major broker dealer to roll over its securities financing under repurchase agreements. While the intra-day risk that such a failure poses for the two large tri-party repo clearing banks has been dramatically reduced, the U.S. still has no broad repo central counterparty with the liquidity resources necessary to prevent such a firesale. More generally, as emphasized by Baklanova, Copeland, and McCaughrin (2016), there is a need for more comprehensive monitoring of all securities financing transactions, including securities lending agreements."

Finally, I was struck by one of Duffie’s comments in passing about the costs of financial regulation:

"The costs of implementing and complying with regulation are among the tradeoffs for achieving greater financial stability. For example, in 2013 (even before the full regime of new regulations was in place) the six largest U.S. banks spent an estimated $70.2 billion on regulatory compliance, doubling the $34.7 billion they spent in 2007. Compliance requirements can accelerate or, potentially, decelerate overdue improvements in practices. The frictional cost of complying with post-crisis regulations is easily exceeded by the total social benefits, but is nevertheless a factor to be considered when designing specific requirements and supervisory regimes."

Appropriate financial regulation is an admittedly difficult policy problem. Still, it’s disconcerting that seven years after the end of the Great Recession, some obvious gaps and concerns remain–and of course, the concerns that we haven’t been able to anticipate remain as well.

When Technology Alters Jobs, but Doesn’t Replace Them

Sometimes technology does nearly eliminate certain categories of jobs: for example, I was watching the 1958 movie Auntie Mame last week, in which the fabulous Rosalind Russell–portraying a character from the early 1930s–has a short comedic take on being a switchboard operator at a law firm. I had to explain to my teenagers what she was doing, and that such a job used to exist. But it is more common for technology to alter jobs, rather than to eliminate them.

Michael Chui, James Manyika, and Mehdi Miremadi have been exploring which jobs are likely to be altered more or less by technology. They present some results in "Where machines could replace humans—and where they can’t (yet)" in the July 2016 issue of the McKinsey Quarterly. They are working with data from the US Department of Labor, through which they have a list of 800 occupations and 2,000 tasks that are performed in the context of those occupations. By estimating which tasks are most likely to be automated, they can figure out which occupations are most likely to be altered substantially by new technology. I’ll start here with a quick overview of their findings, and then offer some more nuanced thoughts.

The columns of this figure show six activities that are (broadly) involved in many jobs. The rows show job categories. The size of the circles shows what share of time on the job is spent in each activity. The color of the circle shows how easy it is, within that job category, to automate that activity. Thus, the first row shows that in food service, a large share of time is spent on "predictable physical tasks" that are fairly easy to automate. Indeed, one minor surprise of these findings is that "accommodations and food service" jobs, rather than manufacturing, have the highest technical potential for automation.
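To make the weighting idea concrete, here is a minimal sketch of the kind of calculation the figure implies (my own illustration with made-up numbers, not McKinsey's actual data or model): an occupation's technical automation potential is approximated as the time-weighted average of how feasible it is to automate each of its component activities.

```python
# Illustrative sketch of time-weighted automation potential.
# All numbers below are hypothetical, not from the McKinsey analysis.

feasibility = {  # assumed technical feasibility of automating each activity type
    "predictable physical work": 0.80,
    "collecting data": 0.65,
    "processing data": 0.70,
    "interfacing with stakeholders": 0.20,
    "managing others": 0.10,
}

time_shares = {  # assumed share of work time per activity for one occupation
    "accommodations and food service": {
        "predictable physical work": 0.55,
        "collecting data": 0.15,
        "processing data": 0.10,
        "interfacing with stakeholders": 0.15,
        "managing others": 0.05,
    },
}

def automation_potential(shares, feasibility):
    """Time-weighted average feasibility across an occupation's activities."""
    return sum(share * feasibility[activity] for activity, share in shares.items())

for occupation, shares in time_shares.items():
    print(f"{occupation}: technical automation potential ≈ "
          f"{automation_potential(shares, feasibility):.0%}")
```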

Here are a few more detailed insights:

1) Just because part of a job is automated doesn’t mean that the number of workers in that job necessarily declines. I posted about a year ago, in "ATMs and a Rising Number of Bank Tellers" (March 3, 2015), about how the dramatic rise in automatic teller machines has been accompanied by a rising number of bank tellers–although the job of "bank teller" has also evolved during this time. The McKinsey researchers offer another example. How would the deployment of bar-code scanners affect the number of cashiers? I would have guessed that their number would fall, and I would have been wrong. The authors write:

"Even when machines do take over some human activities in an occupation, this does not necessarily spell the end of the jobs in that line of work. On the contrary, their number at times increases in occupations that have been partly automated, because overall demand for their remaining activities has continued to grow. For example, the large-scale deployment of bar-code scanners and associated point-of-sale systems in the United States in the 1980s reduced labor costs per store by an estimated 4.5 percent and the cost of the groceries consumers bought by 1.4 percent. It also enabled a number of innovations, including increased promotions. But cashiers were still needed; in fact, their employment grew at an average rate of more than 2 percent between 1980 and 2013."

2) In a number of cases the question isn’t about whether a certain task can be automated, but whether the task happens in a repetitive and predictable context, or in a flexible context. They write: "Within manufacturing, 90 percent of what welders, cutters, solderers, and brazers do, for example, has the technical potential for automation, but for customer-service representatives that feasibility is below 30 percent."

3) Automation isn’t just about physical jobs that can be automated by robots. Many tasks performed by well-paid white-collar workers that involve collecting and processing data are vulnerable, too.

"Across all occupations in the US economy, one-third of the time spent in the workplace involves collecting and processing data. Both activities have a technical potential for automation exceeding 60 percent. Long ago, many companies automated activities such as administering procurement, processing payrolls, calculating material-resource needs, generating invoices, and using bar codes to track flows of materials. But as technology progresses, computers are helping to increase the scale and quality of these activities. For example, a number of companies now offer solutions that automate entering paper and PDF invoices into computer systems or even processing loan applications. And it’s not just entry-level workers or low-wage clerks who collect and process data; people whose annual incomes exceed $200,000 spend some 31 percent of their time doing those things, as well."

4) Just because it’s technically feasible for certain tasks to be automated doesn’t mean they necessarily will be automated.

"Technical feasibility is a necessary precondition for automation, but not a complete predictor that an activity will be automated. A second factor to consider is the cost of developing and deploying both the hardware and the software for automation. The cost of labor and related supply-and-demand dynamics represent a third factor: if workers are in abundant supply and significantly less expensive than automation, this could be a decisive argument against it. A fourth factor to consider is the benefits beyond labor substitution, including higher levels of output, better quality, and fewer errors. These are often larger than those of reducing labor costs. Regulatory and social-acceptance issues, such as the degree to which machines are acceptable in any particular setting, must also be weighed. A robot may, in theory, be able to replace some of the functions of a nurse, for example. But for now, the prospect that this might actually happen in a highly visible way could prove unpalatable for many patients, who expect human contact. The potential for automation to take hold in a sector or occupation reflects a subtle interplay between these factors and the trade-offs among them."

My own job as Managing Editor of the Journal of Economic Perspectives has been dramatically affected by technology over the years. When the journal first started in 1986, we had what was then a very innovative idea: authors would mail us floppy disks with the text of their papers. I would edit the actual paper, and mail it back to the authors to edit further. We would then mail the paper to the typesetter on the floppy disk. At the time, this was red-hot newfangled stuff! The task of hands-on editing remains very similar to what it was 30 years ago, but there have been lots of dramatic changes. The ways in which we communicate with authors have been fundamentally changed by email, attachments, shared mailboxes on the cloud, and easy conference calls. The tasks of looking up past articles and checking references used to require trips to the library, and are now done casually without leaving my desk. The distribution of the journal used to be all paper; then the journal became available online by subscription; then individual articles became freely available online; and now entire issues can be freely downloaded and read on a tablet or a smartphone.

Most jobs will be altered by technology. And most of us find that even as technology replaces certain tasks, it creates the possibilities for new tasks that could not previously be done–or at least couldn’t be done very cheaply or easily. This continual updating of jobs is one of the prices we pay for prosperity.

Carbon Capture and Storage: No Stone Unturned

Technologies for carbon capture and storage often don’t garner much political support. Those who think rising levels of carbon in the atmosphere aren’t much of a problem see little purpose for investments in technology to capture that carbon. Many of those who do think rising carbon emissions are a problem are emotionally wedded to a particular solution–reducing the use of fossil fuels and expanding solar and wind power, combined with better batteries–and they sometimes view carbon capture and storage as an excuse to continue the use of fossil fuels. My own belief is that the risks of climate change (and other environmental costs of fossil fuel use) aren’t likely to have one silver-bullet answer, and that all options are worth research and exploration, including not just non-carbon and low-carbon energy sources, but also energy conservation efforts and geoengineering, along with carbon capture and storage.

Back in 2005, the Intergovernmental Panel on Climate Change published one of its doorstop tomes called Carbon Dioxide Capture and Storage, summarizing what was known at the time. Here’s a sense of the tone of the report, emphasizing both the potential of carbon capture and storage (CCS) and the uncertainties about realizing that potential (footnotes deleted for readability):

In most scenarios for stabilization of atmospheric greenhouse gas concentrations between 450 and 750 ppmv CO2 and in a least-cost portfolio of mitigation options, the economic potential of CCS would amount to 220–2,200 GtCO2 (60–600 GtC) cumulatively, which would mean that CCS contributes 15–55% to the cumulative mitigation effort worldwide until 2100, averaged over a range of baseline scenarios. It is likely that the technical potential for geological storage is sufficient to cover the high end of the economic potential range, but for specific regions, this may not be true. Uncertainties in these economic potential estimates are significant. For CCS to achieve such an economic potential, several hundreds to thousands of CO2 capture systems would need to be installed over the coming century, each capturing some 1–5 MtCO2 per year. The actual implementation of CCS, as for other mitigation options, is likely to be lower than the economic potential due to factors such as environmental impacts, risks of leakage and the lack of a clear legal framework or public acceptance …
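As a rough plausibility check on how those cumulative figures relate to the per-facility capture rates, here is a quick back-of-the-envelope calculation (my own illustrative arithmetic and assumptions, not a computation from the IPCC report):

```python
# Back-of-the-envelope check (illustrative assumptions, not IPCC figures):
# do "several hundreds to thousands" of capture systems, each capturing
# 1-5 MtCO2 per year over most of a century, add up to the quoted
# cumulative economic potential of 220-2,200 GtCO2?

facilities = 1500        # assumed count within "several hundreds to thousands"
mt_per_year = 3          # assumed midpoint of the 1-5 MtCO2/year range
years = 90               # assumed: roughly the balance of the century

cumulative_gt = facilities * mt_per_year * years / 1000  # convert Mt to Gt
print(f"cumulative capture ≈ {cumulative_gt:,.0f} GtCO2 "
      f"(IPCC's quoted range: 220-2,200 GtCO2)")
```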

How has CCS evolved since then? The underlying idea here is to consider installing carbon capture technology at industrial or other facilities which use a lot of fossil fuels and where carbon emissions are especially high. The technology doesn’t eliminate such industrial emissions, but holds some promise for reducing them substantially. The more recent estimates for the potential of CCS seem to be at the very low end of what the IPCC discussed back in 2005. For example, the International Energy Agency produced a 2015 report called Carbon Capture and Storage: The solution for deep emissions reductions. When the title refers to "the solution," it dramatically oversells the actual content of the report. The conclusions are much more measured, focusing on CCS as a contributor to reducing carbon emissions in specific industrial settings that lack cost-effective alternatives to fossil fuels:

According to International Energy Agency (IEA) modelling, CCS could deliver 13% of the cumulative emissions reductions needed by 2050 to limit the global increase in temperature to 2°C (IEA 2DS). This represents the capture and storage of around 6 billion tonnes (Bt) of CO2 emissions per year in 2050, nearly triple India’s energy sector emissions today. Half of this captured CO2 in the 2DS would come from industrial sectors, where there are currently limited or no alternatives for achieving deep emission reductions. While there are alternatives to CCS in power generation, delaying or abandoning CCS in the sector would increase the investment required by 40% or more in the 2DS, and may place untenable and unrealistic demands on other low emission technology options.

The Global CCS Institute keeps track of the projects that are actually underway and offers a summary in its report The Global Status of CCS 2015. The report counts seven large-scale CCS projects (that is, not counting pilot or research-level projects) operating globally in 2010, 15 operating in 2016, and 22 expected to be operating by 2020. For example, one of the large-scale projects in 2015 is the Quest project being operated by Shell Oil in Canada. As the report notes: "Launched in Alberta, Canada in November 2015, the Quest project is capable of capturing approximately 1 Mtpa of CO2 from the manufacture of hydrogen for upgrading bitumen into synthetic crude oil. Quest is the first large-scale CCS project in North America to store CO2 exclusively in a deep saline formation, and the first to do so globally since the Snøhvit CO2 Storage Project became operational in Norway in 2008. A case study prepared by Shell documenting key learnings from the development of Quest is available here." Saudi Arabia also started operating a large-scale CCS project, the first one in the Middle East region, in mid-2015.

Although the biggest effect of CCS technology in the near term is likely to be focused on these kinds of industrial applications, there’s also an intriguing possibility that it can do more through what has become known as BECCS–that is, Bio-Energy and Carbon Capture and Storage. Imagine an energy-generating facility with CCS technology that burns biomass–that is, fuel developed from waste materials produced by forestry, agriculture, and perhaps other sources. Biomass is a renewable resource: in effect, it captures carbon from the atmosphere. If that carbon is captured and stored, and then more biomass is created, and the carbon from that biomass is captured and stored in turn, and so on–the result is a source of energy with negative overall carbon emissions. For discussion, here’s a boosterish 2012 report called Biomass with CO2 Capture and Storage (Bio-CCS): The Way Forward for Europe, produced on behalf of the European Biofuels Technology Platform and the Zero Emissions Platform. The IPCC has viewed this possibility as worth mentioning, too: for example, its 5th Assessment report in 2014 has comments like: “Many models could not limit likely warming to below 2°C if bioenergy, CCS and their combination (BECCS) are limited.”

The large-scale CCS projects now underway will tell us a lot about the costs and effectiveness of the technology in reducing carbon emissions in the next few years. If the feedback seems favorable, then bio-energy with CCS is a likely next step.

Chronic Student Absenteeism

The US Department of Education is starting this summer to release detailed results from a survey effort called the Civil Rights Data Collection, which collected an array of data from almost every public school in the country for the 2013-14 school year. One result is a short e-report on "Chronic Absenteeism in the Nation’s Schools." The report defines "chronic absenteeism" as missing at least 15 school days in a given year. Nationally, about one in eight students is chronically absent. But for non-white, non-Asian high school students, the average rates of chronic absenteeism are above 20 percent. Here’s a figure showing rates of chronic absenteeism by race/ethnicity and by level of school.

Chronic absenteeism is not surprisingly associated with lower school performance, both for the individual student and for schools where these rates are especially high. Of course, this correlation doesn’t mean that being absent, by itself, is the main causal factor. One suspects that students and schools with high rates of chronic absenteeism face a lot of other issues, and that absenteeism is a symptom of those broader issues. In that sense, chronic student absenteeism is a marker for a set of problems that K-12 schools face, where the school itself can’t directly do much about the underlying causes of many of those problems.

Update on the Income Share of the 1%

Emmanuel Saez has just made available his latest update on the share of income received by those at the very top of the income distribution. It’s in a short working paper called "Striking it Richer: The Evolution of Top Incomes in the United States (Updated with 2015 preliminary estimates)" (June 30, 2016). Saez offers a brief overview of the findings and a link to the underlying paper at the blog of the Washington Center for Equitable Growth.

Just to be clear, what’s being measured here is "cash market income"–that is, income received before taxes are paid and not counting government transfer payments. As Saez writes: "We define income as the sum of all income components reported on tax returns (wages and salaries, pensions received, profits from businesses, capital income such as dividends, interest, or rents, and realized capital gains) before individual income taxes. We exclude government transfers such as Social Security retirement benefits or unemployment compensation benefits from our income definition. Non-taxable fringe benefits such as employer provided health insurance is also excluded from our income definition. Therefore, our income measure is defined as cash market income before individual income taxes."

Here’s a figure showing the share of cash market income going to the top 1% of the income distribution (that is, above $443,000 of income in 2015), to the 95-99th percentile of the income distribution (from $180,500 to $443,000 in income in 2015), and to the 90-95th percentile of the income distribution (that is, $124,800 to $180,500 in 2015). The overall pattern is U-shaped for all three lines. But for the bottom two lines, the U is relatively flat; for the top 1%, the U is a steeper fall and a steeper rise during the last century or so.

Here’s a figure showing the share of total cash market income going to the top .01% of the income distribution (in 2015, the 16,500 families with more than $11.3 million in income).

I won’t rehearse here all of the arguments about the reasons behind this change. But I do think it’s noteworthy that the share of cash market income received by the top 1% and the top 0.1% hit more-or-less the current level back in 1998, and since then has bobbed up and down around that level. This pattern suggests to me that at least some of the reason for the dramatic rise in incomes at the very top was associated with the combination of more stock options and the dot-com stock market boom of the 1990s, as I argue in "Stock Options: A Theory of Compensation and Income Inequality at the Top" (March 25, 2016).

World Drug Report 2016

The United Nations Office on Drugs and Crime has produced the World Drug Report 2016. The report is inevitably a bit of a disappointment to economists, because it focuses pretty heavily on quantities of drugs, with only occasional and peripheral mentions of prices. Of course, prices of illegal drugs would need to be estimated in many countries, but one suspects that law enforcement agencies could offer some insights here. Without specific prices, there can’t be any effort, for example, to estimate the revenues generated by illegal drugs in various countries.

But with that shortcoming duly noted, the report does offer some useful perspectives. The focus of the report is on nonmedical use of "opiates, cocaine, cannabis, amphetamine-type stimulants (ATS) and new psychoactive substances (NPS)." About 5% of the world population between the ages of 15 and 64–roughly 250 million people–used drugs in this way in the last year, and roughly one-tenth of that number might be classified as having a "drug problem."

"Cannabis remains the most commonly used drug at the global level, with an estimated 183 million people having used the drug in 2014, while amphetamines remain the second most commonly used drug. With an estimated 33 million users, the use of opiates and prescription opioids is less common, but opioids remain major drugs of potential harm and health consequences." The estimates show over 200,000 drug-related deaths in 2014. "Overdose deaths contribute to between roughly a third and a half of all drug-related deaths, which are attributable in most cases to opioids."

One of the major changes in US drug use in the last decade or so has been the rise in the death rate from prescription opioids and heroin.

Given the move toward decriminalization and legalization of cannabis in some US states (and in other places around the world), the report points out that there has also been a global rise (almost entirely occurring outside the US) in the number of people seeking treatment for cannabis-related drug problems. The report has some specific discussion of the US experience to date with legalizing marijuana in a few states. Perhaps not surprisingly, in the states where cannabis use has been legalized, cannabis use is up in the 18-25 age bracket, and so are auto accidents involving cannabis–although neither increase seems especially large so far.

The report describes the health issues of cannabis use in this way (footnotes omitted):

"The nature and extent of the potential health risks and harms associated with cannabis use are continually under debate. Cannabis use can be perceived to be relatively harmless when compared with the use of other controlled psychoactive substances and also in relation to the use of tobacco or alcohol. However, lower risk does not mean no risk: there are harmful health effects associated with a higher frequency of cannabis use and initiation at a very young age, especially among adolescents during the time of their cognitive and emotional development. Adverse health effects of cannabis use associated with cognitive impairments or psychiatric symptoms are well documented in the scientific literature. Hence, cannabis use disorders require clinically significant treatment interventions.

"The transition from drug use to drug dependence occurs for a much smaller proportion of cannabis users than for opioid, amphetamine or cocaine users. However, because so many people use cannabis, this translates into a large number who experience cannabis use disorders; for example, in the United States, of the 22.2 million current cannabis users in 2014, 4.2 million people aged 12 or older had a cannabis use disorder diagnosed in the previous year. Cannabis use disorders are estimated to occur in approximately 1 out of every 11 persons (9 per cent) who have ever used cannabis, and the proportion increases significantly to one out of every six persons (17 per cent) who started using cannabis in their teens and to 25-50 per cent of daily cannabis users. …

"In the United States, the number of daily (or near-daily) cannabis users, measured by the number using cannabis on 20 or more days in the past month and the number using cannabis on 300 or more days in the past year, rose significantly after 2006, by 58 and 74 per cent, respectively. However, this increase in daily (or near-daily) cannabis use has not translated into an increased number of people seeking treatment, even when those in treatment referred by the criminal justice system are excluded."

Finally, some interesting shifts in global drug markets appear to be underway. One is the shift from agriculture to manufacturing–that is, in the direction of amphetamines and psychoactive substances like Ecstasy. Another is a shift toward being able to purchase drugs on largely anonymous internet sites, sometimes called the "dark" internet, using anonymous methods of payment like Bitcoin. For example, here’s a figure showing how seizures of various drugs have changed in the last 15 years: clearly, amphetamines are way up.

And here’s some commentary from the report on buying drugs over the dark net.

"The purchasing of drugs via the Internet, particularly the “dark net”, may have increased in recent years. This trend raises concerns in terms of the potential of the “dark net” to attract new populations of users by facilitating access to drugs in a setting that, although illegal, allows users to avoid direct contact with criminals and law enforcement authorities. As the “dark net” cannot be accessed through traditional web searches, buyers and sellers access it through the “Onion Router” (TOR) to ensure that their identities remain concealed. Products are typically paid for in bitcoins or in other crypto-currencies and are most often delivered via postal services. A number of successful law enforcement operations worldwide have taken place in recent years to shut down trading platforms on the “dark net”, such as “Silk Road” in October 2013 or “Silk Road 2.0” in November 2014 … However, as one marketplace closes, the next most credible marketplace tends to absorb the bulk of the displaced business.

"A global survey of more than 100,000 Internet users (three quarters of whom had taken illegal drugs) in 50 countries in late 2014 suggested that the proportion of drug users purchasing drugs via the Internet had increased from 1.2 per cent in 2000 to 4.9 per cent in 2009, 16.4 per cent in 2013 and 25.3 per cent in 2014. The proportion of Internet users making use of the “dark net” for drug purchases had also increased, reaching 6.4 per cent (lifetime) in 2014, including 4.5 per cent (70 per cent of 6.4 per cent) who had purchased drugs over the “dark net” in the previous 12 months (ranging from less than 1 per cent to 18 per cent)."

For an earlier post on drug policy, see "Putting Drug Policy Tradeoffs on the Table" (April 14, 2016).

Trying to Envision the Original United States

Henry Adams’s History of the United States, which for many decades was commonly acclaimed as one of the great works of history ever written, was published in nine volumes from 1889-1891. I’ve only read fractions of it, here and there, over the years. But the first chapter of volume I, called "Physical and Economical Conditions," has always struck me as one of the finest compact descriptions of the United States, along those dimensions, in the decades just after it had become a nation. In commemoration of July 4, here are some snippets from how Henry Adams described the physical and economic condition of the United States circa 1800.

"According to the census of 1800, the United States of America contained 5,308,483 persons. In the same year the British Islands contained upwards of fifteen millions; the French Republic, more than twenty-seven millions. … Even after two centuries of struggle the land was still untamed; forest covered every portion, except here and there a strip of cultivated soil; the minerals lay undisturbed in their rocky beds, and more than two thirds of the people clung to the seaboard within fifty miles of tide-water, where alone the wants of civilized life could be supplied. The centre of population rested within eighteen miles of Baltimore, north and east of Washington. …

With the exception that half a million people had crossed the Alleghanies and were struggling with difficulties all their own, in an isolation like that of Jutes and Angles in the fifth century, America, so far as concerned physical problems, had changed little in fifty years. The old landmarks remained nearly where they stood before. The same bad roads and difficult rivers, connecting the same small towns, stretched into the same forests in 1800 as when the armies of Braddock and Amherst pierced the western and northern wilderness, except that these roads extended a few miles farther from the seacoast. Nature was rather man’s master than his servant, and the five million Americans struggling with the untamed continent seemed hardly more competent to their task than the beavers and buffalo which had for countless generations made bridges and roads of their own.

Even by water, along the seaboard, communication was as slow and almost as irregular as in colonial times … No regular packet plied between New York and Albany. Passengers waited till a sloop was advertised to sail; they provided their own bedding and supplies; and within the nineteenth century Captain Elias Bunker won much fame by building the sloop "Experiment," of one hundred and ten tons, to start regularly on a fixed day for Albany, for the convenience of passengers only, supplying beds, wine, and provisions for the voyage of one hundred and fifty miles. …

Even the lightly equipped traveller found a short journey no slight effort. Between Boston and New York was a tolerable highway, along which, thrice a week, light stage-coaches carried passengers and the mail, in three days. From New York a stage-coach started every week-day for Philadelphia, consuming the greater part of two days in the journey; and the road between Paulus Hook, the modern Jersey City and Hackensack, was declared by the newspapers in 1802 to be as bad as any other part of the route between Maine and Georgia. South of Philadelphia the road was tolerable as far as Baltimore, but between Baltimore and the new city of Washington it meandered through forests; the driver chose the track which seemed least dangerous, and rejoiced if in wet seasons he reached Washington without miring or upsetting his wagon. In the Northern States, four miles an hour was the average speed for any coach between Bangor and Baltimore. Beyond the Potomac the roads became steadily worse … Except for a stage-coach which plied between Charleston and Savannah, no public conveyance of any kind was mentioned in the three southernmost States.

The stage-coach was itself a rude conveyance, of a kind still familiar to experienced travellers. Twelve persons, crowded into one wagon, were jolted over rough roads, their bags and parcels, thrust inside, cramping their legs, while they were protected from the heat and dust of mid-summer and the intense cold and driving snow of winter only by leather flaps buttoned to the roof and sides. In fine, dry weather this mode of travel was not unpleasant, when compared with the heavy vehicles of Europe and the hard English turnpikes; but when spring rains drew the frost from the ground the roads became nearly impassable, and in winter, when the rivers froze, a serious peril was added, for the Susquehanna or the North River at Paulus Hook must be crossed in an open boat,—an affair of hours at best, sometimes leading to fatal accidents. …

In the Southern States the difficulties and perils of travel were so great as to form a barrier almost insuperable. Even Virginia was no exception to this rule. At each interval of a few miles the horseman found himself stopped by a river, liable to sudden freshets, and rarely bridged. Jefferson in his frequent journeys between Monticello and Washington was happy to reach the end of the hundred miles without some vexatious delay. "Of eight rivers between here and Washington," he wrote to his Attorney-General in 1801, "five have neither bridges nor boats." …

Commerce between one State and another, or even between the seaboard and the interior of the same State, was scarcely possible on any large scale unless navigable water connected them. Except the great highway to Pittsburg, no road served as a channel of commerce between different regions of the country. In this respect New England east of the Connecticut was as independent of New York as both were independent of Virginia, and as Virginia in her turn was independent of Georgia and South Carolina. The chief value of inter-State communication by land rested in the postal system; but the post furnished another illustration of the difficulties which barred progress. In the year 1800 one general mail-route extended from Portland in Maine to Louisville in Georgia, the time required for the trip being twenty days. … Thus more than twenty thousand miles of post-road, with nine hundred post-offices, proved the vastness of the country and the smallness of the result; for the gross receipts for postage in the year ending Oct. 1, 1801, were only $320,000.

Throughout the land the eighteenth century ruled supreme. … The distance from New York to the Mississippi was about one thousand miles; from Washington to the extreme southwestern military post, below Natchez, was about twelve hundred. Scarcely a portion of western Europe was three hundred miles distant from some sea, but a width of three hundred miles was hardly more than an outskirt of the United States. No civilized country had yet been required to deal with physical difficulties so serious, nor did experience warrant conviction that such difficulties could be overcome.

If the physical task which lay before the American people had advanced but a short way toward completion, little more change could be seen in the economical conditions of American life. The man who in the year 1800 ventured to hope for a new era in the coming century, could lay his hand on no statistics that silenced doubt. The machinery of production showed no radical difference from that familiar to ages long past. The Saxon farmer of the eighth century enjoyed most of the comforts known to Saxon farmers in the eighteenth. The eorls and ceorls of Offa and Ecgbert could not read or write, and did not receive a weekly newspaper with such information as newspapers in that age could supply; yet neither their houses, their clothing, their food and drink, their agricultural tools and methods, their stock, nor their habits were so greatly altered or improved by time that they would have found much difficulty in accommodating their lives to that of their descendants in the eighteenth century. In this respect America was backward. 

Fifty or a hundred miles inland more than half the houses were log-cabins, which might or might not enjoy the luxury of a glass window. Throughout the South and West houses showed little attempt at luxury; but even in New England the ordinary farmhouse was hardly so well built, so spacious, or so warm as that of a well-to-do contemporary of Charlemagne. The cloth which the farmer’s family wore was still homespun. The hats were manufactured by the village hatter; the clothes were cut and made at home; the shirts, socks, and nearly every other article of dress were also home-made. Hence came a marked air of rusticity which distinguished country from town,—awkward shapes of hat, coat, and trousers, which gave to the Yankee caricature those typical traits that soon disappeared almost as completely as coats of mail and steel head-pieces. The plough was rude and clumsy; the sickle as old as Tubal Cain, and even the cradle not in general use; the flail was unchanged since the Aryan exodus; in Virginia, grain was still commonly trodden out by horses. Enterprising gentlemen-farmers introduced threshing-machines and invented scientific ploughs; but these were novelties. Stock was as a rule not only unimproved, but ill cared for. The swine ran loose; the cattle were left to feed on what pasture they could find, and even in New England were not housed until the severest frosts, on the excuse that exposure hardened them. …

Except among the best farmers, drainage, manures, and rotation of crops were uncommon. The ordinary cultivator planted his corn as his father had planted it, sowing as much rye to the acre, using the same number of oxen to plough, and getting in his crops on the same day. He was even known to remove his barn on account of the manure accumulated round it, although the New England soil was never so rich as to warrant neglect to enrich it. The money for which he sold his wheat and chickens was of the Old World; he reckoned in shillings or pistareens, and rarely handled an American coin more valuable than a large copper cent. …

Of the whole United States New England claimed to be the most civilized province, yet New England was a region in which life had yet gained few charms of sense and few advantages over its rivals. … The houses were thin wooden buildings, not well suited to the climate; the churches were unwarmed; the clothing was poor; sanitary laws were few, and a bathroom or a soil-pipe was unknown. Consumption, typhoid, scarlet fever, diphtheria, and rheumatic fevers were common; habits of drinking were still a scourge in every family, and dyspepsia destroyed more victims than were consumed by drink. Population increased slowly, as though the conditions of life were more than usually hard. …   In appearance, Boston resembled an English market-town, of a kind even then old-fashioned. The footways or sidewalks were paved, like the crooked and narrow streets, with round cobblestones, and were divided from the carriage way only by posts and a gutter. The streets were almost unlighted at night, a few oil-lamps rendering the darkness more visible and the rough pavement rougher. Police hardly existed. The system of taxation was defective. The town was managed by selectmen, the elected instruments of town-meetings whose jealousy of granting power was even greater than their objection to spending money, and whose hostility to city government was not to be overcome. Although on all sides increase of ease and comfort was evident, and roads, canals, and new buildings, public and private, were already in course of construction on a scale before unknown, yet in spite of more than a century and a half of incessant industry, intelligent labor, and pinching economy Boston and New England were still poor. …

The State of New York was little in advance of Massachusetts and Maine. … New York was still a frontier State, and although the city was European in its age and habits, travellers needed to go few miles from the Hudson in order to find a wilderness like that of Ohio and Tennessee. In most material respects the State was behind New England; outside the city was to be seen less wealth and less appearance of comfort. … [I]f Boston resembled an old-fashioned English market-town, New York was like a foreign seaport, badly paved, undrained, and as foul as a town surrounded by the tides could be. Although the Manhattan Company was laying wooden pipes for a water supply, no sanitary regulations were enforced, and every few years—as in 1798 and 1803—yellow fever swept away crowds of victims, and drove the rest of the population, panic stricken, into the highlands. No day-police existed; constables were still officers of the courts; the night-police consisted of two captains, two deputies, and seventy-two men. The estimate for the city’s expenses in 1800 amounted to $130,000. One marked advantage New York enjoyed over Boston, in the possession of a city government able to introduce reforms. Thus, although still mediæval in regard to drainage and cleanliness, the town had taken advantage of recurring fires to rebuild some of the streets with brick sidewalks and curbstones. Travellers dwelt much on this improvement, which only New York and Philadelphia had yet adopted, and Europeans agreed that both had the air of true cities: that while Boston was the Bristol of America, New York was the Liverpool, and Philadelphia the London. …

As a rule American capital was absorbed in shipping or agriculture, whence it could not be suddenly withdrawn. No stock-exchange existed, and no broker exclusively engaged in stock-jobbing, for there were few stocks. The national debt, of about eighty millions, was held abroad, or as a permanent investment at home. States and municipalities had not learned to borrow. Except for a few banks and insurance offices, turnpikes, bridges, canals, and land-companies, neither bonds nor stocks were known. …

As a national capital New York made no claim to consideration. If Bostonians for a moment forgot their town-meetings, or if Virginians overcame their dislike for cities and pavements, they visited and admired, not New York, but Philadelphia. "Philadelphia," wrote the Duc de Liancourt, "is not only the finest city in the United States, but may be deemed one of the most beautiful cities in the world." In truth, it surpassed any of its size on either side of the Atlantic for most of the comforts and some of the elegancies of life. While Boston contained twenty-five thousand inhabitants and New York sixty thousand, the census of 1800 showed that Philadelphia was about the size of Liverpool—a city of seventy thousand people. The repeated ravages of yellow fever roused there a regard for sanitary precautions and cleanliness; the city, well paved and partly drained, was supplied with water in wooden pipes, and was the best-lighted town in America; its market was a model, and its jail was intended also for a model—although the first experiment proved unsuccessful, because the prisoners went mad or idiotic in solitary confinement. In and about the city flourished industries considerable for the time. The iron-works were already important; paper and gunpowder, pleasure carriages and many other manufactures, were produced on a larger scale than elsewhere in the Union. Philadelphia held the seat of government until July, 1800, and continued to hold the Bank of the United States … Public spirit was more active in Pennsylvania than in New York. More roads and canals were building; a new turnpike ran from Philadelphia to Lancaster, and the great highway to Pittsburg was a more important artery of national life than was controlled by any other State. …

The city of Washington, rising in a solitude on the banks of the Potomac, was a symbol of American nationality in the Southern States. The contrast between the immensity of the task and the paucity of means seemed to challenge suspicion that the nation itself was a magnificent scheme like the federal city, which could show only a few log-cabins and negro quarters where the plan provided for the traffic of London and the elegance of Versailles. When in the summer of 1800 the government was transferred to what was regarded by most persons as a fever-stricken morass, the half-finished White House stood in a naked field overlooking the Potomac, with two awkward Department buildings near it, a single row of brick houses and a few isolated dwellings within sight, and nothing more; until across a swamp, a mile and a half away, the shapeless, unfinished Capitol was seen, two wings without a body, ambitious enough in design to make more grotesque the nature of its surroundings. The conception proved that the United States understood the vastness of their task, and were willing to stake something on their faith in it. Never did hermit or saint condemn himself to solitude more consciously than Congress and the Executive in removing the government from Philadelphia to Washington: the discontented men clustered together in eight or ten boarding-houses as near as possible to the Capitol, and there lived, like a convent of monks, with no other amusement or occupation than that of going from their lodgings to the Chambers and back again. Even private wealth could do little to improve their situation, for there was nothing which wealth could buy; there were in Washington no shops or markets, skilled labor, commerce, or people. Public efforts and lavish use of public money could alone make the place tolerable; but Congress doled out funds for this national and personal object with so sparing a hand, that their Capitol threatened to crumble in pieces and crush Senate and House under the ruins, long before the building was complete.

A government capable of sketching a magnificent plan, and willing to give only a half-hearted pledge for its fulfilment; a people eager to advertise a vast undertaking beyond their present powers, which when completed would become an object of jealousy and fear–this was the impression made upon the traveller who visited Washington in 1800, and mused among the unraised columns of the Capitol upon the destiny of the United States. …

A probable valuation of the whole United States in 1800 was eighteen hundred million dollars, equal to $328 for each human being, including slaves; or $418 to each free white. …  In New York and Philadelphia a private fortune of one hundred thousand dollars was considered handsome, and three hundred thousand was great wealth. Inequalities were frequent; but they were chiefly those of a landed aristocracy. Equality was so far the rule that every white family of five persons might be supposed to own land, stock, or utensils, a house and furniture, worth about two thousand dollars; and as the only considerable industry was agriculture, their scale of life was easy to calculate,—taxes amounting to little or nothing, and wages averaging about a dollar a day.

The Transition to Transfer Payment Government

Government in the United States, especially at the federal level, has become more about transfer payments and less about provision of goods and services.

Here is a figure showing government transfer payments to individuals–everything from Social Security and Medicare/Medicaid to welfare payments of various kinds–as a share of GDP. The pattern is bumpy, but the overall upward rise in the last half-century from 5% of GDP back in the 1960s to about 15% of GDP in the last few years is clear. The lower blue line shows transfer payments to individuals from just the federal government. (The figure is created using the ever-helpful FRED website from the Federal Reserve Bank of St. Louis.)
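For readers who want to reproduce this kind of figure themselves, here is a minimal sketch using pandas_datareader to pull series from FRED and take their ratio. The transfer-payments series mnemonic below is my assumption about the relevant FRED series and should be checked on the FRED site; "GDP" is the standard nominal-GDP series.

```python
# Minimal sketch for a "transfer payments as a share of GDP" figure from FRED.
# Note: "A063RC1Q027SBEA" (government social benefits to persons) is an assumed
# series mnemonic to verify on the FRED website; "GDP" is nominal GDP.
import datetime
from pandas_datareader import data as pdr

start = datetime.datetime(1960, 1, 1)
end = datetime.datetime(2016, 6, 30)

gdp = pdr.DataReader("GDP", "fred", start, end)
transfers = pdr.DataReader("A063RC1Q027SBEA", "fred", start, end)

share = 100 * transfers.iloc[:, 0] / gdp.iloc[:, 0]  # percent of GDP
print(share.dropna().tail())
# share.plot() would trace out the rough shape of the figure described above.
```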

Conversely, the category of "government consumption expenditures and gross investment" has been a generally falling share of GDP over time. The top red line shows this pattern for government overall, including federal, state and local government. The blue line shows just federal spending on "government consumption expenditures and gross investment."

Back in the 1960s, the federal government was spending about 12-13% of GDP on "government consumption expenditures and gross investment," which was more than the state and local government spending of about 10% of GDP in these categories (as shown by the gap between the red and blue lines in the second figure). Now the federal government is spending about 7-8% of GDP on "government consumption expenditures and gross investment," which is less than state and local government spending of about 10% of GDP in this category. State and local government has continued to be about the provision of goods and services, from education to roads/transportation to law enforcement. But over time, the federal government in particular has become less focused on "government consumption expenditures and gross investment," and more focused on transfer payments.

The political economy of such a shift is simple enough: programs that send money to lots of people tend to be popular. But I would hypothesize that this ongoing shift not only reflects voter preferences, but also affects how Americans tend to perceive the main purposes of the federal government. Many Americans have become more inclined to think of federal budget policy not in terms of goods or services or investments that it might provide, but in terms of programs that send out checks.

For those who are interested, one way of measuring gross domestic product is to add up the sources of demand in the economy: consumption plus investment plus government plus exports minus imports. In this formula for GDP, the "government" category includes only "government consumption expenditures and gross investment," as explained here by the US Bureau of Economic Analysis. In contrast, transfer payments become income to those who receive them, and thus are counted in GDP when the recipients spend the money to consume a good or service.
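As a toy numerical illustration of that accounting point (made-up numbers, just to show where transfers do and do not enter the identity GDP = C + I + G + (X - M)): a transfer payment leaves the G term unchanged, and only shows up in GDP through whatever part of it recipients spend on consumption.

```python
# Toy illustration with made-up numbers: transfers do not enter "G" in the
# expenditure identity GDP = C + I + G + (X - M); they enter GDP only when
# recipients spend them, via C.

def gdp(c, i, g, x, m):
    return c + i + g + x - m

baseline = gdp(c=700, i=200, g=250, x=100, m=120)
print(baseline)  # 1130

# The government mails out $50 billion in transfer payments; G is unchanged.
# If recipients spend $45 billion of it on consumption, C rises by 45.
with_transfer = gdp(c=700 + 45, i=200, g=250, x=100, m=120)
print(with_transfer)  # 1175
```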