Homeownership Rates: Some International Comparisons

High-income countries vary considerably in the share of households that own their own homes. The US rate of homeownership was about average by international standards 20-25 years ago, but is now below the average. Here are some facts from Laurie S. Goodman and Christopher Mayer, \”Homeownership and the American Dream,\” in the Winter 2018 issue of the Journal of Economic Perspectives (32:1, pp. 31-58).

\”The United States does not rank particularly high among other high-income countries when it comes to homeownership. Table 1 compares the homeownership rate from 1990 to 2015 across 18 countries where we have been able to obtain somewhat comparable data over the entire time period. The United States was ranked tenth in 1990, at the middle of the pack and close to the mean rate. By 2015, the United States was the fifth-lowest, its homeownership rate of 63.7 percent falling well below the 18-country average of 69.6 percent. Over the 1990–2015 period,  13 of the 18 countries increased their homeownership rates. The five countries with declines in homeownership were Bulgaria, Ireland, Mexico, the United Kingdom—and the United States.

\”In a broader sample of countries, many of which have missing data for some of the years in question, the United States homeownership rate in 1990 was slightly below the median and mean of the 26 countries reporting data. By 2015, the US ranked 35 of 44 countries with reliable data, and was almost 10 percentage points below the mean homeownership rate of 73.9 percent.\”

There are a lot of possible reasons for this variation, including \”culture, demographics, policies, housing finance systems, and, in some cases, a past history of political instability that favors homeownership.\” They offer an interesting comparison of how homeownership rates in the UK and Germany evolved after World War II (citations and footnotes omitted):

\”For example, consider the evolution of homeownership in (the former) West Germany and the United Kingdom. Both countries pursued a similar policy of subsidizing postwar rental construction to rebuild their countries. However, in intervening years, German policies allowed landlords to raise rents to some extent and thus finance property maintenance while also providing “protections” for renters. In the United Kingdom, regulation strongly discouraged private rentals, whereas the quality of public (rental) housing declined with undermaintenance and obtained a negative stigma. As well, German banks remained quite conservative in mortgage lending. The result was that between 1950 and 1990, West German homeownership rates barely increased from 39 to 42 percent, whereas United Kingdom homeownership rates rose from 30 to 66 percent. Interestingly, anecdotes suggest that many German households rent their primary residence, but purchase a nearby home to rent for income (which requires a large down payment but receives generous depreciation benefits). This allows residents to hedge themselves against the potential of rent increases in a system that provides few tax subsidies to owning a home.\”

By international standards, the US has had fairly generous mortgage interest deductions. Moreover, Goodman and Mayer walk through the question of whether owning a home in the US typically makes financial sense. Of course, buying a home at the peak of housing prices circa 2006 and then trying to sell that home in 2008 is a losing proposition. But they argue that if Americans buy a home at a typical price and are willing and able to hold on to it for a sustained time–say, buying in 2002 and holding on through 2015 or later–then housing pays off pretty well in comparison to alternative investments. They write:

\”Our results suggest that there remain very compelling reasons for most American households to aspire to become homeowners. Financially, the returns to purchasing a home in a “normal” market are strong, typically outperforming the stock market and an index of publicly traded apartment companies on an after-tax basis. Of course, many caveats are associated with this analysis, including variability in the timing and location of the home purchase, and other risks and tradeoffs associated with homeownership. There is little evidence of an alternative savings vehicle (other than a government-mandated program like Social Security) that would successfully encourage low-to-moderate income households to obtain substantial savings outside of owning a home. The fact that homeownership is prevalent in almost all countries, not just in the United States, and especially prevalent for people near retirement age, suggests that most households still view homeownership as a critical part of a life-cycle plan for savings and retirement.\”

Thus, owning a house is a kind of self-discipline that encourages saving. Also, buying a house in which to live is an investment that offers two kinds of returns: both the financial return when you sell, but also the fact that you can live inside your owner-occupied house, but not inside a stock portfolio.

Rising Interest Rates, but Easier Financial Conditions

The Federal Reserve has been gradually raising its target interest rate (the \”federal funds interest rate\”) for about two years, since early 2016. This increase has been accompanied by a controversy that I think of as a battle of metaphors. By raising interest rates, is the Fed stepping on the brakes of the economy? Or is it just easing off the accelerator pedal?

To shed light on this controversy, it would be useful to have a measure of financial conditions in the US economy that doesn\’t involve one specific interest rate, but instead looks at actual factors like whether credit is relatively available or not, whether leverage is high or low, and whether those who provide loans are able to raise money with relatively low risk. Fortunately, the Federal Reserve Bank of Chicago has been putting together a National Financial Conditions Index based on exactly these components. Here\’s a figure of the data going back to the 1970s.

This figure needs a little interpreting. Zero means financial conditions are at their historical average. Positive numbers mean that financial conditions are tight or difficult. For example, you can see that in the middle of the Great Recession, there is an upward spike showing that financial conditions were a mess and it was hard to raise capital or get a loan at that time. Several previous recessions show a similar spike. On the other side, negative numbers mean that, by historical standards, financial conditions are fairly easy: it is relatively simple to raise finance and obtain loans.

As the Chicago Fed explains: \”The National Financial Conditions Index (NFCI) and adjusted NFCI (ANFCI) are each constructed to have an average value of zero and a standard deviation of one over a sample period extending back to 1971. Positive values of the NFCI have been historically associated with tighter-than-average financial conditions, while negative values have been historically associated with looser-than-average financial conditions.\”
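The mean-zero, standard-deviation-one convention the Chicago Fed describes is the familiar z-score standardization. The sketch below illustrates only that convention with made-up numbers; the actual NFCI is estimated as a weighted dynamic factor across many indicators, not a simple z-score of one series:

```python
# Toy illustration of standardizing a series so that 0 = average conditions,
# positive = tighter than average, negative = looser than average.
# This is NOT the Fed's estimation method, just the scaling convention.

def standardize(series):
    """Return z-scores: (x - mean) / std over the full sample."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    return [(x - mean) / var ** 0.5 for x in series]

# Hypothetical credit-spread readings (percentage points); 3.5 mimics a crisis spike.
spreads = [1.0, 1.2, 0.9, 3.5, 1.1, 0.8]
z = standardize(spreads)
# The crisis observation shows up as a strongly positive index value,
# while the lowest reading comes out negative (looser than average).
```

By construction the standardized series has mean zero and standard deviation one, so a reading of +1 or -1 means conditions are one standard deviation tighter or looser than the historical norm.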

The interesting thing about our present time is that although the Fed has been raising its target interest rate since early 2016, financial conditions haven\’t gotten tighter. Instead, the National Financial Conditions Index is lower now than it was back in early 2016; indeed, this measure is at its lowest level in about 25 years. At least for the last two years, any concerns that a higher federal funds interest rate would choke off finance and lending have been misplaced. Instead, having the Fed move the federal funds rate back close to its historically typical levels seems to have helped in convincing financial markets that the crisis was past and normality was returning, so it was a good time to provide finance or to borrow.

The National Financial Conditions Index can also be broken down into three parts: leverage, risk, and credit. The Chicago Fed explains: \”The three subindexes of the NFCI (risk, credit and leverage) allow for a more detailed examination of the movements in the NFCI. Like the NFCI, each is constructed to have an average value of zero and a standard deviation of one over a sample period extending back to 1973. The risk subindex captures volatility and funding risk in the financial sector; the credit subindex is composed of measures of credit conditions; and the leverage subindex consists of debt and equity measures. Increasing risk, tighter credit conditions and declining leverage are consistent with increases in the NFCI. Therefore, positives values for each subindex have been historically associated with a tighter–than–average corresponding aspect of financial conditions, while negative values indicate the opposite.\”

Here\’s a figure showing the breakdown of the three components. Although the three lines do tend to rise and fall together, it seems clear that the blue line–showing the extent of leverage or borrowing–plays an especially large role in the fluctuations over the last 25 years. But right now, all three parts of the index are comfortably down in the negative numbers.

Patterns can turn, of course. Perhaps if the Federal Reserve increases the federal funds rate at its next scheduled meeting (March 20-21), financial conditions will worsen in some substantial way. But at least for now, the Federal Reserve raising interest rates back from the near-zero rates that had prevailed for seven years is having the (somewhat paradoxical) effect of being accompanied by looser financial conditions. And concerns over raising those rates at least a little further seem overblown.

Nudge Policies

A considerable body of evidence suggests that people\’s decisions are affected by how a choice is presented, or what the default option looks like. There\’s a reason that grocery stores put some products at eye-level and some near the floor, or why the staples like milk and eggs are often far from the door (so you have to walk through the store to grab them), or why the checkout counters have nearby racks of candy. There\’s a reason that gas stations sometimes advertise that gas is 5 cents a gallon less if you pay cash, but never advertise that gas is 5 cents more per gallon if you don\’t pay cash. There\’s a reason that many people have their employer automatically deduct money from paychecks for their retirement accounts, rather than trying to make their own monthly or annual payments to that same account.

Once you have admitted that people\’s decisions are affected by these kinds of factors, an obvious question is whether public policy might make use of how decisions are presented to influence behavior. A decade ago, in 2008, Richard H. Thaler and Cass R. Sunstein brought this possibility to public attention in their book Nudge: Improving Decisions About Health, Wealth, and Happiness. For a sense of what has happened since then, Bob Holmes offers an overview in \"Nudging grows up (and now has a government job),\" which is subtitled, \"Ten years after an influential book proposed ways to work with — not against — the irrationalities of human decision-making, practitioners have refined and broadened this gentle tool of persuasion\" (Knowable Magazine, February 1, 2018).

(A side note here: Knowable Magazine is a publication of the good folks who publish the Annual Review volumes familiar to academics. There are now about 50 of these volumes across a wide array of topics, from economics to entomology and from analytical chemistry to vision science. As the title says, there is one volume per year on each subject, with each volume containing an array of papers written by prominent experts in the field describing what is happening in their area of research. The articles in the magazine take the Annual Review papers of the last few years as a starting point, and then publish an essay that draws out common themes–with references to the underlying papers for those who want the gritty details. In short, the magazine is a good place to get up to speed on a very wide range of topics across the sciences and social sciences in a hurry.)

As one example of a \”nudge\” policy, consider organ donation. In opinion polls, people overwhelmingly support being an organ donor. But in practice, fewer than half of adults are actually signed up. A nudge policy might suggest that all drivers be automatically enrolled as organ donors–with a choice to opt out if they wish to do so. In other words, instead of framing the choice as \”do you want to sign up to be an organ donor?\”, the choice would become \”do you want to opt out of being an organ donor?\” As long as the choice is presented clearly, it\’s hard to argue that anyone\’s personal autonomy is being violated by the alternative phrasing of the question. But the alternative phrasing would lead to more organ donors–and the United States alone currently has about 100,000 people on waiting lists for organ transplants.

Perhaps the best-known example is that employers can either offer workers the option to enroll in a retirement savings plan, or they can automatically enroll workers in a retirement savings plans, with a choice to opt out. Phrasing the choice differently has a big effect on behavior. And a lot of people who never quite got around to signing up for the retirement plan end up regretting that choice when it\’s too late in life to do much about it. 
[A figure shows that automatic enrollment leads to almost 100% of employees contributing to their retirement savings, while “opt in” plans are much less successful.]

Once you start thinking about nudging, possibilities blossom. Along with applications to organ donation and saving, Holmes discusses nudges related to home energy use, willingness to use generic drugs, choice of when to start receiving Social Security, and requiring a common and simpler format for information about mortgages or phone contracts to make them easier to comprehend and compare.

Holmes reports: \”At last count, more than 60 government departments and international agencies have established “nudge units” tasked with finding and pulling the right behavioral levers to accomplish everything from increasing retirement savings to boosting diversity in military recruits to encouraging people to get vaccinated against flu. The United Kingdom\’s Behavioural Insights Team, one of the first and largest such units, has expanded from a handful of people in 2010 to about 100 today, with global reach. Clearly, nudging has moved into the mainstream.\”

Three broad concerns discussed by Holmes seem worth noting. First, nudges can often be very specific to context and detail. For example, when people in the UK got a letter saying that most people pay their taxes on time, the number of tax delinquents fell sharply, but the same nudge in Ireland had no effect. Sometimes small details of a government notification–like whether the letter includes a smiley face or not–seem to have a substantial effect. 
Second, the total effect of nudge policies may be only moderate. But saying that a policy won\’t totally solve, say, poverty or obesity hardly seems like a reason to rule out the policy. 
Finally, there is a legitimate concern over the line between \”nudge\” policies and government paternalism. The notion that government is purposely acting in subtle ways to shift our choices is mildly disturbing. What if you just sort of forget to opt out of being an organ donor–but you actually have genuine personal objections to doing so? What if you just sort of forget to opt out of the retirement savings account, but you know that you have a health condition that is extremely likely to give you a shortened life expectancy? A nudge policy can be beneficial on average, but still lead to less desirable choices in specific cases. 
Moreover, what if the goals of a nudge policy start to reach beyond goals like adequate retirement saving or use of generic drugs, and start edging into more controversial settings? One can imagine nudge policies to affect choices about abortion, or gun ownership, or joining the military, or enrolling your child in a charter school. No matter which direction these nudges are pushing, they would certainly be controversial. 

In the Annual Review of Psychology for 2016, Cass Sunstein contributed an essay titled, \”The Council of Psychological Advisers.\” It begins: \”Many nations have some kind of council of economic advisers. Should they also have a council of psychological advisers? Perhaps some already do.\” For many people, the idea of a government council of psychological advisers seeking to set up your choices in such a way as to influence the outcome, in ways you don\’t even know are happening, will sound fairly creepy.

Like many people, I like to think of myself as someone who considers options and makes choices. But the reality of nudge policies calls this perception into doubt. For many real-world life choices, a truly neutral presentation of the options does not exist. There will always be a choice about the order in which options are presented, how the options are phrased, what background information is presented, what choice serves as the default option. Even when no official nudge policy exists, and all of these choices have been made for other reasons, the setting of the choice will often influence the choice that is made. It will influence me, and it will influence you, too. Thus, there isn\’t any escape from nudge policies. There is only a choice as to what kinds of nudges will happen–and a need for all of us to be aware of how we are being nudged and when we want to shove back by making other choices.

Network Effects, Big Data, and Antitrust Issues For Big Tech

You don\’t need to be a weatherman to see that the antitrust winds are blowing toward the big tech companies like Amazon, Facebook, Google, Apple, and others. But an immediate problem arises. At least under modern US law, being a monopoly (or a near-monopoly) is not illegal. Nor is making high profits illegal, especially when it is accomplished by providing services that are free to consumers and making money through advertising. Antitrust kicks in when anticompetitive behavior is involved: that is, a situation in which a firm takes actions which have the effect of blocking actual or potential competitors.

For example, the antitrust case against Microsoft that was settled back in 2001 wasn\’t that the firm was big or successful, but rather that the firm was engaged in an anticompetitive practice of \”tying\” together separate products, and in this way trying to use its near-monopoly position in the operating systems that run personal computers to gain a similar monopoly position for its internet browser–and in this way to drive off potential competitors.

In the case of big tech companies, a common theory is that they hold a monopoly position because of what economists call \”network effects.\” The economic theory of network effects started with the observation that certain products are only valuable if other people also own the same product–think of a telephone or fax machine. Moreover, the product becomes more valuable as the network gets bigger. When \”platform\” companies like Amazon or Facebook came along, network effects got a new twist. The idea became that if a website managed to gain a leadership position in attracting buyers and sellers (like Amazon, OpenTable, or Uber), or users and providers of content (like Facebook, YouTube, or Twitter), then others would be attracted to the website as well. Any potentially competing website might have a hard time building up its own critical mass of users, in which case network effects are acting as an anticompetitive barrier. 
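The arithmetic intuition behind \”the product becomes more valuable as the network gets bigger\” is often captured by Metcalfe\’s law, which counts potential pairwise connections. The sketch below is a standard back-of-the-envelope device, not a formula that Goodman, Mayer, Evans, or Schmalensee commit to:

```python
# Metcalfe-style back-of-the-envelope: a communications network with n users
# has n*(n-1)/2 possible pairwise connections, so value (on this crude view)
# grows roughly with the square of the user count.

def potential_links(n_users):
    """Number of distinct user pairs in a network of n_users."""
    return n_users * (n_users - 1) // 2

small = potential_links(100)   # 4,950 possible pairs
large = potential_links(200)   # 19,900 possible pairs
# Doubling the network roughly quadruples the link count -- the arithmetic
# behind winner-take-all worries about platform markets.
```

Note that this quadratic story is exactly what the multihoming evidence discussed later calls into question: counting possible links says nothing about whether users can cheaply belong to several networks at once.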

Of course, the idea that an already-popular meeting place has an advantage isn\’t limited to the virtual world: many shopping malls and downtown areas rely on a version of network effects, too, as do stock markets, flea markets, and bazaars.

But while it\’s easy to sketch an argument about network effects in the air, the question of how network effects work in reality isn\’t a simple one. David S. Evans and Richard Schmalensee offer a short essay, \”Debunking the `Network Effects\’ Bogeyman: Policymakers need to march to the evidence, not to slogans,\” in Regulation magazine (Winter 2017-18, pp. 36-39).

As they point out, lots of companies that at the time seemed to have an advantage of \”network effects\” have faltered: for example, eBay looked like the network Goliath back in 2001, but it was soon overtaken by Amazon. They write:

\”The flaw in that reasoning is that people can use multiple online communications platforms, what economists call `multihoming.\’   A few people in a social network try a new platform. If enough do so and like it, then eventually all network members could use it and even drop their initial platform. This process has happened repeatedly. AOL, MSN Messenger, Friendster, MySpace, and Orkut all rose to great heights and then rapidly declined, while Facebook, Snap, WhatsApp, Line, and others quickly rose. …

\”Systematic research on online platforms by several authors, including one of us, shows considerable churn in leadership for online platforms over periods shorter than a decade. Then there is the collection of dead or withered platforms that dot this sector, including Blackberry and Windows in smartphone operating systems, AOL in messaging, Orkut in social networking, and Yahoo in mass online media … 

\”The winner-take-all slogan also ignores the fact that many online platforms make their money from advertising. As many of the firms that died in the dot-com crash learned, winning the opportunity to provide services for free doesn’t pay the bills. When it comes to micro-blogging, Twitter has apparently won it all. But it is still losing money because it hasn’t been very successful at attracting advertisers, which are its main source of income. Ignoring the advertising side of these platforms is a mistake. Google is still the leading platform for conducting searches for free, but when it comes to product searches—which is where Google makes all its money—it faces serious competition from Amazon. Consumers are roughly as likely to start product searches on Amazon.com, the leading e-commerce firm, as on Google, the leading search-engine firm.\”

It should also be noted that if network effects are large and block new competition, they pose a problem for antitrust enforcement, too. Imagine that Amazon or Facebook was required by law to split into multiple pieces, with the idea that the pieces would compete with each other. But if network effects really are large, then one or another of the pieces will grow to critical mass and crowd out the others–until the status quo re-emerges.

A related argument is that big tech firms have access to Big Data from many players in a given market, which gives them an advantage. Evans and Schmalensee are skeptical of this point, too. They write:

\”Like the simple theory of network effects, the “big data is bad” theory, which is often asserted in competition policy circles as well as the media, is falsified by not one, but many counterexamples. AOL, Friendster, MySpace, Orkut, Yahoo, and many other attention platforms had data on their many users. So did Blackberry and Microsoft in mobile. As did numerous search engines, including AltaVista, Infoseek, and Lycos. Microsoft did in browsers. Yet in these and other categories, data didn’t give the incumbents the power to prevent competition. Nor is there any evidence that their data increased the network effects for these firms in any way that gave them a substantial advantage over challengers.

\”In fact, firms that at their inception had no data whatsoever sometimes displaced the leaders. When Facebook launched its social network in India in 2006 in competition with Orkut, it had no data on Indian users since it didn’t have any Indian users. That same year Orkut was the most popular social network in India, with millions of users and detailed data on them. Four years later, Facebook was the leading social network in India. Spotify provides a similar counterexample. When Spotify entered the United States in 2011, Apple had more than 50 million iTunes users and was selling downloaded music at a rate of one billion songs every four months. It had data on all those people and what they downloaded. Spotify had no users and no data when it started. Yet it has been able to grow to become the leading source of digital music in the world. In all these and many other cases the entrants provided a compelling product, got users, obtained data on those users, and grew.

\”The point isn’t that big data couldn’t provide a barrier to entry or even grease network effects. As far as we know, there is no way to rule that out entirely. But at this point there is no empirical support that this is anything more than a possibility, which one might explore in particular cases.\”

Evans and Schmalensee are careful to note that they are not suggesting that online platform companies should be exempt from antitrust scrutiny, and perhaps in some cases the network and data arguments might carry weight. As they write:

\”Nothing we’ve said here is intended to endorse a “go-easy” policy toward online platforms when it comes to antitrust enforcement. … There’s no particular reason to believe these firms are going to behave like angels. Whether they benefit from network effects or not, competition authorities ought to scrutinize dominant firms when it looks like they are breaking the rules and harming consumers. As always, the authorities should use evidence-based analysis grounded in sound economics. The new economics of multisided platforms provides insights into strategies these firms may engage in as well as cautioning against the rote application of antitrust analysis designed for single-sided firms to multisided ones.

\”It is time to retire the simple network effects theory—which is older than the fax machine—in place of deeper theories, with empirical support, of platform competition. And it is not too soon to ask for supporting evidence before accepting any version of the “big data is bad” theory. Competition policy should march to the evidence, not to the slogans.\”

For an introduction to the economics of multi-sided \”platform\” markets, a useful starting point is Marc Rysman\’s \”The Economics of Two-Sided Markets\” in the Summer 2009 issue of the Journal of Economic Perspectives (23:3, 125-43). 

For an economic analysis of policy, the underlying reasons matter a lot, because they set a precedent that will affect future actions by regulators and firms. Thus, it\’s not enough to rave against the size of Big Tech. It\’s necessary to get specific: for example, about how public policy should view network effects or online buyer-and-seller platforms, and about the collection, use, sharing, and privacy protections for data. We certainly don\’t want the current big tech companies to stifle new competition or abuse consumers. But in pushing back against the existing firms, we don\’t want regulators to set rules that could close off new competitors, either. 

Four Examples from the Automation Frontier

Cotton pickers. Shelf-scanners at Walmart. Quality control at building sites. Radiologists. These are just four examples of jobs that are being transformed and even sometimes eliminated by the newest wave of automated and programmable machinery. Here are four short stories from various sources, which of course represent a much broader transformation happening across the global economy.
_____________________________

Virginia Postrel discusses \”Lessons From a Slow-Motion Robot Takeover: Cotton harvesting is now dominated by machines. But it took decades to happen\” (Bloomberg View, February 9, 2018). She describes a \”state-of-the-art John Deere cotton stripper.\” It costs $700,000, and harvests 100-120 acres each day. As it rolls across the field, \”every few minutes a plastic-wrapped cylinder eight feet across plops out the back, holding as much as 5,000 pounds of cotton ready for the gin.\” Compared to the old times some decades back of cotton-picking by hand, the machine replaces perhaps 1,000 workers.

One main lesson, Postrel emphasizes, is that big technological changes take time, in part because they often depend on a group of complementary innovations becoming available. In this case: \”Gins had to install dryers, for instance, because machine-harvested cotton retained more moisture. Farmers needed chemical defoliants to apply before harvesting so that their bales wouldn’t be contaminated with leaf trash. Breeders had to develop shorter plants with bolls that emerged at the same time, allowing a single pass through the fields.\” Previous farm innovations often took decades to diffuse, too: as I\’ve mentioned before on this website, that was the pattern for previous farm breakthroughs like the McCormick reaper and the tractor.

The high productivity of the modern cotton-stripper clearly costs jobs, but although it\’s easy for me to say, these were jobs that the US is better off without. Cotton-picking by hand was part of a social system built on generations of low-paid, predominantly black workers. And inexpensive clothing, made possible by cotton harvested more efficiently, is important for the budgets of low-income families.

____________________
Another example mentioned by Postrel is the case of robots at Walmart that autonomously roam the aisles, \”identifying when items are out of stock, locating incorrect prices, and detecting wrong or missing labels.\” Erin Winick tells the story in \”Walmart’s new robots are loved by staff—and ignored by customers: Bossa Nova is creating robotic coworkers for the retail world\” (MIT Technology Review, January 31, 2018).

Again, these robots take jobs that a person could be doing. But the article notes that the robots are quite popular among the Walmart staff, who name the robots, make sure the robots are wearing their official Walmart nametags, and introduce the robots to customers. From the employee point of view, the robots are taking over the dull and menial task of scanning shelves–and the employees are glad to hand over that task. Apparently some shoppers are curious about the robots, and ask, but lots of other shoppers just ignore them and navigate around them.

_______________________

An even more high-tech example is technology that uses lidar-equipped robots to do quality control on construction sites. Evan Ackerman explains in \"AI Startup Using Robots and Lidar to Boost Productivity on Construction Sites: Doxel\'s lidar-equipped robots help track construction projects and catch mistakes as they happen\" (IEEE Spectrum, January 24, 2018).

On big construction projects, the tradition has been that at the end of the workday, someone walks around and checks how everything is going. The person carries a clipboard and a tape measure, and spot-checks key measurements. This technology sends in a robot at the end of the day, instead, programmed to crawl all around the building site. It\’s equipped with lidar, which stands for \”Light Detection and Ranging,\” which essentially means using lasers to measure distances. It can check exactly what has been installed, and that it is installed in precisely the right place. Perhaps the area that is going to be the top of a staircase is not precisely aligned with the bottom? The robot will know. Any needed changes or corrections can thus happen much sooner, rather than waiting until a problem becomes apparent later in the building process.
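The distance measurement at the heart of lidar is simple time-of-flight arithmetic: a laser pulse travels out to a surface and back, so the distance is the speed of light times the round-trip time, divided by two. A minimal sketch, with an illustrative timing number:

```python
# Time-of-flight distance estimate underlying lidar ranging:
#   distance = (speed of light * round-trip time) / 2
# The division by 2 reflects that the pulse covers the distance twice.

C = 299_792_458.0  # speed of light in meters per second

def lidar_distance(round_trip_seconds):
    """Distance in meters to the surface that reflected the pulse."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds indicates a surface
# about 10 meters away -- the scale of a room or a stairwell.
d = lidar_distance(66.7e-9)
```

Sweeping such pulses across a site millions of times yields the 3D point cloud that the quality-control software compares against the building plans.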

As Ackerman writes: \"[I]t may or may not surprise you to learn that 98 percent of large construction projects are delivered (on average) 80 percent over budget and 20 months behind schedule. According to people who know more about these sorts of things than I do, productivity in the construction industry hasn’t improved significantly in 80 years.\" In a pilot study on one site, this technology raised labor productivity by 38%–because workers could fix little problems now, rather than bigger problems later.

But let\’s be honest: At least in the immediate short-run, this technology reduces the need for employment, too, because fewer workers would be needed to fix problems on a given site. Of course, ripping out previous work and reinstalling it again, perhaps more than once, isn\’t the most rewarding job, either. And the ultimate result is not just a building that is constructed more efficiently, but a building that is likely to be longer-lasting and perhaps safer, too.
___________________

A large proportion of hospital patients have some kind of imaging scan: X-ray, MRI, CAT, and so on. Diagnostic radiologists are the humans who look at those scans and interpret them. Could most of their work be turned over to computers, with perhaps a few humans in reserve for the tough cases?

Hugh Harvey offers a perspective in \”Why AI Will Not Replace Radiologists\” (Medium: Towards Data Science, January 24, 2018). As Harvey notes: \”In late 2016 Prof Geoffrey Hinton, the godfather of neural networks, said that it’s ‘quite obvious that we should stop training radiologists.’\” In contrast, Harvey offers arguments as to \”why diagnostic radiologists are safe (as long as they transform alongside technology).\” The parenthetical comment seems especially important to me. Technology is especially good at taking over routine tasks, and the challenge for humans is to work with that technology while doing the nonroutine. For example, even if the machines can do a first sort-through of images, many patients will continue to want a human to decide what scans should be done, and with whom the results can be discussed. For legal reasons alone, no institution is likely to hand over life-and-death personal decisions to an AI program completely.

In addition, Harvey points out that as AI makes it much cheaper to do diagnostic scans, a likely result is that scanning technologies will be used much more often, and will be more informative and effective. Harvey\’s vision is that radiologists of the future \”will be increasingly freed from the mundane tasks of the past, and lavished with gorgeous pre-filled reports to verify, and funky analytics tools on which to pour over oceans of fascinating ‘radiomic’ data.\”
_____________________

The effects of technology will vary in important ways across jobs, and I won\’t twist myself into knots trying to draw out common lessons across these four examples. I will say that embracing these four technologies, and many more, is the only route to long-term economic prosperity.

Olympic Economics

Before settling into my sofa for a couple of weeks of watching the athletes slip and slide through the Winter Olympics from PyeongChang, I need to confess that the Games are a highly questionable economic proposition. One vivid illustration is that the new $100 million stadium in which the opening ceremonies will be held is going to be used four times in total–the opening and closing of the Winter Olympics, and the opening and closing of the Paralympics next month–and then it will be torn down. Andrew Zimbalist goes through the issues in more detail in \”Tarnished Gold: Popping the Olympics Bubble,\” which appears in the Milken Institute Review (First Quarter 2018).

Building new facilities (or dramatically refurbishing older ones) is a major cost for the Games. Zimbalist notes that for the previous Winter Games, \”the IOC embraced an ostentatious bid from Sochi for the 2014 Winter Olympics where almost none of the required venues or infrastructure were in place. It became the most expensive Olympics in history, with Russia ponying up between $50 billion and $67 billion — though how much of that actually went into construction and operations is unclear.\”

As it has become abundantly clear that the direct revenues that a host city receives from the Olympics–for example, revenues related to tickets and television rights–typically cover only about one-third of the costs of hosting, fewer cities are bidding to host the Games. The 2022 Winter Games came down to two bidders, Beijing in China and Almaty in Kazakhstan. Beijing \”won,\” as Zimbalist describes:

\”The Beijing organizing committee pitched its bid to the IOC by noting that it would use some venues left over from the 2008 summer Olympics. But China went along with the IOC’s penchant for creative accounting by excluding the cost of the high-speed railroad that will link Beijing to the downhill and cross-country ski areas (54 miles and 118 miles from the capital, respectively). That project will run to about $5 billion and have little value to the region after the Games are over.

\”Also excluded from the Beijing budget will be the substantial expense of new water diversion and desalination programs necessary for hosting Winter Games in China’s water- (and snow-) starved northern cities. North China has only 25 percent of the country’s water resources to supply nearly 50 percent of the population. Accordingly, China launched an $80 billion water-diversion program from the south before the 2008 summer Olympics.

\”But the north\’s water availability still remains below what the United Nations deems to be the critical level for health – let alone for an Olympics extravaganza. Zhangjiakou, the site of the Nordic skiing competition, gets fewer than eight inches of snow per year. Yanqing, the site of the Alpine skiing events, gets less than 15 inches of precipitation annually.

\”Both areas will thus require copious water for artificial snowmaking. But even if China manages to complete the necessary infrastructure for water diversion, it will amount to robbing Peter to pay Paul: Beijing, Zhangjiakou and Yanqing lie in one of China\’s most important agricultural regions, producing sorghum, corn, winter wheat, vegetables and cotton.

\”The government, moreover, is apparently counting on lasting value from the construction of the Winter Games, creating permanent ski resorts in the mountains bordering Inner Mongolia and the Gobi Desert. If the ski resorts survive, only China\’s richest residents will be able to afford them, while food supplies – and the incomes of the growers – will suffer.

\”Another strike against Beijing 2022 is that winter is one of the worst times for air pollution in this horribly polluted city. Deforestation of the northern mountains needed for Games infrastructure will only compound the problem.

\”One might wonder why, in light of the daunting complications of hosting the Winter Games in northern China, Beijing got the nod in the first place. The answer is simple: thanks to the prospect of big deficits, the only other city bidding was Almaty, capital of oil-drenched Kazakhstan, which has been ruled with an iron fist by the kleptocrat Nursultan Nazarbayev since independence in 1991.\”

Zimbalist doesn\’t offer parallel estimates for the PyeongChang games. The standard estimate floating around seems to be that South Korea will spend about $13 billion on facilities for the Winter Games, though such estimates often turn out to be too low. Also, this amount doesn\’t include infrastructure like a high-speed rail line over the 80 miles between Seoul and PyeongChang. A few years ago, analysts from the Hyundai Research Institute pegged these additional infrastructure costs at $43.8 billion.

The economic case for hosting the Olympics thus needs to rely on indirect benefits: short-term construction jobs before the Games, tourist spending during the Games, infrastructure and recognition that could last after the Games. Looking at past Olympics, such benefits are quite uncertain. The best-case economic scenario for the PyeongChang Games may be the Salt Lake City Winter Games of 2002. The underlying reason is that this area was an attractive and reachable destination for winter sports, but somewhat underappreciated before the Games. Its visibility and tourism seemed to get a long-term boost from the Games. Indeed, Salt Lake City has just announced that it would be interested in hosting the Games again in 2026 or 2030.

However, other homes for the Winter Games in recent decades have not succeeded in the same way: either the destination was already fairly popular for winter activities, and thus didn\’t receive a long-term tourism boost, or the area just didn\’t get a long-term boost. Here\’s the roll call of locations for the last 10 Winter Games: Sochi (2014), Vancouver (2010), Turin (2006), Salt Lake City (2002), Nagano (1998), Lillehammer (1994), Albertville (1992), Calgary (1988), Sarajevo (1984), Lake Placid (1980).

For the PyeongChang Games, ticket sales have not been brisk. Television ratings seem likely to be fine, but with the big time difference and people who access their media in other ways, they may not be great. Spending on facilities seems to have been kept under control, although this may also mean that the details of cost overruns haven\’t yet filtered out. Even the International Olympic Committee, not known for encouraging parsimony, has warned publicly that many of the new venues may become useless after the Games.

The ultimate economic payoff is likely to depend on whether PyeongChang becomes a considerably more prominent destination for winter tourist activities in the years after the Games. On the downside, PyeongChang at present has a small population (about 44,000), and its nightlife, restaurants, and hotels are correspondingly limited. It is also about 40 miles from the demilitarized zone separating South and North Korea, which might make potential tourists uncertain about plunking down money for reservations. On the upside, income levels have been growing rapidly in east Asia, especially in China. The demand for tourist destinations is rising. There are a number of successful South Korean ski resorts already. PyeongChang will almost certainly have economic costs far exceeding the benefits. But it has a reasonable chance of less red ink than other recent Winter Games, and seems likely to do far better on a cost-benefit calculation than either its predecessor in Sochi or its successor in Beijing.

For more on economics and the Olympics, here\’s a discussion from Zimbalist of why Boston opted out of even trying to host the 2024 Summer Games, and here\’s a discussion of an article by Robert A. Baade and Victor A. Matheson that appeared in the Spring 2016 issue of the Journal of Economic Perspectives, \”Going for the Gold: The Economics of the Olympics.\”

What Charter Schools Can Teach the Rest of K-12 Education

If you\’re interested in how K-12 schools might improve their performance, charter schools can be viewed as a laboratory experiment. Sarah Cohodes discusses the lessons they have to teach in \”Charter Schools and the Achievement Gap,\” written as a Policy Issue paper for Future of Children (Winter 2018). From a social science view, charter schools have two especially useful properties: there are enough of them to have a reasonable sample size, and a number of them are required to admit students through a lottery process–which means that those randomly selected to attend the charter can be compared with those randomly not selected to do so. Cohodes notes:

\”The first charter schools were created in Minnesota in 1993. Forty-three states and Washington, DC, now have laws that permit the operation of charter schools, and around 7,000 charter schools now serve more than 5 percent of students in the United States. They’ve grown steadily over the past 10 years, adding about 300 or 400 schools each year. To put this in perspective, about 10 percent of US students attend a private school, and 3 percent are homeschooled. …

\”Because charter schools are required by law to admit students by a random lottery if they’re oversubscribed, charter school admissions are analogous to an experiment in which participants are randomly assigned to a treatment group or a control group (a randomized controlled trial). After accounting for important details that arise from operating a lottery in the real world versus doing so purely for research purposes, such as sibling preferences and late applicants, a random lottery assigns the seats for charter schools that are oversubscribed. This allows researchers to compare the outcomes of a treatment group of students who were offered a seat in the lottery to a control group of those who were not.\”
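The comparison the authors describe can be sketched as a toy simulation: because offers are random, a simple difference in mean outcomes between offered and non-offered applicants estimates the effect of an offer. The numbers below are my own illustrative assumptions, not data from any charter school study.

```python
import random

random.seed(0)

# Toy simulation of a lottery-based comparison. The 0.2 standard
# deviation "true effect" is a hypothetical number for illustration.
TRUE_EFFECT = 0.2
baseline = [random.gauss(0, 1) for _ in range(10_000)]

# Offer a seat to half the applicants; since baseline scores are drawn
# i.i.d., a simple split is equivalent to a random lottery.
treated = [score + TRUE_EFFECT for score in baseline[:5_000]]
control = baseline[5_000:]

# Random assignment makes the difference in means an unbiased estimate
# of the effect of being offered a seat.
estimate = sum(treated) / len(treated) - sum(control) / len(control)
assert abs(estimate - TRUE_EFFECT) < 0.1
```

Real lottery studies must also handle the complications the quotation mentions, like sibling preferences, late applicants, and the gap between being offered a seat and actually attending.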

Of course, it\’s also possible to study the effects of attending charter schools by finding a good comparison group outside the school, in what is called an \”observational\” study, but choosing an appropriate comparison group necessarily adds an element of uncertainty. The research on charter schools finds that on average, they perform about the same as traditional public schools. But charter schools vary considerably: some do worse than traditional public schools, and some do better.

\”The best estimates find that attending a charter school has no impact compared to attending a traditional public school. That might surprise you if you were expecting negative or positive impacts based on the political debate around charter schools. But using both lottery-based and observational estimates of charter school effectiveness in samples that include a diverse group of charter schools, the evidence shows, on average, no difference between students who attend a charter and those who attend a traditional public school. However, much of the same research also finds that a subset of charter schools has significant positive impacts on student outcomes. These are typically urban charter schools serving minority and low-income students that use a no excuses curriculum.\”

Here\’s how the \”no excuses\” approach is defined in a couple of the lottery-based studies:

\”In Massachusetts, school leaders were asked whether their school used the no excuses approach, and schools that did so tended to have better results. The study also drilled down to examine specific practices associated with no excuses. It found that a focus on discipline, uniforms, and student participation all predicted positive school impacts, with the important caveat that no excuses policies are often implemented together, so that it’s difficult to separate the correlations for individual characteristics.

\”The New York City study aggregated school characteristics into practice inputs and resource inputs. The practice inputs followed no excuses tenets: intensive teacher observation and training, data-driven instruction, increased instructional time, intensive tutoring, and a culture of high expectations. The resource inputs were more traditional things like per-pupil spending and student-teacher ratios. The study found that each of the five practice inputs, even when controlling for the others, positively correlated with better charter school effectiveness; the resource inputs did not.\”

Cohodes offers an overview of the empirical studies on charter schools. Of course, such studies need to take into account possibilities like whether better-qualified students are applying to charters in the first place, or whether charters may benefit from using disciplinary or suspension policies not allowed in other schools, and so on. But here\’s a bottom line:

\”Attending an urban, high-quality charter school can have transformative effects on individual students’ lives. Three years attending one of these high-performing charter schools produces test-score gains about the size of the black-white test-score gap. The best evidence we have so far suggests that these test-score gains will translate into beneficial effects on outcomes like college-going, teen pregnancy, and incarceration. Given the large and potentially longer-term effects, the most effective charter schools appear to hold promise as a way to reduce achievement gaps.\”

If swallowing the entire \”no excuses\” approach is too much, the one practice that seems most important to me is intensive tutoring, so that students don\’t fall so far behind that they lose touch with the classroom. She writes:

\”One charter school practice stood out: high-quality tutoring. Many high-quality charter schools require intensive tutoring as a means of remediation and learning, often incorporating one-on-one or small group tutoring into the school day rather than as an add-on or optional activity. … As a strategy to close achievement gaps, adopting intensive tutoring beyond the charter sector may be less controversial than focusing explicitly on charter schools.\”

There are both direct and indirect lessons from charter schools. Cohodes focuses on the direct lessons: a \”no excuses\” approach that includes intensive tutoring. The indirect lesson is that it\’s useful to have experimentation in how K-12 education is provided, and then to have those experiments evaluated rigorously, so that productive ideas have a better chance to spread.

Readers interested in more on lessons from charter schools might start with \”The Journey to Becoming a School Reformer\” (February 13, 2015), which describes how Roland Fryer, an economist and school reformer, first sought to figure out the key elements of charter school success and then to apply them in public schools in Houston and elsewhere. Also, Julia Chabrier, Sarah Cohodes, and Philip Oreopoulos offer a discussion of \”What Can We Learn from Charter School Lotteries?\” in the Summer 2016 issue of the Journal of Economic Perspectives (30:3, 57-84).

US History and the Path to European Integration

The early history of the United States involves a time when mobility of labor, goods, and capital between the 13 states was often costly and difficult, and when a weak central government had little power to address regional imbalances. But over time, the US political system and economy knitted themselves together. Thus, as the European Union seeks to increase the freedom of labor, goods, and capital to move across national borders, in a setting with a weak central European government, it is natural to reconsider some parallels to early US history. Along these lines, Jacob Funk Kirkegaard and Adam S. Posen have edited a collection of five essays for the European Commission, published in a report called Lessons for EU Integration from US History (January 2018, Peterson Institute for International Economics).

An underlying theme of the report is that the task of European policymakers has been made broader and more complex than it was back in the 1990s, before the euro came into existence. Kirkegaard and Posen write in an introductory essay: \”Monetary unification cannot stand stably on its own without additional integration of banking and capital markets, and some fiscal policies.\” Thus, the list of essays is:

  1. \”Realistic European Integration in Light of US Economic History,\” by Jacob Funk Kirkegaard and Adam S. Posen
  2. \”A More Perfect (Fiscal) Union: US Experience in Establishing a Continent‐Sized Fiscal Union and Its Key Elements Most Relevant to the Euro Area,\” by Jacob Funk Kirkegaard
  3. \”Federalizing a Central Bank: A Comparative Study of the Early Years of the Federal Reserve and the European Central Bank,\” by Jérémie Cohen‐Setton and Shahin Vallée
  4. \”The Long Road to a US Banking Union: Lessons for Europe,\” by Anna Gelpern and Nicolas Véron
  5. \”The Synchronization of US Regional Business Cycles: Evidence from Retail Sales, 1919–62,\” by Jérémie Cohen‐Setton and Egor Gornostay

Of course, the point isn\’t that Europe is or should be following in the footsteps of US history. As the authors write:

\”It is not important whether the European Union is integrating more or less quickly than the United States did. Such abstract benchmarking misses all the important points about the nature and sequencing of integration as political processes. The many fundamental differences between the United States and the European Union prevent drawing too precise, let alone literal, a mapping from US economic development to Europe’s path forward today. … Rather than pointing towards the current state of US continental integration as the guide for the European Union, we analyze the US responses throughout history to economic and political challenges and to numerous domestic political constraints—some not unlike what Europe faces today. We believe that EU leaders should draw lessons from these US responses for how, how far, and how fast their aspirations for EMU should progress. Yet, it must be acknowledged that the United States solved most of its political and economic challenges through centralization and federal government institution building.\”

Kirkegaard and Posen put together a thought-provoking list of nine \”Themes of US Economic Integration over the Long Run,\” which are of course explored in more detail in the essays that follow. Here\’s a sampler:

Institution Building Requires Repeated Attempts and Often Constitutional Revision: … The US Constitution itself has been updated, or amended, 27 times.  … [T]he first two central banks in the United States were closed down, and the initial monetary policy architecture of today’s Federal Reserve required repeated and far‐reaching reform in the first two decades after its founding. …  Economic integration cannot be limited forever to satisfy those who are averse to change.  … 

Fiscal Integration Takes a Very Long Time: From the beginning the US federal government had the power to issue its own debt, but for the first more than 130 years of American history it did so sparingly and essentially only to finance the nation’s wars. Only by the 1930s did outstanding US federal government debt permanently exceed total state and local government debt. …

The Right Fiscal Sequencing Is to First Identify the Need and then to Find the Resources: The US federal government budget expanded gradually, but each expansion generally followed the same clear political sequence. Congress would identify a problem that required a nationally consistent solution and would then proceed to find the necessary funding for it. Frequently, the federal government dedicated or earmarked particular revenue sources to solving specific preidentified problems. …

Large Centralized Fiscal Capacity Synchronizes Regional Business Cycles: The increasing synchronization of US business cycles across a diverse and continental‐sized economy occurred only after the dramatic increase in the federal government’s fiscal role in the 1930s New Deal (and subsequently World War II). Previously, a pattern of divergent regional booms and busts was the costly norm even as markets integrated over decades. …  US history suggests that European policymakers ought instead to contemplate the creation of a specialized asymmetric shock absorption instrument for at least the euro area. …

New Centralized Institutions Unite Opposition and Can be Vulnerable to Regulatory Arbitrage:  … In Europe, EMU itself, as originally designed in the Maastricht Treaty, is of course the most prominent example of a half‐built house that ultimately suffered a regionally driven crisis. This led to scapegoating for being too centralized, when the problem was that it was insufficiently so.

Only Complete Fiscal Support for the Lender of Last Resort Removed Redenomination Risk: During the early decades following the Federal Reserve System’s founding in 1913, negative feedback (or doom) loops akin to those in the euro crisis materialized between regional banking sectors, state governments, and the nonfinancial private sector in the same region(s). Only after the comprehensive reforms initiated by President Franklin Roosevelt—including the potentially unlimited fiscal support for the Federal Reserve Board and regional Federal Reserve banks and the establishment of the Federal Deposit Insurance Corporation (FDIC) with a federal fiscal backstop—did interregional differentials in interest rate and risk perceptions end. US history thus implies that only similarly credible actions to support the European Central Bank (ECB) and banking supervisors will alleviate stubborn country‐specific redenomination risks inside the euro area.

Central Absorption of Government Responsibilities Often Occurs Following State‐Level Policy Failures: Important additions to US federal government responsibilities historically took place as partial state‐level services provision collapsed financially. … Generally available old‐age pension provision through Social Security and unemployment benefits were introduced during the Great Depression, as similar programs existing in just a few states became unsustainable. And federal deposit insurance was similarly adopted in 1933, following the largest financial panic in a sequence of them, when a wave of failures spread among smaller state‐level insurance schemes. …

Few Core Government Functions Are Exclusively State or Federal Responsibilities: … [I]n practice, the federal government has only very few exclusive responsibilities, such as defense or foreign affairs. Many core social insurance and regulatory responsibilities are in practice carried out through state‐federal government partnerships both institutionally and financially. …

National Security Crises and Other External Pressures Are Important Integrationist Forces: Jean Monnet is famously credited for suggesting that the European Union would be forged from the group’s responses to its successive crises. The same is true for many of the core institutions of the American central government, but primarily these were security crises (economic crises, as noted, were usually insufficient to prompt greater integration on their own, despite their evident costs). … The vast majority of American federal government institutions created in crisis periods have subsequently been maintained. …

For some earlier thoughts about US history and lessons for European economic unification, see:

Behind the Declining Labor Share of Income

Total income earned can be divided into what is earned by labor in wages, salaries, and benefits, and what is earned by capital in profits and interest payments. The line between these categories can be blurry: for example, should the income received by someone running their own business be counted as \”labor\” income received for their hours worked, or as \”capital\” income received from their ownership of the business, or some mixture of both?

However, the US Bureau of Labor Statistics has been doing this calculation for decades using a standardized methodology over time. The US labor share of income was in the range of 61-65% from the 1950s up through the 1990s. Indeed, for purposes of basic long-run economic models, the share was sometimes treated as a constant. But in the early 2000s, the labor share started dropping and fell to the historically low range of 56-58%. Loukas Karabarbounis and Brent Neiman provide some perspective on what has happened, citing a lot of the recent research, in \”Trends in Factor Shares: Facts and Implications,\” appearing in the NBER Reporter (2017, Number 4).


They built up a data set for a range of countries, and found that many of them had experienced a decline in labor share. Thus, the underlying economic explanation is unlikely to be a purely US factor, but instead needs to be something that reaches across many economies. They write: \”The decline has been broad-based. As shown in Figure 1, it occurred in seven of the eight largest economies of the world. It occurred in all Scandinavian countries, where labor unions have traditionally been strong. It occurred in emerging markets such as China, India, and Mexico that have opened up to international trade and received outsourcing from developed countries such as the United States.\”

They argue that one major factor behind this shift is cheaper information technology, which encouraged firms to substitute capital for labor. They write:

\”There was a decline in the price of investment relative to consumption that accelerated globally around the same time that the global labor share began its decline. A key hypothesis that we put forward is that the decline in the relative price of investment, often attributed to advances in information technology, automation, and the computer age, caused a decline in the cost of capital and induced firms to produce with greater capital intensity. If the elasticity of substitution between capital and labor — the percentage change in the capital-labor ratio in response to a percentage change in the relative cost of labor and capital — is greater than one, the lowering of the cost of capital results in a decline in the labor share…. [O]ur estimates imply that this form of technological change accounts for roughly half of the decline in the global labor share. …

\”If technology explains half of the global labor share decline, what might explain the other half? We use investment flows data to separate residual payments into payments to capital and economic profits, and find that the capital share did not rise as it should if capital-labor substitution entirely accounted for the decline in the labor share. Rather, we note that increases in markups and the share of economic profits also played an important role in the labor share decline.\” 
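The substitution mechanism Karabarbounis and Neiman describe can be illustrated with a few lines of code under a CES production function, where cost minimization implies a labor-to-capital payment ratio of wL/rK = ((1-a)/a)^sigma * (w/r)^(1-sigma). The functional form and parameter values below are my own illustrative assumptions, not estimates from their paper.

```python
# Illustrative CES sketch: when the elasticity of substitution sigma
# exceeds one, a cheaper cost of capital r lowers the labor share.
# Parameter values are arbitrary choices for illustration.

def labor_share(w, r, a=0.35, sigma=1.25):
    """Labor share implied by cost minimization with CES production.
    a is the capital distribution parameter; sigma is the elasticity
    of substitution between capital and labor."""
    ratio = ((1 - a) / a) ** sigma * (w / r) ** (1 - sigma)  # wL / rK
    return ratio / (1 + ratio)

before = labor_share(w=1.0, r=1.0)
after = labor_share(w=1.0, r=0.7)  # capital becomes 30% cheaper

# With sigma > 1, firms substitute toward cheaper capital strongly
# enough that labor's share of total income falls.
assert after < before
```

With sigma below one, the same price change would raise the labor share, which is why the size of the elasticity is central to their argument.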

The fall in the labor share of income has consequences that ripple through the rest of the global economy. For example, it contributes to the rise in inequality. Another change from a few decades ago is that corporations used to raise money from household savers, by issuing bonds, taking out loans, or selling stock. But with the rise in the capital share and corporate profits, about two-thirds of global investment is now financed by firms themselves. Indeed, it used to be that there were net flows of financial capital into the corporate sector; now, there are net flows of financial capital out of the corporate sector (through stock buy-backs, the rise in corporate cash holdings, and other mechanisms). When comparing current stock prices and price-earnings ratios to historical values, it\’s worth remembering that when the capital share of income is higher, stock prices represent a different value proposition than they did several decades ago.

For previous posts on the declining labor share of income, see:

Could Driverless Trucks Create More Trucking Jobs? Uber Says "Maybe"

Could driverless trucks create more trucking jobs? It sounds logically impossible. But remember that automatic teller machines did not reduce the number of jobs for bank tellers, and may even have increased it slightly, because they altered the range of tasks typically done by a bank teller. In general, new technology doesn\’t just alter a single dimension of an industry, but can lead to complementary changes as well. Uber Advanced Technologies Group (!) spells out a scenario in which driverless trucks lead to more trucking jobs in \”The Future of Trucking: Mixed Fleets, Transfer Hubs, and More Opportunity for Truck Drivers\” (Medium, February 1, 2018).

Imagine that with the arrival of driverless trucks, the trucking industry splits into two parts: long-distance driverless trucks, which operate almost entirely on highways and large roads between a network of \”transfer hubs,\” and short-distance trucks with human drivers, which carry loads from the transfer hubs to local addresses. As the Uber authors point out:

\”The biggest technical hurdles for self-driving trucks are driving on tight and crowded city streets, backing into complex loading docks, and navigating through busy facilities. At each of the local haul pick ups and drop offs, there will need to be loading and unloading. These maneuvers require skills that will be hard for self-driving trucks to match for a long time. By taking on the long haul portion of driving, self-driving trucks can ease some of the burden of increasing demand, while also creating an opportunity for drivers to shift into local haul jobs that keep them closer to home.\”

The crucial part of the scenario is that most trucks, given their human drivers, are now on the road for only about one-third of every day. However, the long-distance driverless trucks could be on the road two-thirds or more of every day. As a result, the costs of long-distance shipping would drop substantially, which in turn would give firms and consumers an incentive to expand the quantity of what they ship by truck. In one simulation that has 1 million driverless long-distance trucks on the road, the result is an additional 1.4 million drivers needed for shorter-haul local trucking.
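The utilization arithmetic behind this claim can be written out in a few lines; the speed and hours below are my own illustrative assumptions, not figures from Uber\’s analysis.

```python
# Back-of-the-envelope sketch of the truck utilization argument.
AVG_SPEED_MPH = 55             # assumed long-haul average speed

human_hours_per_day = 8        # on the road about one-third of the day
driverless_hours_per_day = 16  # on the road about two-thirds of the day

human_miles = AVG_SPEED_MPH * human_hours_per_day            # 440 miles/day
driverless_miles = AVG_SPEED_MPH * driverless_hours_per_day  # 880 miles/day

# A driverless truck covers twice the long-haul miles per day, spreading
# the fixed cost of the vehicle over twice as many miles shipped.
utilization_gain = driverless_miles / human_miles
assert utilization_gain == 2.0
```

Cheaper long-haul miles then expand total shipping demand, which is where the additional short-haul jobs in the simulation come from.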

For a great many truckers, short-hauling offers a better lifestyle, in part because you can sleep in your own bed every night. It\’s not clear how wages might adjust in response to these kinds of changes. Wages for long-haul truckers might fall, because competing with driverless technology on those routes would be tough, but the shift in wages for short-haul truckers depends on other ways in which the industry might evolve. The Uber folks are trying to crowd-source the economic analysis here by putting their models and data up on a GitHub site, so if this kind of analysis floats your boat (or you want to assign it as a student project), you have an option here.

Just to be clear, I\’m not endorsing the scenario that 10 years from now, there will be a million autonomous trucks on US highways and even more truckers in the short-haul business. But I am endorsing the broader point that a simple \”technology replaces jobs\” story–even one as seemingly straightforward as how autonomous trucks will affect the number of truck drivers–is always more complex and sometimes even counterintuitive to how it may appear at first glance.