The Fourth Industrial Revolution and the Future of Work

I occasionally pause for a moment to remember that just a few years ago, before we were concerned that AI would take all the jobs, we were worried that new robotics technologies would take all the jobs. And before that, we were worried that automation would take all the jobs. Can we learn something from previous industrial revolutions about how the current one might play out?

Arthur H. Goldsmith delivered the Presidential Address to the Southern Economic Association in 2022 on “The 4th Industrial Revolution and the Future of Work: Reasons to Worry and Policies to Consider.” The written version (co-authored with James F. Casey) is now posted online prior to publication in the Southern Economic Journal.

What were the first three industrial revolutions? As Goldsmith tells it:

The First IR emerged in England around 1765 with the advent of the steam engine to mechanize production, especially in the agriculture, mining, and transportation sectors. … The Second IR arrived in 1865, roughly a 100 years after the first. New sources of power—electricity and the internal combustion engine—were the signature technological advancements of the Second IR. … The Third IR—the Digital Revolution—begins around 1970, once again about a century after the preceding IR. Digitization entails representing information in bits, and the 3rd IR is about how a new collection of machines, and advances in coding (i.e., HTML) are used to—store, transfer, and analyze data (Goldfarb & Tucker, 2019). Introduction of the personal computer, super computers, and the internet of things enabled automation of industrial processes, space exploration, and dramatic advances in telecommunications, and science through research and development, such as the Human Genome Project.

So what’s the Fourth Industrial Revolution? Goldsmith argues that the technological changes of the last decade or two are not just a continuation of the Digital Revolution, but qualify as a separate Industrial Revolution:

Building on the era of digitization a dazzling array of new technologies have emerged in recent years including—3-D printing, nanotechnology, artificial intelligence, machine learning, quantum computing, big data, and cloud storage. Simultaneously, there have been vast improvements in existing technologies such as industrial robots, vision systems, sensors, and algorithms. The 4th IR is the result of integrating these technologies in creative and productive ways with robotics and artificial intelligence at the center of the transformation. Perhaps the most visible form of this evolution is Generative AI, which combines machine learning and artificial intelligence—guided by the architecture of the human brain—to learn about countless relations and patterns through exposure to vast amounts of data. This technology is then capable of producing data including text, images, audio, and code. …

The velocity, scope, and systems (i.e., production, management, and governance) impact of the 4th IR is striking. Robot density, the number of industrial bots per 10,000 workers, a standard barometer of manufacturing automation doubled globally between 2017 and 2022 (Heer, 2021, 2024)—an extraordinary pace—while in the U.S. robot density more than tripled between 1995 and 2017 (Bharadwaj & Dvorkin, 2019) and rose another 12% between 2020 and 2022. Likewise, the velocity of artificial intelligence activity in recent years is impressive. Corporate artificial intelligence investment spending in the U.S. rose by 423%, from 13 to 68 billion, between 2015 and 2020 (Statista Research Team, 2022), and global growth in artificial intelligence investment advanced at an even greater rate (Thormundsson, 2023) from 13 billion in 2015 to 92 billion in 2022.

The reader will note that by Goldsmith’s accounting, the US and global economies are really just at the start of this fourth Industrial Revolution. Thus, discussions about what effects it will have are necessarily speculative. As he writes: “An overarching question is will automation reduce the number of jobs or will mechanization generate so many new positions employment, on net, rises?”

I confess that I am skeptical of phrasing the issue in terms of total jobs. An economist pointed out to me long ago that the single biggest determinant of the number of jobs in any economy is the population of the country. Thus, it seems to me as if the key issues are not about raw number of jobs, but about whether this industrial revolution will lead either to historically high levels of persistent unemployment, or to a pattern with usual levels of unemployment, but a higher share of workers in lower-wage jobs. Goldsmith argues the point this way:

This is the fundamental conceptual difference between the 4th IR and prior industrial revolutions during which advances in technology were considered skill-neutral—they improved the productivity and workplace outcomes for workers with different levels of formal education. The impact of this difference, skill-biased versus skill neutral, is profound, since it will undermine the job prospects and earnings for those with modest formal educational attainment—persons in the middle class—while advancing the economic situation for individuals who possess high levels of formal education.

At least to me, it isn’t obvious that the 4th Industrial Revolution is different in this way. I’ve been reading for decades that the 3rd Industrial Revolution involved “skill-biased” technical change, and in this way helped to generate growing inequality of incomes since about 1980. In addition, the very limited evidence now available on effects of artificial intelligence tools in the workplace suggests that they can be especially valuable to lower-skill workers, rather than higher-skill workers. The underlying reason is that AI tools in effect can make pre-existing expertise more available to everyone, which is a bigger boost for those with less experience or lower skill.

But while it isn’t (yet) clear to me that AI will lead to bigger technological displacement of workers than previous industrial revolutions, it nonetheless remains true that–in a healthy economy which is ever-shifting and ever-evolving–some workers will find that the skills and experience they developed in an existing job are no longer valued as highly in the market. In response to this ongoing issue, I’ve argued in the past for “active labor market policies” (for example, here and here). The US currently emphasizes “passive” labor market policies like paying unemployment insurance or providing safety net support to low-income households, while “active” labor market policies would expand the government role in job search and training.

In particular, Goldsmith emphasized a potential role for the federal government as a coordinator and accreditation mechanism for “certificate programs.” As Goldsmith points out, private firms like Google have taken some prominent steps in this direction already:

Google rolled out the Google Career Certificate Program—a skill development initiative—in 2020 (Google, 2021; Hess, 2020). The Program offers Certificates in: IT Support, Data Analytics, Project Management, UX Design, and Android Development. The curriculum for each certificate was developed by Google and is taught by Google employees, solely online, using the learning platform Coursera. These certificate programs are self-paced, designed to be completed in 3–6 months, and admission does not require a college degree. It costs $49 a month to use the Coursera platform, and Google has funded 100,000 need-based scholarships for eligible applicants. In addition, Google awarded $10 million in grants to three nonprofits that partner with Google to provide workforce development to targeted groups including women, veterans, and underrepresented groups. The Google Career Certificates Employer Consortium (Google, 2024) includes over 150 U.S. companies including Deloitte, Target, and Verizon who consider Google Career Certificate graduates for entry-level jobs, which typically require a four-year college degree. Moreover, certificate holders have access to an exclusive job platform where they are fast-tracked when applying for jobs with Consortium employers. This initiative could be scaled up by the inclusion of additional firms and the government independently developing, or codeveloping, additional certificate programs, and leading the delivery.

To me, a main challenge for certificate programs is one of focus: each one needs to be laser-focused on specific skills, and not get loaded up with a bunch of other topics and skills that might be nice, but should be separated off into a different certificate. This kind of focus helps to keep down costs of the program and the time needed to qualify for the certificate, and thus will encourage workers to see it as a viable option.

A Federal Guarantee of Paid Vacation?

The rules governing the US labor market are just different from those in other high-income countries in some ways. One difference involves paid time off. Here’s a figure from Betsey Stevenson, “A federal guarantee for earned paid time off” (Hamilton Project at the Brookings Institution, October 2024).

You will notice that in this comparison across various high-income countries, the US does not have a bar at all. Hmmm.

There are various ways to respond to this figure. One is to point out that the rules sketched here apply to full-time, full-year workers, and often only apply after the worker has been with an employer for several years. Stevenson writes:

These estimates report the statutory required number of paid leave days based on full-time, full-year workers. Many countries require a waiting period before paid leave becomes available. Canada and Japan have fewer days required for recent employees with the full amount of statutory leave being granted after several years of tenure with an employer. Only countries that statutorily require public holidays to be paid are listed as requiring paid holidays. However, in many countries that do not mandate pay for public holidays, workers often receive holidays off with pay or are compensated with another day off by custom or collective bargaining agreements.

Another way to respond is to wonder how these rules and laws about paid vacation and paid holidays interact with other kinds of paid leave. Stevenson notes that if we put together a graph with those kinds of leave, the US would still be the country with no bar at all on the graph: “In addition to these requirements for annual leave as part of employees’ compensation packages, most advanced economies also require additional amounts of paid sick leave. In addition, all advanced economies have national programs ensuring that people have access to paid family and medical leave …”

Stevenson sketches how a US policy along these lines might work: in her version, paid time off would accrue based on hours worked:

I propose that earned time off should accrue at a rate of one hour per 50 hours worked (2 percent of hours worked per week) in the first two years of the policy, increasing to one hour per 25 hours worked (4 percent of hours worked per week) after two years. In the first two years, workers must be able to accrue up to 40 hours a year; after two years, they must be able to accrue up to 80 hours a year. The reason for capping the earned leave is so that employers can simply offer full-time, full-year employees 80 hours a year (40 in the first two years), without needing to count hours. It is an administratively easy option.
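
Stevenson’s accrual rule is mechanical enough to express in a few lines of code. The sketch below is my own illustration, not something from the article; the rates and caps come from the passage above, while the function name and the assumption of roughly 2,000 hours for a full-time, full-year worker are just conveniences for the example.

```python
# Minimal sketch of the proposed accrual rule (illustrative only).
# Assumptions: "policy_year" counts years since the policy took effect;
# the rates (1/50 and 1/25) and caps (40 and 80 hours) come from the quote above.

def accrued_paid_time_off(hours_worked: float, policy_year: int) -> float:
    """Return hours of earned paid time off for one year of work."""
    if policy_year <= 2:
        rate, annual_cap = 1 / 50, 40.0   # 2% of hours worked, capped at 40 hours/year
    else:
        rate, annual_cap = 1 / 25, 80.0   # 4% of hours worked, capped at 80 hours/year
    return min(hours_worked * rate, annual_cap)

# A full-time, full-year worker (about 2,000 hours) hits the cap exactly:
print(accrued_paid_time_off(2000, policy_year=1))  # 40.0
print(accrued_paid_time_off(2000, policy_year=3))  # 80.0
# A half-time worker accrues proportionally:
print(accrued_paid_time_off(1000, policy_year=3))  # 40.0
```

The last lines show the administrative simplification Stevenson points to: a full-time, full-year worker hits the annual cap exactly, so an employer can simply grant the capped amount without tracking hours.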

Those interested in sorting through specific questions about such a policy will find a lot of answers in Stevenson’s article. Here, I’ll just note that there are roughly a jillion questions one might ask about a paid leave policy: for example, it’s not clear how it works for workers who change jobs frequently. Some high-income countries have developed an “insider/outsider” dynamic, where “insider” workers who have an attachment to a certain employer receive lots of benefits, but firms have an incentive to figure out ways of hiring that won’t make workers eligible for benefits, and a group of “outsider” workers develops without access to the benefits. I would worry that a paid leave policy will not actually be a guarantee for all workers, but instead will tend to leave out younger workers, as well as those with lower incomes and less connection to the labor force. In addition, labor market policies that work for much smaller countries (in the graph above, Iceland and Luxembourg each have a national population of less than 1 million) or countries with a much stronger tradition of unionization may not be well-suited for the enormous and widely varied US economy. US states, the “laboratories of democracy,” are already experimenting with various kinds of paid leave policies.

Of course, one answer to any practical questions about paid leave is that other countries have already in fact been doing it. But in addition, building political support for a paid leave policy in the United States is also a cultural issue. The typical American worker puts in more hours each year than workers in most other high-income countries; indeed, many American workers don’t take all of the paid vacation to which they are currently entitled. Stevenson argues:

Americans work more hours per year than workers in any other advanced country. Many Americans view hard work and long hours as a path to career advancement, personal fulfillment, and higher incomes. In a competitive job market, some Americans fear that taking too much time off could jeopardize their job security. They may worry about losing wages, falling behind, being replaced, or missing out on promotions and career advancement opportunities. Many employees have high workloads and believe that taking time off would lead to an unmanageable backlog of work. And, certainly many workers work long hours to make ends meet. In the U.S. there is both a stigma against taking time off from work and no federal right to do so. … A national policy guaranteeing paid earned time off would shift attitudes toward time off from work and make earned paid time off a basic right for workers. In America, if you work hard and play by the rules, you should be able to afford to take a day off and not lose pay.

The National Childcare Program During World War II

The United States has had a nationwide childcare program at one time in its history: a temporary program during World War II. Tim Sablik of the Federal Reserve Bank of Richmond tells the story and summarizes some economic research on the topic in “When Uncle Sam Watched Rosie’s Kids: To support women working on the homefront in World War II, the U.S. government funded a temporary nationwide child care program” (Econ Focus: Federal Reserve Bank of Richmond, Fourth Quarter 2024).

As Sablik reminds us, only about 10% of married women reported working outside the home in the 1920s. A number of female-dominated professions like teaching had “marriage bars”–that is, a woman was barred from continuing in the job if she got married, on the basis that married women should be focused on raising children. But World War II changed the terms of the social debate. As Sablik writes:

Once the United States entered the war in late 1941, the country needed to mobilize both the personnel and the materials to fight a war on two fronts. While American men reported to training camps and shipped off overseas, government officials called upon women to support the production of tanks, planes, ships, munitions, and other supplies at home. According to a 1953 report from the U.S. Department of Labor’s Women’s Bureau, nearly half of all single women were already in the workforce prior to the war. But the labor force participation rate for married women was much lower — around 15 percent. For policymakers hoping to ramp up war production, the report’s authors observed, “Married women constituted the country’s greatest labor reserve.”

Many of these married women were also mothers, so bringing them into the workforce meant grappling with the issue of child care. During a 1943 hearing before the Senate Committee on Education and Labor, witnesses shared stories of children locked in cars or chained to trailers while mothers were at work. Factories reported an increase in absenteeism on Saturdays when schools were closed. Others expressed concerns about rising juvenile delinquency among school-age children left to their own devices after school and during the summer.

The legislative path seems to have worked like this. In 1940, Congress passed the National Defense Housing Act, often called the Lanham Act, aimed at building more housing. But by 1941, Congress had expanded the law so that its funding could support “any facility necessary for carrying on community life substantially expanded by the national-defense program.” In 1942, the House Committee on Public Buildings and Grounds agreed, without public debate or legislation, that the funding could also be used for child care. By 1943, Lanham Act funding was available for 1,150 nurseries. At the peak in 1944, there were 3,100 centers with about 130,000 children enrolled across the country.

Sablik draws on research from Chris Herbst for some descriptive details:

Lanham nurseries provided care for children from ages 2 to 5, while child care centers looked after school-age children before and after school and during the summer. Consistent with the Children’s Bureau’s recommendations, few if any Lanham facilities provided care for children under the age of 2, despite expressed demand from working mothers with young children. According to Herbst, it was typical for preschool children to spend 12 hours per day at the nurseries. When school was in session, older children might spend a few hours before and after school. The availability of care also varied according to local need. In communities with factories operating 24 hours per day, centers were open at night.

To get the program up and running quickly, FWA [Federal Works Agency] administrators rented and reused existing buildings and relied on schoolteachers for staff. Federal agencies created a training program for Lanham teachers and volunteers, and some cities partnered with local universities to create their own training. Federal guidelines recommended keeping classrooms small, with a 10:1 student-to-teacher ratio, and Herbst found that most centers followed this recommendation. Students were served lunch, a snack, and even dinner in cases where centers were open late. That said, quality varied, as the FWA left operations largely up to the discretion of local administrators. In his article, Herbst cited the example of a center in Baltimore that had 80 children in one room with one bathroom, and those children had to cross a highway to reach the playground.

It’s not clear in a statistical sense how much this national child care effort actually increased the labor force participation of women. There was no rule limiting the centers to working mothers. The centers were typically established in places where the share of mothers who were working was already quite high. Three days after the Japanese surrender in August 1945, the program administrators announced that the program would be wound down. The expectation was that women would leave the workforce, freeing up the jobs for returning soldiers.

However, follow-up research “found lasting positive effects on children who grew up in areas with Lanham centers, including generally improved outcomes in high school and higher earnings in adulthood.” Given the extreme disruptions of civilian life during World War II, and many changes in the United States since then (like smaller average family size and higher education and incomes for parents), it would be unwise to extrapolate too readily from this earlier program. But the outcomes are nonetheless interesting.

For those who want a taste of the academic research on outcomes for children, useful starting points are:

  • Derrington, Taletha M., Alison Huang, and Joseph P. Ferrie. “Life Course Effects of the Lanham Preschools: What the First Government Preschool Effort Can Tell Us About Universal Early Care and Education Today.” National Bureau of Economic Research Working Paper No. 29271, September 2021. (Article available with subscription.)
  • Ferrie, Joseph P., Claudia Goldin, and Claudia Olivetti. “Mobilizing the Manpower of Mothers: Childcare under the Lanham Act during WWII.” National Bureau of Economic Research Working Paper No. 32755, July 2024. (Article available with subscription.)
  • Herbst, Chris M. “Universal Child Care, Maternal Employment, and Children’s Long-Run Outcomes: Evidence from the US Lanham Act of 1940.” Journal of Labor Economics, April 2017, vol. 35, no. 2, pp. 519-564. (Article available with subscription.)

Larry Summers on the Economics of AI

Joe Walker serves as interlocutor in “Larry Summers — AGI and the Next Industrial Revolution” (The Joe Walker Podcast, October 22, 2024). Here are a couple of points that caught my eye, but there is much more in the interview itself.

Here’s Summers on the long-term increase in economic output over time and the interrelationship with technology:

[T]he more I study history, the more I am struck that the major inflection points in history have to do with technology. I did a calculation not long ago, and I calculated that while only 7% of the people who’ve ever lived are alive right now, two-thirds of the GDP that’s ever been produced by human beings was produced during my lifetime. And on reasonable projections, there could be three times as much produced in the next 50 years as there has been through all of human history to this point. … Of course, I think that this [AI] technology potentially has implications greater than any past technology, because fire doesn’t make more fire, electricity doesn’t make more electricity. But AI has the capacity to be self-improving.

There’s an interesting dynamic between strong technological advance in a given sector and the share of that sector in the economy. Imagine that it becomes much cheaper to make something, so that its price is falling sharply. The quantity demanded of the good increases, at least up to a point. In the context of the economy as a whole, the size of a given sector is determined by the quantity it produces multiplied by the price. If price keeps falling, and quantity demanded doesn’t keep rising as quickly, then the share of a high-productivity sector in the economy will decline. Similarly, as AI technologies plummet in price, it’s at least possible that the output share of AI technologies in the economy will decline as well. Here’s Summers:

[S]ectors where there’s activities where … there is sufficiently rapid growth almost always see very rapidly falling prices. And unless there’s highly elastic demand for them, that means they become a smaller and smaller share of the total economy. So we saw super rapid growth in agriculture, but because people only wanted so much food, the consequence of that was that it became a declining share of the economy. And so even if it had fast or accelerating growth that had less and less of an impact on total GDP growth. In some ways we’re seeing the same thing happen in the manufacturing sector where the share of GDP that is manufacturing is declining. But that’s not a consequence of manufacturing’s failure. It’s a consequence of manufacturing’s success. 

A classic example was provided by the Yale economist Bill Nordhaus with respect to illumination. The illumination sector has made vast progress, 8, 10 per cent a year for many decades. But the consequence of that has been that on the one hand, there’s night little league games played all the time in a way that was not the case when I was a kid. On the other hand, candlemaking was a significant sector of the economy in the 19th century, and nobody thinks of the illumination sector as being an important sector of the economy [today]. So I think it’s almost inevitable that whatever the residuum of activities that inherently involve the passage of time and inherently involve human interaction, it will always be the case that 20 minutes of intimacy between two individuals takes 20 minutes.

And so that type of activity will inevitably become a larger and larger share by value of the economy. And then when the productivity growth of the overall economy is a weighted average of the growth of individual sectors, the sectors where there’s the most rapid growth will come over time to get less and less weight.
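
To make the arithmetic behind this point concrete, here is a small numerical sketch (a toy example of my own, not something from Summers or Nordhaus) of how a sector’s share of nominal GDP can shrink even while its real output grows rapidly, whenever prices fall faster than quantity demanded rises.

```python
# Toy illustration: a sector where technology cuts the price 20% per year
# while quantity demanded grows only 10% per year (inelastic demand), inside
# an economy whose nominal GDP grows 4% per year. All numbers are made up.

price, quantity, gdp = 1.0, 100.0, 1000.0
for year in range(0, 21, 5):
    share = price * quantity / gdp
    print(f"year {year:2d}: sector share of GDP = {share:.1%}")
    # advance five years at a time
    price *= 0.80 ** 5
    quantity *= 1.10 ** 5
    gdp *= 1.04 ** 5
```

The sector’s real output more than quadruples over twenty years, yet its share of nominal GDP falls from 10% to well under 1%. With sufficiently elastic demand, so that quantity demanded grows fast enough to outrun the falling price, the share would instead rise; whether AI ends up a growing or shrinking slice of measured GDP turns on exactly that elasticity.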

To put it another way, the economic issues about AI do not involve the capabilities of the technology in splendid isolation; instead, it’s how AI technology interacts with workers and consumers, with production and consumption of goods and services. Some tasks that workers currently do will be replaced, but possibilities for brand-new goods and services, as well as improvements in existing ones, will be created. I do not pretend to know how it will all work out in the decades to come, but I do know that in the globalized world economy, the AI cat is already out of the bag. Paul Romer (Nobel ’18) offered a pithy aphorism a few years ago: “Everyone wants progress. Nobody wants change.” Alternatively, one might say that some folks are fearful or hesitant about change until or unless society or government has full control over the direction of change and complete knowledge of its future effects–in which case, of course, it barely qualifies as “change” at all.

A Prescription for Fixing the US Healthcare System

Among the major issues not being discussed in the US presidential campaign are those facing the US healthcare system. The two main concerns are well-known.

One is high cost. The US spent about $12,500 per person on health care in 2022, according to the OECD. The second- and third-highest countries, Switzerland and Germany, spend about $8,000 per person on health care. Canada is at about $6,300 per person, about half the US level. The United Kingdom is even lower at $5,600 per person. I’m not in favor of cutting US health care spending by half or more! But high and rising health care costs for government programs like Medicare and Medicaid are part of what makes forecasts for the US budget deficit so dire. And for those of us who get our health insurance through our employers, the high and rising cost of health insurance makes it harder to get increases in our paychecks, as well.

The other main concern is the number of people who do not have access to health insurance. Census Bureau statistics suggest that 11% of working-age Americans (ages 19-64) and about 6% of children did not have health insurance in 2023. Many of these households fall through the cracks of the current system: they don’t have jobs that provide health insurance, they have enough income that they may not qualify for Medicaid, but they don’t have enough income that paying for health insurance looks affordable to them. Up to about half of the uninsured are actually eligible for health insurance at zero cost to them, whether private or public, but lack of knowledge and the administrative burdens of applying are too much for them.

So what might be done? The Summer 2024 Journal of Policy Analysis and Management has a useful back-and-forth that identifies some possibilities, issues, and tradeoffs. On one side, Liran Einav and Amy Finkelstein summarize the arguments of their 2023 book, We’ve Got You Covered: Rebooting American Health Care, which lays out their plan. But redesigning the US health insurance system involves some big leaps, and as they acknowledge, their plan may be politically impractical. Thus, Jason Furman discusses the possibilities of more incremental–but potentially still important–health insurance reform. The point/counterpoint exchange appears in that issue of the journal.

The Einav and Finkelstein plan focuses on the idea of giving all Americans access to a basic level of health care at no charge to them. They argue that when other countries have included out-of-pocket cost-sharing for patients–say, co-pays, co-insurance, or deductibles–they also end up having copious and often complex exceptions: say, for pregnant women, veterans, the unemployed, those with lower income levels, and so on and so on. Rather than create what can easily become an administrative swamp for cost-sharing, they would drop the idea for this basic level of care. They argue that “cost-sharing in universal coverage is on a collision course with itself.”

What would be included in this basic level of care? Einav and Finkelstein get a little fuzzy here, and start talking about “gray areas.”

Basic coverage must cover all essential medical care, including primary and preventive care, specialist care, and hospital care—both emergency and non-emergency. Much of what this means is obvious. Flu shots and appendectomies are in. Purely cosmetic plastic surgery is out. But there is also a large gray area of specific types of care where there are cases that can be made both for exclusion and for inclusion in basic care. Infertility treatment, dental care, vision care, physiotherapy, treatment of erectile dysfunction, various forms of long-term care—the list goes on and on. We deliberately do not weigh in here, other than to say that the starting point must be to define a budget for basic care—how much taxpayer money we are willing to devote to health care. Only then can we have a meaningful discussion about these gray area decisions. … [M]ost countries have a formal process for considering whether to cover new treatments under universal health care. We will need one too.

In addition to the question of what will be covered, there is also a question of how it will be covered. The social contract is about providing essential medical care, not providing a high-end experience. There are many non-medical aspects of care that may be desirable without being essential. The ability to see the doctor of your choice at your preferred timing and location, for example, or semi-private hospital rooms. This would be substantially limited under basic coverage. Basic coverage would likewise involve longer wait times for non-urgent care than what people with private health insurance or Medicare are currently accustomed to. Wait times would be closer to those experienced by Medicaid patients, or by veterans who receive their medical care through the Veterans’ Administration (VA).

Thus, the Einav-Finkelstein vision is that everyone would receive basic care through the same system, but they estimate that perhaps two-thirds of Americans would have supplemental insurance on top of that. To put it another way, employer-provided health insurance could pay part of the premium to the government to cover basic health care, and then the rest of the premium would be converted into top-up insurance.

They argue that we could “fulfill our social contract without tackling the other multi-trillion-dollar elephant in the room: the problem of high and often inefficient healthcare spending. … Which is a relief, since we don’t (yet) have the silver bullet for dramatically lowering healthcare spending while fulfilling the dictate to “do no harm” to the patient. Nor, we hasten to add, does anyone else. Despite what you may have heard on TV. It’s indisputable that there is a lot of waste in U.S. health care. But the old adage about advertising is also true: half of spending is wasted, we just don’t know which half.”

Jason Furman was a top economic adviser to President Obama, and thus a supporter of the Patient Protection and Affordable Care Act of 2010, which reduced the number of those without health insurance by about 22 million, at an annual cost of more than $100 billion. But one political advantage of the legislation is that for many (not all!) people who had private or government health insurance, their avenues to health care were not much changed by the legislation.

As Furman points out, it’s easier to generalize about “basic care” than to define it in detail. It’s hard to imagine a politically practical “basic care” system that includes less than Medicaid–and Medicaid already pays such low amounts that many health care providers refuse to take additional patients. How “basic” could “basic” be? And are Americans willing to tolerate “basic”? Furman notes:

As Einav and Finkelstein discuss at length, much of what is provided by the health system is “amenities,” which cost money and resources but do not contribute to better health outcomes. This distinction between the primary purpose and the amenities is rarely made in other spheres. For example, imagine a management consultant studying the $150 billion annually spent on hotel rooms in the United States. They might conclude that about $125 billion of that sum was wasted because hostels could have provided the same shelter, with a bed, access to toilet, and showers, at a much lower cost. But this recommendation would miss the point.

Furman argues for cost-sharing when it comes to health care expenses, on the grounds that people need to have some connection to what they are actually paying for health care, because if they don’t do so, they aren’t likely to think about tradeoffs. He writes: “The financing of healthcare is already very opaque with a typical family of four spending about $32,000 annually but possibly only noticing the about $3,000 they pay out of pocket or maybe also the about $6,000 they contribute to the premium for their plan. The rest of the money is in the form of foregone wages (the incidence of the employer contribution for health insurance) and taxes for healthcare.”
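
Furman’s opacity point is simple arithmetic: only a modest fraction of the total is visible to the family. A rough decomposition using his illustrative figures (the split of the hidden remainder between foregone wages and taxes is not specified in the quote, so it is left as a lump sum):

```latex
\underbrace{\$3{,}000}_{\text{out of pocket}} + \underbrace{\$6{,}000}_{\text{premium contribution}}
\approx \$9{,}000 \ \text{visible} \approx \frac{9{,}000}{32{,}000} \approx 28\%,
\qquad
\$32{,}000 - \$9{,}000 = \$23{,}000 \ \text{hidden in foregone wages and taxes.}
```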

He points out that cost-sharing for health care expenses in some form is common across other countries. Indeed, the existing level for cost-sharing on health care, as a share of household consumption spending, is actually not that different in the US than in many other countries.

Furman writes:

One thing I learned from working on the ACA [Affordable Care Act] was that no one had all or even most of the answers, especially when it comes to delivery system reform … The answer is to take more seriously how to put in place systems and processes that can discover better answers over time, not simply assume that one knows them in advance—let alone knowing whether they will be politically or socially sustainable. …

But it is also wrong to ignore the fallibility of government or the people that implement its policies either. Medicare is a poorly designed insurance plan that would not even qualify as insurance under the ACA mandate because of its unlimited cost sharing (despite having first dollar coverage for many services), as a result it is basically unusable as a sole insurance plan—with 90% of beneficiaries supplementing it with something else. It took the federal government decades to add a prescription drug benefit to Medicare, an omission that would have driven any private insurer out of business. And even when government plans have come in under cost, like the prescription drug benefit, a big part was because of innovations that were unanticipated or underestimated by the creators of these plans, like tiered formularies for prescription drugs.

I do not know the answer, but it should involve some of what is best about markets while remedying what is worst about them … It also needs to do what is best about the government while building in a process of innovation and change, something like the Center of Medicare and Medicaid Innovation Center. And the most vexing issues in healthcare are how to balance its cost against the many other desires and priorities people have—so a mechanism that makes costs and tradeoffs more transparent is essential to ensuring the competition and innovation process will lead to better results over time.

I don’t have a one-size-fits-all answer for how to fix the US healthcare system either. But I do think it’s important that people have a better sense of what health insurance actually costs. One proposal with cost estimates from the Congressional Budget Office would be to look at the range of employer-provided health insurance across employers, and figure out the median amount provided, which CBO estimates at “$8,900 a year for individual coverage and $21,600 a year for family coverage.” That median amount would continue to be excluded from taxation. But for health insurance plans costing more than this amount, the additional amount would be counted as income to the worker. The CBO estimates that this would raise more than $100 billion per year by 2027.
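
As a minimal sketch of how a capped exclusion would operate, the snippet below uses the CBO median figures quoted above; the 22% marginal tax rate and the function name are assumptions for illustration only, not part of the CBO proposal.

```python
# Illustrative only: tax treatment of employer-provided health insurance under a
# capped exclusion. The $8,900 / $21,600 medians are from the CBO estimate cited
# above; the 22% marginal tax rate is an arbitrary assumption for the example.

EXCLUSION_CAP = {"individual": 8_900, "family": 21_600}

def taxable_excess(premium: float, coverage: str, marginal_rate: float = 0.22) -> tuple[float, float]:
    """Return (amount counted as income, extra tax owed) for a given plan premium."""
    excess = max(0.0, premium - EXCLUSION_CAP[coverage])
    return excess, excess * marginal_rate

# A $25,000 family plan: $3,400 above the cap is treated as income to the worker.
print(taxable_excess(25_000, "family"))      # (3400.0, 748.0)
# A plan at or below the cap is unaffected.
print(taxable_excess(8_000, "individual"))   # (0.0, 0.0)
```

In other words, only the portion of a premium above the median would show up as taxable income to the worker, which is where the estimated revenue comes from.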

Employment Levels for Low-Income Women and Men Since the 1990s

There is a widespread sense, not just in the United States but in European countries like Denmark, that government assistance for low-income households should be linked to participation in the labor force. So what do employment patterns look like for low-income Americans–that is, those who are also eligible for various means-tested government assistance programs? Lisa Barrow, Diane Whitmore Schanzenbach, and Bea Rivera lay out some basic data patterns in “Work, Poverty, and Social Benefits Over the Past Three Decades” (Federal Reserve Bank of Cleveland, Working Paper No. 24-22, October 2024).

The next two figures show employment rates over time, first for women and then for men. The employment rates are then divided up by whether the household is low-income, and whether the household includes children.

For women, the solid lines show the employment rate for women with no children (green line) and the employment rate for women with children (orange line). You can see that back in the early 1990s, the employment rate for women without children was substantially higher, but the gap shrinks, and since about 2010 the employment rates are about equal.

The dashed lines focus on those with low incomes (defined here as less than 200% of the federal poverty line). Employment rates for those with low incomes are below the rates for the entire population. Back in the early 1990s, low-income women without children had higher employment rates. But after the passage of the welfare reform act back in 1996 under the Clinton administration–a law that emphasized work requirements for welfare recipients–low-income women with children consistently have higher employment rates than low-income women without children.

The next figure shows the patterns for men. The solid lines show overall employment rates for men without children (green line) and for men with children (orange line). Unlike the situation for women, where these two lines were much the same, men without children have much lower employment rates, and the gap is growing. The dashed lines focus on low-income men. Low-income men with children have much higher employment rates than low-income men without children–and the gap for men is much larger than the gap for women shown above.

One other pattern is worth noting here. Many of these lines show a relatively large decline in employment rates from the late 1990s up to about 2010–but since about 2010 (the tail end of the Great Recession), the employment rates for both the entire population and the low-income population are either flat or even up a little bit.

So what explains these patterns? In particular, what might explain the higher employment rates among low-income adults with children? In the working paper, Barrow, Whitmore Schanzenbach, and Rivera consider a bunch of factors, including demographic factors like family composition, education, and race/ethnicity, and public policy factors like a shift away from cash welfare payments for the poor and toward payments that are delivered through tax credits and thus linked to work, like the Earned Income Tax Credit and the Child Tax Credit. They write:

We find that the characteristics of low-income adults have changed over time. They have become more highly educated, less likely to be married, and the share that is Hispanic has increased. We investigate to what extent these shifts in characteristics can help explain changes in employment and find that little employment change can be explained by these factors. …

Our results contribute to a growing literature documenting the shift in the structure of social benefits for non-elderly adults, especially those with children, to reward and encourage work. Low-income families with children and substantial earnings have received more income—both in levels and as a share of their total incomes—from social benefits in the last decade than they did 30 years ago. On the other hand, social benefits programs are little changed for low-income families without children.

Of course, any working paper is far from the final word on a big subject. But the patterns are consistent with a belief that the shift in social safety net programs toward rewarding work, for adults in households with children, is encouraging work effort for that group.

The Puzzle of Japan’s Economy: When Productivity Gains Are Outside National Borders

In total size, Japan’s economy is fourth-largest in the world, just behind Germany for third-largest. In per capita GDP, Japan is ahead of Spain and South Korea, although well behind Italy and France. With a life expectancy at birth of 84 years, Japan has one of the highest levels in the world. Clearly, Japan has some considerable economic strengths.

But there is a puzzle here. In the late 1980s and early 1990s, Japan’s economy experienced a dramatic boom-and-bust in stock market and real estate prices. For example, the Nikkei stock market index rose from about 10,000 in 1984 to almost 38,000 in 1989, and then fell back to 17,000 by 1992. In the early 2000s, the Nikkei had fallen to under 10,000, and it was under 10,000 in 2012, too. Since then, the Nikkei has climbed again, and in early 2024–35 years later–it exceeded the level it had reached in 1989. In short, Japan’s economy crashed about three decades ago and growth has been slow and halting ever since.

Japan faces ongoing demographic challenges, too. The “working-age” population in Japan, from ages 15-64, peaked back in the mid-1990s at about 87 million, but now has fallen to 73 million. Japan’s population is aging. Back in the mid-1990s, Japan had about five working-age people for every person over the age of 65; now, Japan has about two working-age people for every person over the age of 65.

Japan has very high levels of government debt, too. According to IMF calculations, the US ratio of government debt to GDP is about 120%; Japan’s ratio of government debt to GDP is about 250%.

So how is Japan’s economy adjusting to these underlying factors? Dany Bahar, Guillermo Arcay, Jesus Daboin Pacheco, and Ricardo Hausmann explore some of the underlying patterns in “Japan’s Economic Puzzle” (The Growth Lab at Harvard Kennedy School, CID Faculty Working Paper No. 442, revised July 2024). They focus in particular on the evolution of Japan’s interaction with the global economy. They write:

Our main findings put together suggests that in response to the enormous domestic challenges in the economy, Japanese firms have sought to offset these constraints by expanding their operations internationally through foreign investments. By investing in foreign markets, Japanese firms can access larger and more diverse labor pools, enabling them to continue growing despite domestic labor shortages. These Japanese investments abroad, accompanied by the unique accumulated knowledge of the Japanese economy (i.e., technology, best practices, and more), has resulted in very high returns to these investments. The subsequent increase in wealth to the economy has inevitably resulted in a domestic expansion into non tradable, less productive, sectors of the economy which lowers aggregate productivity growth. Overall, we argue, the sluggish productivity growth in Japan is a result of these dynamics.

In the last few decades, Japan’s share of global exports of goods has plummeted, in substantial part because of China’s rising share of global exports of goods. From a US perspective, a lot of the imported goods that were made in Japan back in the 1970s and 1980s are now being made in China.

But when it comes to exports of services, Japan has continued to do well. In particular, the “service” that Japan exports is often intellectual property: that is, licenses to use Japanese patents elsewhere.

In addition, Japanese firms are building up their investments abroad. One way to think about this is that Japan’s firms are dealing with the declining labor force in Japan by finding workers in other countries. The authors write:

Japan has significantly increased its net foreign asset positions, particularly after the turn of the century. In fact, between 1996 and 2022, Japan nearly quadrupled the value of its assets abroad from USD 2.7 trillion to USD 10.3 trillion. An important driver of this growth is reflected in Japan’s stock of outward direct investment, which increased by a factor of almost 8 from USD 263 billion to USD 2.1 trillion during the same period. Moreover, the returns to those direct investments have grown significantly, too, with abnormal returns consistently much larger than for any other investment positions. The data shows that dividends stemming from direct investment abroad … [have] grown by a factor of almost 15 from USD 14 billion in 1996 to USD 206 billion in 2022.

Putting this together, the authors argue that Japan’s companies are involved in high productivity growth–it’s just that a lot of that productivity growth is happening outside the borders of Japan, in industries where Japanese firms end up exporting to the rest of the world. Japanese workers who work for a company that is involved in international trade and has rapid productivity growth have wages rising more quickly than those who work for companies in non-tradeable sectors like retail, hotels, and personal services, which have lower or even negative productivity growth.

Is this Japanese economic model sustainable? The authors speak gently of “challenges.” With a declining domestic labor force and high-productivity firms operating abroad, Japan’s economic future seems intimately tied to a combination of technological and managerial know-how, along with global supply chains. Success for this economic formula requires that Japan’s export-oriented firms remain on the technology frontier, which can be easier said than done in the 21st century global economy. Also, as the size of Japan’s domestic workforce declines, it would help if Japan’s low-productivity domestic production sectors could find ways to make better use of technology to improve productivity.

A Surge in US R&D Spending

From a conceptual point of view, the economics of research and development is the opposite of pollution. When a private party carries out an economic activity that leads to pollution, the private party gets the economic benefit, but the broader society bears the costs (in the jargon, a “negative externality”). However, when a private party carries out research and development, the private party benefits to some extent, but the additional benefits of the new knowledge spill over to the rest of the economy (a “positive externality”). Thus, it makes sense to have public policies that discourage pollution, but that encourage research and development.
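
In textbook terms (a standard statement of the externality logic, not something specific to the NSF report discussed below), the asymmetry can be written as a wedge between private and social marginal values:

```latex
% Pollution (negative externality): marginal social cost exceeds marginal private cost,
% so an unregulated market produces too much.
MSC = MPC + \text{marginal external damage} > MPC

% R&D (positive externality): marginal social benefit exceeds marginal private benefit,
% so an unsubsidized market invests too little.
MSB = MPB + \text{marginal spillover benefit} > MPB
```

The corrective policy targets the wedge in each case: a tax or regulation scaled to the external damage for pollution, and a subsidy (such as an R&D tax credit or direct federal funding) scaled to the spillover benefit for research.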

I’ve long and frequently argued for a substantial increase in US R&D spending (for example, here, here, here, and here), so it seems appropriate to note that there has been a substantial increase in the last decade, driven by US business spending on R&D. Here are some graphs showing overall patterns of US R&D spending from “Trends in U.S. R&D Performance” (National Science Foundation, May 2024).

For a long time, I told the story of US R&D funding in this way: There was a big run-up in US R&D spending in the 1950s and into the 1960s, driven largely by US government R&D spending typically aimed at military and space programs. However, federal R&D spending as a share of GDP began to sag after the 1960s, while business R&D spending as a share of GDP increased.

These two forces more-or-less counterbalanced each other for several decades, so that total R&D spending as a share of GDP (blue line) hovered between about 2.3% and 2.7% of GDP from the early 1980s up through about 2013. But then you see a substantial change. Although federal support for R&D as a share of GDP continues to lag, business spending on R&D takes off, and pushes US R&D spending up to about 3.5% of GDP. With US GDP in 2024 at around $29 trillion, a rise of about one percentage point of GDP means that about $290 billion more is being spent on R&D this year than would have been spent if R&D had stayed in that lower range.

The same pattern emerges if you look at dollar amounts of R&D spending. The lines here are not adjusted for inflation, or for economic growth, so the picture is in some ways a little misleading. But you can see that government and industry R&D spending were similar in size as recently as the second half of the 1980s. Since then, the rise in US R&D spending has been driven by business R&D spending–especially in the last decade or so.

This figure shows the share of R&D funding coming from different sources. As you can see, the share of R&D coming from business spending has risen sharply, and is now approaching 80%.

This figure offers an international comparison. The blue bars show national R&D spending in absolute levels (measured on the left-hand axis): thus, the big economies like the United States, the EU-27 taken as a whole, and China have the biggest bars. The red diamonds show national R&D spending as a share of GDP (measured on the right-hand axis). As you can see, a couple of smaller economies, South Korea and Taiwan, spend more as a share of GDP than the United States. But in general, the US has both the highest level of absolute R&D spending and also–thanks to the recent run-up in R&D spending by business–one of the highest levels of R&D spending as a share of GDP. To me, the gap between the US and the EU-27 economies in R&D spending is especially striking.

Finally, one concern sometimes expressed is that when it comes to R&D, business-funded spending can be more focused on the “D” of developing products for near-term sales in the market, and less on the “basic” research that can be so important for longer-term progress. On this point, here’s a figure from the “Analysis of Federal Funding for Research and Development in 2022: Basic Research” (National Science Foundation, August 15, 2024).

As the figure shows, government used to dominate funding for basic research, accounting for 70% of the total in the 1960s and 1970s, and for 60% of the total as recently as the early 2000s. But the rise in overall US business R&D spending has boosted “basic” research spending by business as well. Now, it looks as if basic R&D spending by business is about to exceed that from the federal government.

One of the recent puzzles of the global economy is that the US economy seems to just keep growing, albeit at a moderate rate, while many other high-income economies like those in Europe, as well as Japan and Canada, seem stuck in slower growth patterns. My guess is that the surge in US R&D spending is part of the explanation for that pattern. Moreover, a higher level of R&D spending by business suggests that US firms are seeing opportunities to capitalize on their R&D efforts in the ever-changing and evolving US economy, while many European firms may not be seeing the same willingness and opportunity for change within their national markets.

A Nobel for Acemoglu, Johnson, and Robinson: Institutions and Prosperity

The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2024 has been awarded to Daron Acemoglu, Simon Johnson and James Robinson “for studies of how institutions are formed and affect prosperity.” Each year, the Nobel Committee helpfully publishes both a “Popular information” overview of the award and a “Scientific Background” essay that goes into greater depth. The Popular Information starts with the kind of basic fact about the world we live in that demands attention.

The richest 20 per cent of the world’s countries are now around 30 times richer than the poorest 20 per cent. Moreover, the income gap between the richest and poorest countries is persistent; although the poorest countries have become richer, they are not catching up with the most prosperous.

Pause for a moment to contemplate that 30-fold difference. When discussing differences in average incomes between, say, the US and France or Sweden or Japan or other high-income countries, one can suggest reasons why the differences in income levels may not reflect actual differences in the underlying standard of living for the average person. But when the difference is 30-fold, it means that the lower-income locations have less health, less education, less living space, less leisure, and dramatically less access to the banquet of goods and services available in high-income countries. It also means that many of the people in the lower-income countries will not be fans of a “de-growth” agenda: instead, they would like to have–either in their own country or by migrating–a standard of living 30 times higher than they have at present.

What factors can explain these extraordinarily large differences? You can point to geographic factors like arable land, natural ports, navigable rivers, natural resources, and a temperate climate. But as you enumerate possible reasons, you find yourself pointing to ways in which some countries have been able to build economies based on innovation and technology, which in turn are based on widespread education, infrastructure, a sound financial system, and a rule of law. In a word, you find yourself talking about “institutions.”

Some of the most vivid effects of institutions are visible in satellite photos, like the pictures of the Korean peninsula at night, with lights shining from South Korea and North Korea nearly in darkness, or daytime pictures of the border between Haiti and the Dominican Republic, where the Haitian side of the border is denuded and deforested by poor people desperate for firewood, while the Dominican side remains verdant.

But “institutions” is so broad a term that it’s not immediately clear what it includes or what it leaves out, so it’s not clear how to measure it. It’s also not clear how growth-promoting institutions are formed, and whether the institutions precede economic growth, or co-evolve with growth, or result from growth. It’s not clear whether institutions that accompany economic success in one place can be transplanted to other locations.

These questions are hard to tackle such that the argument is based on quantitative evidence, not just storytelling. Economists have been trying for a long time: indeed, the Nobel Prize in economics back in 1993 was given to Robert W. Fogel and Douglass C. North “for having renewed research in economic history by applying economic theory and quantitative methods in order to explain economic and institutional change.”

So what’s new about the Acemoglu, Johnson, and Robinson analysis? The Nobel committee writes: “Broadly, their contributions are twofold. First, Acemoglu, Johnson, and Robinson have made significant progress in the methodologically complex and empirically difficult task of quantitatively assessing the importance of institutions for prosperity. Second, their theoretical work has also significantly advanced the study of why and when political institutions change. Their contributions thus entail substantive answers as well as novel methods of analysis.”

Here’s a glimpse of these two types of contributions. On “assessing the quantitative importance of institutions for prosperity,” some of their best-known work is based on the historical experience of colonialism. Acemoglu, Johnson, and Robinson argue in a broad sense that there are two types of colonial institutions: those that encourage property rights and those that are “extractive.” They further argue that colonial powers will choose whichever approach provides the greatest wealth for themselves.

Consider two factors. One is whether the population of the area being colonized is more or less dense. If the population is dense, then the colonial power is more likely to use “extractive” institutions to take from the people; if less dense, then the colonizers were more likely to send people from their own country to live in the country being colonized, and those settlers would demand property rights and more inclusive institutions before they were willing to go. A second factor is the disease environment of the country being colonized. If the country was prone to diseases like malaria, then the colonizing country would be less willing to send settlers, and more likely to choose “extractive” institutions; if the country was less prone to diseases, then the colonizing country would be more likely to send settlers, who again would demand more inclusive institutions before they were willing to go.

The most striking economic research is able to make unexpected predictions. This work suggests that areas which were already fairly prosperous and heavily populated before colonization were more likely to end up with “extractive” institutions, while areas with less previous success and lower population densities should end up with more inclusive institutions. Thus, over a sustained period of a century or more, if the institutions of colonialism matter, one should see a “reversal of fortune”: that is, the places that were more economically successful at the time of colonization should later be overtaken by the places that had been less economically successful.

This very brief sketch suggests the challenges of this research agenda. You need to collect 19th-century data on population densities and disease mortality. You need to collect data on many kinds of “institutions” and classify them as extractive or inclusive. You need to draw statistical connections from those historical conditions to economic outcomes today. Follow-up work also looks for historical episodes other than colonization to which this general approach might be applied.
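As one illustration of what “drawing connections” might look like in practice, here is a minimal sketch in Python, using entirely synthetic data, of the instrumental-variables strategy this literature is best known for: treating historical disease (settler) mortality as an instrument for the quality of institutions when explaining income per capita today. The variable names and numbers are invented for illustration; this is a sketch of the general method, not a reproduction of the authors’ data or estimates.

```python
# Stylized two-stage least squares (2SLS) with synthetic data; all names and
# values are hypothetical illustrations, not the authors' actual variables.
import numpy as np

rng = np.random.default_rng(0)
n = 64  # number of former colonies in this made-up sample

# Instrument: historical settler/disease mortality (log scale, synthetic)
log_mortality = rng.normal(4.5, 1.0, n)

# Endogenous regressor: an "institutions" index, partly driven by the instrument
institutions = 8.0 - 0.8 * log_mortality + rng.normal(0.0, 0.8, n)

# Outcome: log GDP per capita today, depending on institutions plus noise
log_gdp = 2.0 + 0.9 * institutions + rng.normal(0.0, 0.5, n)

def ols(y, X):
    """Ordinary least squares via numpy's least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)

# First stage: predict institutions from the instrument (plus a constant)
Z = np.column_stack([ones, log_mortality])
first_stage = ols(institutions, Z)
institutions_hat = Z @ first_stage

# Second stage: regress log GDP per capita on the *predicted* institutions
X2 = np.column_stack([ones, institutions_hat])
second_stage = ols(log_gdp, X2)

print("First-stage slope (mortality -> institutions):", round(first_stage[1], 3))
print("2SLS estimate (institutions -> log GDP):", round(second_stage[1], 3))
```

The identifying idea is that colonial-era mortality shaped institutions through settlement patterns but should not affect modern income through any other channel, which is what allows the second stage to attribute the estimated relationship to institutions rather than to reverse causation.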

If one looks at the period after colonization, an obvious question is how institutions might be chosen and changed. Say that there is a government in power which uses extractive institutions to amass wealth for elite insiders at the expense of ordinary citizens. What might cause this to change? Acemoglu, Johnson, and Robinson argue that the heart of the difficulty here is a “commitment problem”: that is, it can be hard for political leaders to keep their promises. The Nobel committee writes:

A promise by the elite or an autocrat to implement welfare-improving reforms today that will benefit the populace tomorrow is typically not credible because the elite have an incentive to renege on their promise later and act in their short-term interest. Similarly, promises by those advocating for political reform, who are willing to compensate the current elite for agreeing to it peacefully, are not credible because the incentives to compensate the former elite once they are no longer in power are also not credible. Social conflict combined with the credibility problem can even cause the elite to block technological innovation and change, if such changes are perceived as threatening their hold on power.

This approach has become a baseline for subsequent research, in part because it created a common framework that incorporates the earlier main explanations of how modernization occurred. Again, the Nobel committee explains:

It is instructive to put the contribution of Acemoglu and Robinson in perspective and relate it to the literature that already existed in the late 1990s. … Recall that the standard answer to why elites gave up the control of economic and political institutions was embodied in modernization theory and related explanations (Lipset, 1959, 1960). According to these theories, the process of socioeconomic development would eventually bring about democratization, essentially as a by product of economic progress. As societies become richer, this wealth brings about rising education, a more plentiful middle class, and gradually milder conflict over income inequality, factors which all favor democratization. A second approach, which challenged modernization (and other structural) theories, argued that democratization is instead the by-product of patterns of strategic interaction among political elites. Personal skills, luck, or strategic mistakes are, according to this approach, part and parcel of what democratization is about. … While the second view thus holds that democracy is usually granted or undermined from above, a third approach to explaining democratization, by contrast, points to the importance of social forces in society, most importantly different class actors (Moore, 1966). The key assertion in this tradition is that democracy is imposed from below by the people through popular mobilization (Rueschemeyer et al., 1992). According to this view, incumbent authoritarian elites would not care to enact reforms or bargain with the democratic opposition if they did not fear the masses or an imminent threat of revolution.

Acemoglu and Robinson integrated these three traditions by providing structural conditions (such as economic crises), relating these to preferences over institutions and social forces (such as the threat of revolution), and by providing the conditions under which strategic elites chose to reform (such as extending the electoral franchise). This is one of the reasons why their approach has become so influential.

Ultimately, they argued for a “window of opportunity” approach to the evolution toward democracy and more inclusive institutions. Much of the time, the commitment problems described above would block reform. But certain kinds of economic and political stresses could fracture the forces blocking reform, at least for a time, and thus open a window for change.

Again, one value of a theory is that it can make sense of fact patterns that might not otherwise be obvious. For example, later work argued that countries which enter democratization often experience a fall in GDP beforehand. This pattern suggests that it isn’t economic growth which leads to democratization, but instead economic stresses breaking up existing coalitions.

Nobel prizes in economics are often given not because they provide a final answer, but because they launched volumes of future research. By that standard, the work by Acemoglu, Johnson, and Robinson surely qualifies for the award.

Interview with Paul Krugman: Economic Geography and Mysteries of Productivity

Cardiff Garcia of the Economic Innovation Group interviews Paul Krugman at The New Bazaar website (October 9, 2024). The interview has a number of points of interest. Here are two themes that caught my eye.

The study of “economic geography” focuses on why economic activity may tend to cluster, or to spread out, or to happen in certain locations. The general theory is that there are “agglomeration economies”: clusters of certain kinds of firms, workers, and suppliers benefit from being close together, perhaps along with clusters of consumers. On the other side, clusters of economic activity can also lead to congestion, crime, and even disease. Thus, there is a push and pull in the clustering of economic activity over time. Given the improvements in information and communications technology of the last few decades, should economic activity be expected to spread out, or to become more centralized? Krugman notes:

I mean the idea that agglomeration economies of one form or another exist is obviously not new … There’s forces pulling things together and forces pulling them apart. And the balance can tip one way or another. In fact, over the past 200 years, it has tipped first one way, then the other, then back again.

And the modern forms of agglomeration are different from the ones that prevailed in the 19th century. I like to say that the models that I was writing down 30 years ago had this kind of steampunk feel. They all kind of were very much focused on manufacturing and on industrial clusters, and we all lavished attention on these great stories — like the detachable collar and cuff industry of Troy, New York, and that sort of thing. And which mostly have gone away in the United States, although not totally — they do exist to some extent even in manufacturing, but these days if you really want to find old style industrial clusters, you go to China.

So if you actually look at — there’s a variety of measures — but basically, in the United States, there was a lot of regional convergence, convergence in incomes, convergence in basically regions becoming more similar, from the 1920s up until about 1980. Then in 1980, they started pulling apart again, and you started to see metropolitan areas with highly educated workforces pulling in even more educated people, pulling in even more of the information economy — and stranding regions that didn’t have those preconditions. … [F]or prep for a future conference, I’ve been looking at just kind of within New York State, and looking at greater Buffalo versus greater New York City. Buffalo was not all that poor or backward compared to New York City in 1980, and now it’s vastly poorer. …

So we’re back in a world where we have these extremely localized clusters. And modern communication technologies, Zoom, work from home, they make it possible to do some things without being in the place, but they’re not a full substitute. And in a rich society other things tend to matter. Particularly for high skill, high pay workers, you need amenities. High tech workers are not going to move to someplace in the middle of the country, even if they have excellent internet access, because where are the good restaurants? Where are the live concerts? So in some ways the fact that we’re rich enough that people can make decisions on that basis matter. So I think we’re in some ways back to the kind of unequalizing development that we had in the late 19th century.

While it’s agreed (among economists) that productivity growth is essential to growth in the standard of living, the exact recipe for sustained productivity growth remains elusive. Indeed, standard calculations of productivity start by measuring what part of overall output growth can be accounted for by changes in hours worked, the skill level of workers, and capital investment; “productivity” is then measured as the leftover or residual amount of output growth that could not be explained by those other factors. Because the economist Robert Solow did some of the classic work in this area, the productivity residual is sometimes called the “Solow residual.”
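In growth-accounting terms this is just an identity. As a minimal sketch, with my own notation, let Y be output, K capital, L (skill-adjusted) labor input, and α capital’s share of income, conventionally taken to be roughly one-third:

\[
\frac{\Delta Y}{Y} \;=\; \alpha\,\frac{\Delta K}{K} \;+\; (1-\alpha)\,\frac{\Delta L}{L} \;+\; \frac{\Delta A}{A},
\]

where \(\Delta A / A\) is the Solow residual (total factor productivity growth), so that

\[
\frac{\Delta A}{A} \;=\; \frac{\Delta Y}{Y} \;-\; \alpha\,\frac{\Delta K}{K} \;-\; (1-\alpha)\,\frac{\Delta L}{L}.
\]

The residual is whatever output growth remains after the measured contributions of capital and labor have been subtracted off, which is exactly why it is so hard to engineer directly.

Here’s Krugman on the mystery that is productivity growth: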

But if you look at a chart of U.S. potential GDP growth over the past 50 years — we had some pretty big political swings in there, big changes in tax policy. If you didn’t know that there have been changes of administration, changes in tax policy, you would never guess. It’s pretty much just a flat line. The reason is that economic growth is largely driven by the Solow [productivity] Residual. And who knows how to make that change very much.

If someone says “I have a policy that could raise potential GDP growth by a quarter of a percentage point,” I’d say, “okay, I could possibly believe that, show me the details.” If somebody says they could raise it by one percentage point, I think that’s crazy. Nobody knows how to do that.

My favorite Bob Solow quote, although there are so many of his … he was actually talking about Britain lagging behind post World War II, but it applies to lots of things, he said, “every attempt to explain this ends up in a blaze of amateur sociology.” That, in the end, we really just don’t know very much about why countries have different rates of economic growth. We still don’t know why productivity slowed down in the early ’70s. We kind of know why it had a bump for a while around IT [information technology], mid ’90s to mid 2000s. But ultimately, talking about innovations, there’s a big mystery now. It feels like we’ve had a lot of technological change these past 16, 17 years, and yet, if we believe our numbers, which maybe we shouldn’t, but if we believe our numbers, total factor productivity growth has been really pretty lousy for that whole period.