In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”
The final product of the poet and writer Maya Angelou (1928-2014) is often exquisite, and part of her genius is her care over language. Here's a comment about self-editing and being edited from a 1990 interview with George Plimpton ("Maya Angelou, The Art of Fiction No. 119," Paris Review, Issue 116, Fall 1990).
I write in the morning and then go home about midday and take a shower, because writing, as you know, is very hard work, so you have to do a double ablution. Then I go out and shop—I’m a serious cook—and pretend to be normal. I play sane—Good morning! Fine, thank you. And you? And I go home. I prepare dinner for myself and if I have house guests, I do the candles and the pretty music and all that. Then after all the dishes are moved away I read what I wrote that morning. And more often than not if I’ve done nine pages I may be able to save two and a half or three. That’s the cruelest time you know, to really admit that it doesn’t work. And to blue pencil it. When I finish maybe fifty pages and read them—fifty acceptable pages—it’s not too bad. I’ve had the same editor since 1967. Many times he has said to me over the years or asked me, Why would you use a semicolon instead of a colon? And many times over the years I have said to him things like: I will never speak to you again. Forever. Goodbye. That is it. Thank you very much. And I leave. Then I read the piece and I think of his suggestions. I send him a telegram that says, OK, so you’re right. So what? Don’t ever mention this to me again. If you do, I will never speak to you again. About two years ago I was visiting him and his wife in the Hamptons. I was at the end of a dining room table with a sit-down dinner of about fourteen people. Way at the end I said to someone, I sent him telegrams over the years. From the other end of the table he said, And I’ve kept every one! Brute! But the editing, one’s own editing, before the editor sees it, is the most important.
Remember, this quotation is just Angelou talking, not Angelou after a bout of rigorous self-editing. As a person, I’m not confident that I could ask Angelou to change even a punctuation mark; as someone who has worked as an editor for several decades, albeit for an academic journal of economics, I confess that I would be unable to avoid offering suggestions.
My tradition on this blog is to take a break (mostly!) from current events in the later part of August. Instead, I pre-schedule daily posts based on things I read during the previous year about three of my preoccupations: economics, editing/writing, and academia. With the posts pre-scheduled, I can then relax more deeply when floating on my back in a Minnesota lake, staring up at the sky.
But that basic story lacks detail. James Feigenbaum and Daniel P. Gross have been digging into two aspects: 1) What happened to the women who were displaced from switchboard operator jobs? And 2) for AT&T, what determined the speed and timing of its investment in the automation that replaced switchboard operators?
Feigenbaum and Gross tackle the first question in "Answering the Call of Automation: How the Labor Market Adjusted to Mechanizing Telephone Operation" (Quarterly Journal of Economics, August 2024, 139:3, pp. 1879–1939). They focus on the period between 1920 and 1940, with data on 3,000 cities. During this period, over 300 cities switched to mechanized switchboards, making it possible to compare the labor market for women across cities that switched sooner, switched later, or did not switch at all. They find:
As a first step, we show that after a city was cut over to mechanical [switchboard] operation, the number of 16- to 25-year-old women in subsequent cohorts employed as telephone operators immediately fell by 50% to 80%. These jobs made up around 2% of employment for this group, and even more for those under age 20—and given turnover rates, this shock may have foreclosed entry-level job opportunities for as much as 10% to 15% of peak cohorts. The effect of this shock on incumbent operators was to dispossess many of their jobs and careers: telephone operators in cities with cutovers were less likely to be in the same job the next decade we observe them, less likely to be working at all, and conditional on working were more likely to be in lower-paying occupations. In contrast, however, automation did not reduce employment rates in subsequent cohorts of young women, who found work in other sectors—including jobs with similar demographics and wages (such as typists and secretaries), and some with lower wages (such as food service workers). … Though wage data for this era are more limited, using available data we also do not find evidence that local labor markets re-equilibrated at significantly lower wages.
The stability of both employment rates and wages is consistent with demand growing for these categories of workers in other sectors of the economy—and, in turn, with the predictions of Acemoglu and Restrepo (2018), who suggest that firms will endogenously develop new uses for labor when automation makes it abundant. Buttressing this interpretation, our evidence indicates some occupations expanded to new sectors of local economies after cutovers—that is, the emergence of new work (Autor et al. 2024). Taken together, these results suggest that although existing workers may be exposed to job loss, local economies can adjust to large automation shocks over medium horizons.
The overall message is a conventional one for economists. Yes, labor markets do get disrupted, sometimes severely. There is a case here during the transition for government programs to provide some combination of unemployment insurance and training for other jobs. But the ultimate answer is growth of other employment opportunities.
Feigenbaum and Gross discuss the automation of switchboard operations from the perspective of AT&T in “Organizational and Economic Obstacles to Automation: A Cautionary Tale from AT&T in the Twentieth Century” (Management Science, published online in advance of being assigned to a specific issue, February 27, 2024). They point out a puzzle of timing: mechanical call switching technology is invented in the 1880s. However, AT&T doesn’t install the first dial telephones until 30 years later at the Chesapeake & Potomac Telephone Co. in Norfolk, Virginia, in 1919, and the process of phasing out all of the switchboard operators isn’t completed until 1978. Why did it take so long?
Telephone systems were initially designed to have operators physically connecting calls—a task known as “call switching”— putting them at the center of both the telephone network and AT&T’s production system. Manual switching, in turn, shaped choices and activities across the business, including service offerings, plant and equipment, operations, prices, accounting, billing, customer relations, and more.
Although manual switching served early telephone networks well, expansion revealed its limits, as its complexity rose quickly in large markets with billions of possible connections, and switchboards became system bottlenecks. As AT&T grew, its service quality thus fell, and operator requirements exploded: by the 1920s AT&T was the largest U.S. employer, with operators over half its workforce. Company records show the limits of manual switching were known as early as the 1900s, when automatic technology was already being tested—yet it took AT&T several more decades to adopt it widely. We show in this paper that automation was hindered by interdependencies between call switching and the rest of AT&T’s business: the mechanization of call switching required complementary innovation and adaptation across the firm, which were only resolved over time.
In retrospect, one wonders if AT&T would have moved faster to mechanical switching if it hadn't been a monopoly! But there is also a more charitable lesson here. Big innovations require widespread organizational adjustment. Such adjustments often require a substantial up-front investment–some of it monetary, some of it an investment in organizational change in business practices. There are firms and government agencies out there that are still adjusting to information technology and the web, and just starting to make widespread use of tools that have been around for a decade or more. Even if the new artificial intelligence innovations turn out to be the greatest thing since sliced bread, it will take years and decades for them to filter through the economy–indeed, after discovering a new technology, one often has follow-up discoveries of additional uses for that technology.
Russia had of course already invaded Ukraine back in 2014, but in February 2022 it dramatically escalated the earlier invasion.
The U.S. and Ukraine’s allies met Russia’s invasion two years ago with an unprecedented set of sanctions. They put a price cap on Russian oil exports, froze $300 billion worth of Russian foreign exchange reserves, and severed many of the links between Russia’s financial institutions and the rest of the world.
However, Russia’s economy seems to be doing pretty well. The IMF recently estimated economic growth in Russia of 3.2% this year. Russia seems to have found alternative pathways to export its oil and to buy imports, and the government has been stimulating the economy with higher defense spending in response to the war with Ukraine.
So have the economic sanctions against Russia essentially failed? Or do they need to be strengthened and redoubled? The Brookings Institution held a seminar on this subject in May, with nine experts presenting various views. (The quotation above is from the Brookings website, introducing the symposium). For example, one perspective is that if Russia was more connected to the international financial system, it might be easier for capital to flee Russia’s economy–which could be more painful to Russia’s power structure.
Here, I’ll focus on two of the essays with opposing views. Peter E. Harrell asks, “Has the US Reached ‘Peak Sanctions’?” As he notes, sanctions have become a very popular foreign policy option. After all, they offer at least the appearance of doing something forceful, but without using military force, and while appealing to protectionist and anti-globalization instincts:
The last decade has been a golden age of sanctions. The U.S. dramatically expanded the number of people, companies, and foreign government instrumentalities it sanctions each year: In 2022 and 2023 the U.S. imposed more than three times as many sanctions annually as it did a decade earlier. U.S. export controls show a similar trend. By the early 2010s, sanctions had become a tool of “first resort” for a dizzying array of international policy problems from the Iranian nuclear program to global human rights abuses. Sanctions policymakers have been remarkably innovative, designing new ways to target trade and financial flows. The question today is whether the popularity and utility of these measures will continue or whether we have reached “peak sanctions” and will see a future of declining impact, even if the measures remain politically popular.
The sanctions do seem effective when targeted at changing the behavior of individual companies, and the effects of sanctions can be seen in trade flows. But when it comes to whether sanctions can actually achieve foreign policy goals, the record is decidedly mixed. Sanctions against Russia, Iran, Venezuela, and China–or over the long run, sanctions against Cuba–haven’t noticeably altered the policies of those countries.
Harrell notes: “Political scientists and historians consistently find a sanctions success rate of around 40%.” But that partial success rate is based on historical data, and the more recent wave of sanctions seems less effective.
Rhetoric like “maximum pressure,” “crippling,” and “harshest ever” sanctions, combined with promises that sanctions can bring about profound changes in adversaries’ behavior at little or no cost to the U.S., set unrealistic expectations and make it hard to change course even when, as with Cuba, policy has objectively failed. Much as American leaders have learned to avoid over-promising military interventions, political leaders should not over promise the outcomes of the economic weapon.
On the other side, perhaps tightening sanctions against Russia still further could have a greater effect. For example, Harrell discusses “secondary sanctions,” where the US and its allies would not just sanction Russia, but would also sanction any other country that was helping Russia get around the sanctions with its own trade or commercial ties. In this spirit, Torbjörn Becker and Yuriy Gorodnichenko argue “Time for a complete ban on economic ties with Russia.” They write:
[T]he sanctions imposed on Russia so far have been numerous (more than 4,500 entities and 11,500 individuals) and introduced over a long period of time. This makes the monitoring and implementation of sanctions complicated. The delayed implementation provided Russia with more time to adjust to and circumvent sanctions, while the effects of sanctions have been spread over time, often with a significant lag. The argument that has been used for this approach to sanctions is that the West should avoid the use of “nuclear” options; keep some sanctions “powder dry”; and avoid “high costs” in the sanctioning countries. Unfortunately, the slow pace, complex regulations, and patchy enforcement of implementing serious sanctions have provided Russia with some economic breathing space and thus blunted the effectiveness of sanctions.
They argue that the existing sanctions are working better than economic statistics from the IMF or the Russian government reveal: for example, Russian inflation is probably considerably higher than the officially announced 7% rate. They propose that Russia should get the North Korean treatment: that is, a ban on all trade, with a few limited exceptions for food and medicine.
Exports to Russia from the EU countries, for example, have fallen by half as a result of the existing sanctions–but that still leaves half. A few hundred Western companies have left Russia, but a few thousand remain. Becker and Gorodnichenko argue that a complete ban, with a limited number of small exceptions, really can plug many of the loopholes in the existing trade sanctions with Russia. For example, they write:
The “full sanction” mode also allows Western companies to get out of contractual obligations with Russia. In a nutshell, the companies can claim force majeure and cancel their contracts with Russian counterparties. Western laws (e.g., the U.K. Sanctions and Anti-Money Laundering Act 2018) typically stipulate that companies cannot be held liable if their act of not meeting contractual obligations is in compliance with sanctions regulations. Furthermore, it will reduce the exposure of Western companies with Russian connections to mass civil legal action to compensate victims in Ukraine. In other words, the proposed approach would facilitate the exit of Western companies from Russia …
They recognize that Russia will still find trading partners for certain products: for example, exporting oil to China. But that said, a full embargo on trade with Russia means that all of the alternative deals happen under a cloud–and all of Russia’s alternative trading partners should be able to buy Russia’s oil at lower prices as a result.
For those who would like additional background on economic sanctions as a policy tool, I can recommend two articles from the Winter 2023 Journal of Economic Perspectives (where I work as Managing Editor) as useful starting points:
The Medicare program is 21% of all US health care spending, and 14% of the total federal budget. It provides health insurance for pretty much everyone over age 65. To understand the financing of the program, you need to know that it is divided into four parts. As Munnell explains:
Traditional Medicare is composed of two programs. The first – Part A, Hospital Insurance (HI) – covers inpatient hospital services, skilled nursing facilities, home healthcare, and hospice care. The second – Supplementary Medical Insurance (SMI) – consists of two separate accounts: Part B, which covers physician and outpatient hospital services, and Part D, which was enacted in 2003 and covers prescription drugs. The arrangements are slightly more complicated because Medicare also includes Part C – the Medicare Advantage plan option, which makes payments to private insurance plans that provide both Part A and Part B services. … Spending on Part D prescription drug benefits has been a roughly constant share of total spending over time. Each Medicare program has its own trust fund and its own source of revenues.
Part A, the Hospital Insurance fund, is mostly funded by the 2.9% payroll tax on wages, but it also gets some revenue from an additional 0.9% tax on the wages of high-income workers and from a tax on the Social Security benefits of higher-income recipients. Part B, Supplemental Medical Insurance, which covers care by doctors and outpatient hospital care, is financed mostly by general tax revenues, but also with a contribution from insurance premiums paid by the elderly. Part C, the “Medicare Advantage” program, which now covers about half of all Medicare recipients, is paid for out of the funds for Parts A and B. Part D “is financed primarily by general revenues (73 percent) and beneficiary premiums (14 percent), with an additional 12 percent coming from state payments for beneficiaries enrolled in both Medicare and Medicaid.”
The usual headline about Medicare finances focuses on estimates for how long the Part A Hospital Insurance trust fund will remain solvent: in the 2024 report, the answer is 2036. But as this brief overview suggests, the Part A trust fund is only one slice of the program. As Munnell points out, the share of Medicare spending going to Part A has been declining.
Here’s an overview of all the sources of revenue for Medicare. As you can see, payroll taxes and insurance premiums cover only some of the cost, with general tax revenues taking up most of the slack.
As the figure also shows, Medicare costs have been rising substantially as a share of GDP. Also, with a combination of a growing over-65 population and health care costs that keep rising faster than the general rate of inflation, Medicare spending is projected to be a rising share of GDP for the next couple of decades as well.
Indeed, Munnell points out that 30 years down the road, covering the health care costs of the elderly is likely to cost more than providing the elderly with income through Social Security. Yes, health insurance for the elderly is a good thing. But one suspects that among the lower-income elderly in particular, we are reaching a stage where some of them would prefer a little less health insurance and a little more cash in hand.
The Tax Cuts and Jobs Act of 2017 may seem like past history, but as William Faulkner has one of his characters say in Requiem for a Nun, “The past is never dead. It’s not even past.”
For economists, the 2017 tax law is especially interesting because it’s the most sweeping change in the US tax code since the Tax Reform Act of 1986, so it offers a chance to study and revisit many topics with new evidence. But for policymakers, perhaps the key point is that many provisions of the 2017 tax law were passed with an expiration date at the end of 2025. Thus, under current law, unless Congress revisits US tax law next year, large parts of the tax code will snap back to the 2016 rules at the end of next year.
This situation may seem strange, but here’s the background: When the Tax Cuts and Jobs Act of 2017 was passed into law, its Republican proponents in Congress had a majority in both the Senate and the House, but they did not have the 60-vote majority in the Senate needed to override a filibuster. They could pass their desired tax bill by simple majority vote, under what is known as a “budget reconciliation” procedure. However, under a long-standing precedent going back to the 1980s, laws passed in this way cannot increase the budget deficit outside a 10-year budget window. (This rule leaves Congress the flexibility to act in response to short-term events–like a pandemic or a recession–while still placing some restraint on actions with longer-term budgetary consequences.) Thus, many of the sweeping changes in the Tax Cuts and Jobs Act of 2017 were passed with an expiration date at the end of 2025.
The hope of supporters of the bill was that when 2025 finally arrived, the tax changes would be sufficiently entrenched that they could be extended at that time. But under the looming threat of a snapback to the 2016 tax law, it seems likely that all aspects of the US tax code will be up for political discussion in 2025. It’s worth remembering that even Democrats who were deeply unenthused about the 2017 tax law have made no meaningful efforts to change it during President Biden’s term of office.
The just-released Summer 2024 issue of the Journal of Economic Perspectives thus includes a set of five articles on aspects of the 2017 Tax Cuts and Jobs Act:
I won’t even pretend to try to summarize the articles here, but here are a few nuggets to give you a sense of what will be at stake in US tax policy in 2025.
The provisions of the Tax Cuts and Jobs Act of 2017, taken together, involve lower revenue. Estimating exactly how much lower is hard, given the need to disentangle the budgetary effects of the pandemic from the effects of the tax law. But as Gale, Hoopes, and Pomerleau point out, one mainstream estimate from the Congressional Budget Office suggests that if the 2017 tax law were just extended as-is in 2025, it would reduce federal tax revenues by about 1.1% of GDP by 2033. In the aftermath of the very high US government budget deficits that occurred as part of the response to the pandemic, which followed a little more than a decade after some very high US government budget deficits that occurred as part of the response to the Great Recession of 2007-08, the need to reduce the red ink from future budget deficits is greater now than back in 2017. Thus, parts of the 2017 law that reduce revenues are likely to come under especially close scrutiny.
Gale, Hoopes, and Pomerleau also point out that the conventional way of thinking about the distributional effects of tax cuts is to look directly at how big the tax cuts are across levels of income. But when tax cuts are part of what is feeding long-term higher budget deficits, then the question changes. If the higher deficits resulting from tax cuts are addressed by, say, cutting spending programs that mainly benefit those with lower incomes, then the distributional effect of the tax cuts, via a need to reduce budget deficits, will weigh more heavily on those with lower incomes. If a future plan to address budget deficits relies more heavily on taxing those with higher incomes, then the distributional effect of addressing the revenue losses from the tax cut would look different. Either way, the distributional effects of tax law changes made in 2025 will need to be considered against a background of how to address sustained high budget deficits.
In the individual income tax, one of the major changes of the 2017 tax law was a dramatic rise in the standard deduction. In the US income tax, all taxpayers compare the size of the standard deduction to a list of possible deductions for mortgage interest, charitable contributions, state and local taxes, and others. If the standard deduction is bigger, you apply that deduction to taxable income before calculating your tax bill. If the list of other deductions is bigger, you instead “itemize” these individual deductions.
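The taxpayer's choice here is effectively taking the larger of two numbers, which is why raising the standard deduction mechanically shrinks the pool of itemizers. A minimal sketch of that logic, using purely hypothetical dollar amounts rather than actual tax parameters:

```python
# Sketch of the standard-vs-itemized choice described above.
# All dollar amounts are illustrative, not real tax-code figures.

def deduction_taken(standard_deduction, itemized_items):
    """A taxpayer deducts whichever is larger: the standard
    deduction or the sum of itemizable deductions."""
    itemized_total = sum(itemized_items.values())
    if itemized_total > standard_deduction:
        return ("itemized", itemized_total)
    return ("standard", standard_deduction)

# Hypothetical filer: mortgage interest, charity, state/local taxes
items = {"mortgage_interest": 8_000, "charity": 2_000, "state_local_taxes": 4_000}

# With a small standard deduction, itemizing wins...
print(deduction_taken(6_000, items))    # ("itemized", 14000)
# ...but once the standard deduction roughly doubles, it doesn't,
# and the marginal tax incentive to give or borrow more disappears.
print(deduction_taken(24_000, items))   # ("standard", 24000)
```

Note that in the second case, an extra dollar of charitable giving leaves the tax bill unchanged, which is the loss of marginal incentive discussed in the text.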
But when the standard deduction got much larger, many fewer people found it worthwhile to itemize. As Bakija notes, the share of taxpayers itemizing deductions fell from 31 percent in 2017 to just 9 percent in 2021. As a result, many higher-middle-income people no longer faced marginal tax incentives to, say, give to charity, or take out a larger mortgage.
Should we go back to much smaller standard deductions? Should we provide alternative methods of providing incentives in the tax code for, say, charitable giving? Many people in higher-tax states, who used to be able to deduct their state and local taxes from their federal income subject to tax, would like to be able to do so again. These questions will be wide-open in 2025.
On the business taxation side, there are multiple issues. One challenge with business taxation is to avoid injuring incentives for investment, which helps to build long-term productivity growth, while still collecting revenue on profits. One tradeoff that arises here, discussed in the paper by Chodorow-Reich, Zidar, and Zwick, is that lowering the corporate tax rate is, in effect, a reward for past investment that led to present profits. But one can imagine a combination of a somewhat higher corporate tax rate with generous reductions in taxes for current investment: that is, tax firms that are not investing more heavily than firms that are.
At the international level, a dramatic change is that almost all US trading partners have been working for more than a decade now on a Base Erosion and Profit Shifting (BEPS) project, which seeks to make it more difficult for multinational companies to use accounting methods to shift their profits to lower-tax jurisdictions. As Clausing discusses, Pillar 2 of this agreement “includes a country-by-country minimum tax of 15 percent on multinational company income, regardless of where it is reported.”
This agreement, which the European Union and many other countries signed in late 2022, includes what is called the Undertaxed Profits Rule. If a company from a country that has not signed on to the Pillar 2 rule, like the United States, is doing business in a country that has signed on to the rule (say, the EU, Japan, and others), then that other country can impose a “top-up tax” to collect higher corporate taxes from the US corporation. The US tax code as it applies to multinationals is not currently in compliance with Pillar 2.
The US tax code is also replete with highly targeted tax breaks for highly targeted social goals. The 2017 tax law included a tax break for certain kinds of “Opportunity Zones,” as discussed by Corinth and Feldman, with the goal of boosting investment in areas with lower income levels. Of course, a challenge for such tax breaks is that some investment in those areas would have happened without the tax break, so these investors are rewarded for what they would have done anyway. Other investment is not going to happen in that area, with or without the tax break. So the challenge is to figure out what marginal share of investment in the area increased because of the tax incentive. The authors find that in this case, most of the increased investment was in property, not in businesses that led to jobs for local residents.
If the 2017 tax provisions just expire at the end of 2025, and the tax code rebounds back to its 2016 version, the reaction to the changes will provide a fertile field for economic researchers. But from a public policy point of view, it seems a peculiar way to run a tax system.
When the government is paying for something, it should try to spend taxpayer money as effectively as possible. But when it comes to building and fixing roads, the US is almost surely not getting as much as it pays for. Zachary Liscow, Will Nober and Cailin Slattery offer some evidence in “Procurement and Infrastructure Costs,” presented at the 13th annual Municipal Finance Conference at the Brookings Institution (July 11, 2024).
The authors note: “The United States spends a large amount on infrastructure costs: state and local governments spent $266 billion on highways alone in 2022. The spending, on a per-project basis, is very high by international standards—over three times as high as other upper- and middle-income countries.”
Why might this be? The authors gather data across all 50 states by surveying those who work at state-level Departments of Transportation (DOTs) as well as the companies that bid to build roads. They create a detailed dataset of project-level costs. They find that in some situations the public pays a lot more for roads and highways than in others. One factor is that managing the bidding process for infrastructure projects, and overseeing the project as it happens, works better if the state DOT is of high quality and has sufficient in-house workers.
In the survey, there is broad agreement that state DOTs have become more understaffed and that reliance on consultants drives up costs. Survey respondents attribute a lack of details in project plans to both a lack of time or experience of DOT engineers and the use of consultants. When there is not enough specificity in the plans the risk to the contractor increases, increasing bids. Moreover, whenever the scope of a project changes this initiates a costly and time-consuming renegotiation process. Survey respondents agree that such changes are a major contributor to costs. We confirm that the state DOT workforce has been shrinking with administrative data on public sector employment.
Their survey data allow them to look at which states are widely considered to have higher- or lower-quality Departments of Transportation, and at the use of consultants. They find:
States that flag concerns about consultant costs have higher costs—a one standard deviation increase in reported consultant costs is associated with an almost 20% increase ($70,000) in cost per lane-mile. States where contractors and procurement officials expect more change orders have significantly higher costs: one additional change order correlates with $25,000 in additional cost per lane-mile at the mean. … More directly, we find that states with (perceived) higher quality DOT employees have lower costs. A state with “neither low nor high quality” employees has almost 30% higher costs per mile than one that rates the DOT employees as “moderately high quality”, all else equal. … A one standard deviation increase in DOT employment per capita is correlated with 16% lower costs.
They also find a problem with lack of bids for projects in many states: “A lack of competition for contracts is a oft-cited cost driver from procurement officials. … We show, with external data on the highway construction industry, that concentration in the industry seems to be rising. Most states have experienced a loss of construction firms, and an increase in size of the remaining firms, in the last 10 years.”
In the survey data, we find that states that do outreach to increase the bidder pool have significantly lower costs, highlighting both the importance of competition and the role the DOT can play in order to increase competition. A one standard deviation (12 percentage point) increase in bidder outreach is correlated with a 17.6% decrease in costs. At the mean, this translates to a decrease in costs of $65,000 per lane-mile and $1 million at the project level. We also find that limits on the amount of work that can be subcontracted is positively correlated with costs. Restrictions on subcontracting can decrease competition by limiting the set of potential prime contractors that can complete the project. Lastly, using our project-level cost data, we find that an additional bidder on a project is associated with 8.3% lower costs, or a savings of approximately $30,000 per lane-mile ($460,000 for the average project).
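As a back-of-envelope check (my own arithmetic, not a calculation from the paper), each quoted pairing of a percentage effect with a dollar effect implies a mean cost per lane-mile of roughly dollars divided by percentage, and the three pairings above cluster reassuringly around $350,000–$370,000:

```python
# Back-of-envelope check on the quoted findings: each
# (percent effect, dollar effect) pair implies a mean cost
# per lane-mile of dollars / percent. These implied means are
# my own calculations from the figures quoted in the text.

effects = {
    "consultant costs (+20% ~ +$70,000)":  (0.20,  70_000),
    "bidder outreach (-17.6% ~ -$65,000)": (0.176, 65_000),
    "one extra bidder (-8.3% ~ -$30,000)": (0.083, 30_000),
}

for label, (pct, dollars) in effects.items():
    implied_mean = dollars / pct
    print(f"{label}: implied mean ~ ${implied_mean:,.0f} per lane-mile")
```

The three implied means fall between about $350,000 and $370,000 per lane-mile, so the percentage and dollar figures quoted from the paper are internally consistent.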
One of my personal frustrations with how legislation is often discussed arises when there is a heavy focus on the total amount spent, which is easy to measure, and much less focus on what is received for what is spent, which is harder to measure. But the intention (level of spending) is not the outcome (actual results). The estimates in this paper suggest that a number of states are overspending by maybe one-third, or perhaps even more, for the roads and highways they receive.
The idea behind Uber first arose, the story goes, on a snowy evening in Paris back around 2008, when Travis Kalanick and Garrett Camp found themselves unable to find a taxi. They wondered: “What if you could request a ride simply by tapping your phone?” They co-founded Uber based on an idea that now seems obvious, but that would have seemed a practical impossibility just 20 years ago, before the invention of the iPhone and the broader spread of smartphone technology.
Katharine G. Abraham, John C. Haltiwanger, Claire Y. Hou, Kristin Sandusky, and James Spletzer disinter some government statistics on the evolution of the ride-share market over time in “Driving the Gig Economy” (NBER Working Paper 32766, August 2024).
There’s something called the North American Industry Classification System (NAICS), which among other uses can provide employment totals by industry. The authors look at self-employment in the Taxi and Limousine Services industry, which is coded as NAICS 4853. Here’s the employment trend:
The authors focus on the factors and effects of this labor market entry, and on comparing patterns across cities. From the abstract:
New entrants were more likely to be young, female, White and U.S. born, and to combine earnings from ridesharing with wage and salary earnings. Displaced workers have found ridesharing to be a substantially more attractive fallback option than driving a taxi. Ridesharing also affected the incumbent taxi driver workforce. The exit rates of low-earning taxi drivers increased following the introduction of ridesharing in their city; exit rates of high-earning taxi drivers were little affected. In cities without regulations limiting the size of the taxi fleet, both groups of drivers experienced earnings losses following the introduction of ridesharing. These losses were ameliorated or absent in more heavily regulated markets.
Here, I want to focus instead on a different angle. Back around 2009, there apparently was an untapped reservoir of more than a million people who were willing to drive others for money. But many of these potential drivers put a high value on the flexibility of organizing their own time. There were millions of potential riders willing to hire those drivers. However, both sides needed reassurance. For the rider, would the fare and the route be locked in before the driver arrived? How long until the driver arrived, and how long would the trip take? For the driver, was payment for the ride guaranteed via credit card before the ride started? Could the route and destination be known in advance? Was it safe to pick someone up–or at least safer to pick up someone with a ride-share account than someone waving at you from a street corner?
Now we tend to take the ride-sharing services for granted. For me, the ride-sharing companies offer an occasional but substantial gain in convenience; for the elderly, the disabled, the intoxicated, those who can’t afford a car, those who need to get to a medical appointment where it’s better not to drive, and others, ride-sharing can be a lifeline. I have family members who rely on ride-sharing completely. In my local area, when the Minneapolis City Council threatened to regulate the ride-sharing services out of Minnesota, it was potentially a disaster for both existing users and drivers (for discussion, see here and here).
So here’s my question: If someone in the private sector like Kalanick and Camp had not started a ride-sharing company, would it have happened? After all, there was no technological barrier that prevented existing taxicab companies from offering a similar phone app. But in many cities, entry into taxicab markets back in the early 2000s was restricted, usually based on claims about safety and quality of service, but in practice also helping to hold down supply and increase incomes of existing cab drivers.
There seems to me a vanishingly small chance that the existing taxicab companies in the early 21st century would have disrupted their existing business model by expanding the number of drivers more than 10-fold, or offering the advance guarantee of routes and fares that is common to ride-sharing. The chances are even smaller (if “even smaller” is possible), that some government transit agency would have pioneered this decentralized approach.
Many of us now take the widespread availability of ride-sharing for granted. Indeed, many people take the restless innovation and energy of markets for granted in general. Ride-sharing in the real world has its warts and flaws and tradeoffs, as did the previous regime of taxicabs, as do all real world institutions. But ride-sharing seems to me like an overall dramatic gain in welfare for the million-plus drivers who participate and the many millions of riders. And without the disruptive pressure of market forces, it would not have happened for years, or decades, or perhaps at all.
What do you call a company where you can deposit money, the institution lends out that money, you get paid a rate of return, and you can later withdraw some or all of the money? A “bank” is a reasonable answer, but it’s far from the only answer. There are a wide range of non-bank financial institutions–that is, non-bank institutions that share at least some properties of banks–and they are expanding in size.
Non-bank financial intermediaries (NBFIs) have surpassed banks as the largest global financial intermediaries. And yet, most NBFIs continue to be lightly regulated relative to banks for safety and soundness, whether in terms of capital and liquidity requirements, supervisory oversight, or resolution planning. Figure 1a shows, using data from the Financial Stability Board (FSB), that the global financial assets of NBFIs have grown faster than those of banks since 2012, to about $239 trillion and $183 trillion in 2021, respectively. In percentage terms, the share of the NBFI sector has grown from about 44% in 2012 to about 49% as of 2021, while banks’ share has shrunk from about 45% to about 38% over the same time period.
Figure 1b compares the assets of the NBFI and bank sectors in the United States alone. As in the global data, NBFIs in the United States have accumulated substantially more assets than banks over the period shown. However, the NBFI sector in the United States accounts for a much higher share of financial assets, which was over 60% in 2021. As an aside, this figure shows that NBFI assets fell substantially during the global financial crisis (GFC), as large volumes of special purpose vehicles were unwound, but that the NBFI sector as a whole subsequently resumed its steady growth.
The category of non-bank financial institutions includes a range of entities: firms that issue asset-backed securities, broker-dealers, certain real estate investment trusts, government sponsored enterprises and agencies, insurance companies, money market funds, mutual funds, pension funds, and other financial businesses.
The challenge posed by the authors is that these kinds of companies are often interwoven with actual banks. Banks, however, are subject to reasonably tight regulation, especially in the aftermath of the global financial crisis of 2007-08. Thus, these companies are often designed and operated so that the nonbank firm takes risks that would not be available to banks, while still maintaining connections to banks that allow them to take advantage of what banks can do in terms of accessing funds or even government safety nets. The bank itself may look safe, but the interconnected web of non-bank and bank institutions may carry substantially greater risks. As the authors write:
As a result, the components of [financial] intermediation activities that are under the heaviest burden of bank regulation tend to move from banks to NBFIs, while the components that benefit most from deposit franchises and access to explicit and implicit official backstops tend to remain at banks. It follows, then, that stressed NBFIs are bound to impose systemic externalities, whether by ceasing to function as significant intermediaries; by defaulting on obligations that destabilize some combination of banks, other intermediaries, or parts of the real economy; by drawing down on bank credit lines; or, by starting fire sales in the course of liquidating assets. Hence, while NBFIs in the current regulatory framework are de jure outside the official safety net, they are de facto inside.
As a concrete example, there’s something called the “Blackstone Private Credit Fund (BCRED), currently one of the largest private credit funds in the world with over $50 billion of assets.” Like the name says, it arranges loans, mostly to companies. However, it also puts together groups of lenders–many of whom turn out to be banks. Or as another example:
PacWest, a regional bank that had been losing deposits in the wake of the regional banking crisis of March 2023, sold $2.3 billion of loans backed by various accounts receivable to Ares Management, which is one of the largest private fund managers in the world. The purchase of these loans, however, was financed in part by Barclays. Hence, while the loans seemingly left the banking system through their sale from PacWest to Ares, some of the exposure to these same loans returned to the banking system through the financing of Ares’ purchase by Barclays.
There are many examples of these kinds of interactions between banks and non-banks, operating through credit lines, dealings in financial derivatives, commitments to be available for back-up funding if needed, and more. Exactly how the banking regulators should be taking non-bank financial institutions into account is very much up for discussion. It was a main issue left essentially unaddressed by the Dodd-Frank financial reform law (officially the Wall Street Reform and Consumer Protection Act of 2010), and it remains mostly unaddressed more than a decade later.
Does economic growth lead to greater inequality? Less inequality? About the same? Does it depend on other policies and underlying factors? An analysis from the staff of the IMF provides an overview of these issues in “G-20 Background Note on the Impact of Growth on Inequality and Social Outcomes” (IMF, July 2024). (For those not familiar with the term “G-20,” it refers to a group of the 19 largest national economies in the world, with the European Union included as a 20th member, and now the African Union as an additional member.)
I was intrigued to see that this note resurrects the “Kuznets curve” as a useful tool of analysis. For those not familiar with the term, the great economist Simon Kuznets proposed back in 1954 that inequality in a given country over time would follow an “inverted-U” pattern: that is, inequality would first rise as economic development started in certain locations and industries in a country, but then inequality would eventually decline as development became widespread. The theory held up reasonably well through the 1970s, but after that, inequality started rising in high-income countries around the world.
The G-20 report illustrates the pattern of recent decades this way. If you look at inequality within countries (blue line), it’s risen. If you look at inequality between countries, it’s fallen. Put these together, and overall global inequality has declined. For some intuition here, think about the rapid economic growth in China since the 1980s. This growth increased the within-country inequality in China, but decreased between-country inequality between China and high-income countries.
The report describes the current relationship between growth and inequality this way (citations and references to figures are omitted):
Although advanced economies, on average, have a lower level of inequality than emerging market and developing economies, any relationship between development—captured by output per capita—and inequality measured by the Palma ratio— the income share of the richest 10 percent of the population relative to the income share of the poorest 40 percent—is less clear over time, when the data is corrected for differences in average incomes between countries. For African Union members, a 10 percent increase in output per capita is associated with a 0.8 percent lower Palma ratio. For G20 advanced and emerging market economies, the relationship is instead positive—for example, for advanced economies, a 10 percent increase in output per capita is associated with a Palma Ratio that is about 3.5 percent higher. These findings are broadly consistent with a larger literature documenting a lack of any systematic correlation between growth and inequality change. In turn, this lack of a clear empirical relationship reflects the fact that growth and inequality can be driven by several distinct factors and, moreover, affect each other directly.
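The Palma ratio that the report uses is simple to compute. As a minimal sketch, with hypothetical income shares (the numbers below are illustrative, not drawn from the report):

```python
def palma_ratio(top10_share: float, bottom40_share: float) -> float:
    """Palma ratio: income share of the richest 10% of the population
    divided by the income share of the poorest 40%."""
    return top10_share / bottom40_share

# Hypothetical country: the richest 10% receive 35% of total income,
# the poorest 40% receive 15%.
print(round(palma_ratio(0.35, 0.15), 2))  # 2.33
```

A Palma ratio above 1 means the top 10% collectively receive more income than the bottom 40%; a falling ratio over time indicates declining inequality by this measure.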
The report suggests that the Kuznets curve can serve as a useful framework for thinking about economic development and inequality. In the left-hand panel, the blue line is the classic Kuznets curve: that is, it first rises and then falls. The red dashed line for “policies” suggests that higher-income countries have ways of redistributing income that tend to reinforce the pattern of higher development leading to lower inequality (after taxes and transfers are taken into account).
However, the black line showing “other structural factors” suggests that there are factors, not accounted for in the Kuznets analysis back in the 1950s, which can tend to increase inequality as an economy grows. Two factors are mentioned. One is “skill-biased technical change,” which is the econo-speak way of saying that development in technology may in some cases tend to benefit those with certain skills. The other is a rise in globalization, which may in some cases tend to benefit those who are well-positioned to take advantage of it. Just to be clear, there is no implication here that all technology change or all global trade must necessarily increase inequality, only that the specific kinds of digital and information technology during the last few decades, and the form that globalization has taken over that time, have tended to have that effect in high-income countries.
The right-hand-side of the panel above first repeats the original Kuznets curve (solid blue line). The dashed blue line is a hypothetical one. If policy is directed at reducing inequality, then the Kuznets curve could bend more sharply, perhaps even in a way that would offset other structural economic factors pushing toward greater inequality.
The inequality-reducing policies here are not just about tax-and-transfer mechanisms, although that’s part of the picture. Another dimension is that in a world of skill-biased technical change, helping people get more skills through education and job training will spread the benefits of technical change more broadly. Similarly, helping workers get connected with the benefits of globalization would help limit an increase in inequality as well. Thinking about the structure of income support programs and labor markets, and whether they have incentives that may trap workers in lower-income jobs or unemployment, rather than helping them climb the ladder to better jobs, can also matter. A broad underlying insight of the original Kuznets curve, from before the days of computerization and globalization, is that inequality is reduced as economic growth spreads across places and industries.
I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided back in 2011–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Summer 2024 issue, which in the Taylor household is known as issue #149. Below that are abstracts and direct links for all of the papers. I plan to blog more specifically about some of the papers in the next few weeks, as well.
______________
Symposium: The Tax Cuts and Jobs Act of 2017
“Sweeping Changes and an Uncertain Legacy: The Tax Cuts and Jobs Act of 2017,” by William G. Gale, Jeffrey L. Hoopes, and Kyle Pomerleau
The Tax Cuts and Jobs Act (TCJA) of 2017 introduced sweeping changes to individual and corporate taxation. We summarize the major provisions, trace the origins of the Act, and compare it to previous tax changes. We also examine the effects on the government budget, economic activity, and distribution of resources. Based on evidence through 2019, we find that the TCJA clearly raised federal debt and increased after-tax incomes, disproportionately increasing incomes for the most affluent. Its effects on GDP and median wages seem modest at best, although clear counterfactuals are difficult to identify. The impact on investment is less certain, and research is only recently emerging that addresses this question. Empirical analysis of longer-term effects may prove difficult due to the disruptions created by the COVID-19 pandemic starting in 2020.
“The US Individual Income Tax: Recent Evolution and Evidence,” by Jon Bakija
This paper assesses the current state of the US federal individual taxation, and considers its recent evolution, with an emphasis on the changes to the individual income tax enacted in the 2017 Tax Cuts and Jobs Act (TCJA), and evidence on their impacts. How has the design of the tax changed, and how has this affected tax revenues, the distribution of tax burdens, marginal tax rates, and the breadth of the tax base? What were the rationales for the changes, and what does economics have to contribute to the debate over whether the changes were a good idea? What have we learned so far from empirical research on the impacts of recent changes in individual tax policy, including especially the changes enacted since 2017, and what does this imply for the optimal design of individual taxation?
“Lessons from the Biggest Business Tax Cut in US History,” by Gabriel Chodorow-Reich, Owen Zidar, and Eric Zwick
We assess the business provisions of the 2017 Tax Cuts and Jobs Act, the biggest corporate tax cut in US history. We draw five lessons. First, corporate tax revenue fell by 40 percent due to the lower rate and more generous expensing. Second, firms with larger declines in their effective tax wedge increased investment relatively more. In aggregate, we suggest a loose consensus from the literature that total tangible corporate investment increased by 11 percent. Third, the business tax provisions increased economic growth and wages by less than advertised by the Act’s proponents, with long-run GDP higher by less than 1 percent and labor income by less than $1,000 per employee. Fourth, provisions that increase foreign investment by US-based multinationals also boost their domestic operations. Fifth, some of the expired and expiring provisions, such as accelerated depreciation, generate more investment per dollar of tax revenue than others.
“US International Corporate Taxation after the Tax Cuts and Jobs Act,” by Kimberly A. Clausing
The root dilemma that informs the past, present, and future of US international taxation is the tension between two desiderata: protecting the corporate tax base from erosion and ensuring the competitiveness of US multinational firms in the world economy. This article begins by exploring that tension, discussing the evidence behind these competing policy goals. It then considers the international tax provisions of the Tax Cuts and Jobs Act of 2017. TCJA enacted transformative changes in US corporate tax policy, but it did not resolve long held policy concerns. While research on TCJA is in early stages, evidence indicates that TCJA substantially reduced corporate tax revenues, that TCJA’s international provisions (as a whole) raised less revenue than expected, that offshoring and profit shifting remain large policy concerns, that changes in US multinational company competitiveness were mixed, and that underlying trends in wages and investment did not change due to TCJA. While TCJA was unable to resolve the tension between competitiveness and tax base protection, the Pillar 2 international tax agreement shows more promise in that regard. As countries throughout the world implement a “country-by-country” minimum tax on multinational income of 15 percent, this has the potential to disrupt long-standing arguments about international corporate taxation.
“Are Opportunity Zones an Effective Place-Based Policy?,” by Kevin Corinth and Naomi Feldman
We evaluate the Opportunity Zones provision of the Tax Cuts and Jobs Act, focusing on its targeting and effects on investment and resident outcomes. The policy allowed substantial discretion for state governors to designate Opportunity Zones that were not necessarily the most distressed, though we find that in aggregate their ultimate selections were still somewhat well-targeted. However, we show that the policy is insufficient to encourage investment with a significantly below-market rate of return and provides the largest tax benefits to investment that would have occurred regardless of the policy. Consistent with these features of the policy’s design, a substantial amount of Opportunity Zone investment has been made, including in many lower-income areas. However, it appears that much of the investment would have occurred anyway, and the evidence to date mostly points to limited effects on resident wellbeing.
“Seeking the “Missing Women” of Economics with the Undergraduate Women in Economics Challenge,” by Tatyana Avilova and Claudia Goldin
Economics is among the most popular undergraduate majors, especially in top colleges and universities. However, even at the best research universities and liberal arts colleges men outnumber women by two to one, and overall there are about 2.5 males to every female economics major. We discuss why women major in economics less than men and describe a project to increase the number of female economics majors. The Undergraduate Women in Economics (UWE) Challenge was a randomized controlled trial, with 20 treatment and 68 control schools, that we ran for one year in AY 2015–16 to evaluate the impact of light-touch interventions to recruit and retain female economics majors. Treatment schools received funding, guidance, and access to networking with other treatment schools to implement programs such as providing better information to incoming students about the application of economics, exposing students to role models, providing mentoring, and updating course content and pedagogy. Using 2001–2021 data from the NCES-Integrated Postsecondary Education Data System (IPEDS) on graduating undergraduates (BAs), we find that UWE was effective in increasing the fraction of female BAs who majored in economics relative to men in liberal arts colleges. Large universities did not show an impact of the treatment, although those that implemented their own RCTs showed moderate success in encouraging more women to major in economics. We discuss what we believe worked in the UWE program and speculate on the reasons for differential treatment impact.
“Valuing Identity in the Classroom: What Economics Can Learn from Science, Technology, Engineering, and Mathematics Education,” by Sergio Barrera, Susan Sajadi, Marionette Holmes, and Sarah Jacobson
Economics faces stubborn underrepresentation of minoritized identity groups. Economics instructors also largely use antiquated instructional methods. We leverage the literature from the fields of science, technology, engineering, and mathematics education, which have rigorously studied instructional techniques and gathered evidence on a variety of methods that improve learning and reduce demographic gaps. We discuss four broad ideas: active and collaborative learning, role model interventions, modernized design of assessments and feedback, and culturally relevant, responsive, and sustaining pedagogy. We frame these approaches in the context of economics identity, share evidence regarding efficacy, and give examples of how the techniques have been and can be used in economics. In so doing, we provide a set of changes economics instructors can make, large and small, to improve their teaching for all students and to reduce demographic gaps in success and persistence in the field.
“Lessons for Expanding the Share of Disadvantaged Students in Economics from the AEA Summer Program at Michigan State University,” by Lisa D. Cook and Christine Moser
Since 1974, the American Economic Association Summer Training Program has provided training and mentoring to students from disadvantaged backgrounds in economics. The aim of the program is to encourage and prepare these students to apply to PhD programs in economics and ultimately to increase diversity in the profession. The program has been hosted by different universities over the years. This paper provides insights and lessons learned from the program’s tenure at Michigan State University from 2016 to 2020. In addition to discussing the structure and outcomes of the program, we provide advice to students, faculty, and potential hosts who may be interested in the AEA Summer Program or similar programs.
“What Went Wrong with Federal Student Loans?” by Adam Looney and Constantine Yannelis
At a time when the returns to college and graduate school are at historic highs, why do so many students struggle with their student loans? The increase in aggregate student debt and the struggles of today’s student loan borrowers can be traced to changes in federal policies intended to broaden access to federal aid and educational opportunities, and which increased enrollment and borrowing in higher-risk circumstances. Starting in the late 1990s, policymakers weakened regulations that had constrained institutions from enrolling aid-dependent students. This led to rising enrollment of relatively disadvantaged students, but primarily at poor-performing, low-value institutions whose students systematically failed to complete a degree, struggled to repay their loans, defaulted at high rates, and foundered in the job market. As these new borrowers experienced similarly poor outcomes, their loans piled up, loan performance deteriorated, and with it the finances of the federal program. The crisis illustrates the important role that educational institutions play in access to postsecondary education and student outcomes, and difficulty of using broadly-available loans to subsidize investments in education when there is so much heterogeneity in outcomes across institutions and programs and in the ability to repay of students.
“On the Economics of Extinction and Possible Mass Extinctions,” by M. Scott Taylor and Rolf Weder
Human beings’ domination of the planet has not been kind to many species. This is to be expected. Humans have radically altered natural landscapes, harvested heavily from the ocean, and altered the climate in an unprecedented way. Recent concerns over the extent and rate of biodiversity loss have led to renewed interest in extinction outcomes and speculation concerning humans’ potential role in any future mass extinction. In this paper, we discuss the economic causes of extinction in two high-profile cases—sharks and the North American Buffalo—and then extend our analysis to multiple species and discuss the possibility of mass extinction. Throughout, we present evidence drawn from authoritative data sources with a focus on shark populations to ground our analysis. Despite large gaps in our data, the available evidence reveals a worrisome trend: extinction risks are rising for many species and policymakers have been very slow to react.