How to Regulate a Railroad

The economics of railroads poses some problems for economists. What combination of competition and regulation will keep prices low but also encourage ongoing investment in track and rolling stock? Russell Pittman provides an overview of these issues and of the regulatory responses that have been tried to address the dangers of monopolistic pricing on one side and of competition leading to repeated bankruptcies on the other in "On the Economics of Restructuring World Railways, with a Focus on Russia" (January 2021, US Department of Justice, Economic Analysis Group Working Paper 21-1). A version of this paper is also published in Man and the Economy (December 2020, 7:2, subscription required). The paper was originally delivered as a lecture at the Higher School of Economics in Moscow, which is the reason for some emphasis on Russia's experience, but the discussion ranges broadly across the topic and international experience. 

Pittman's quick overview of the problems of competition in railroads–which are in a broad sense similar to the problems of competition in other network industries including electricity, phone service, and airlines–may be useful in setting the stage. The standard views of economists and policy-makers have shifted over time. 

Back in the 1970s, a standard view was that competition didn't work in network industries, and so there needed to be government regulation of prices. Pittman writes:  

I learned this in the 70s, in my graduate course on industrial economics, and it was very clear. The infrastructure sectors – electricity, natural gas, telecoms, railways – were “natural monopolies”. That is, it would be economically inefficient to have competition. For that reason, in order to protect the public from monopoly abuses, they were in most countries owned and operated by the government, or they were privately owned and regulated by state, local, or national government. The latter was certainly the regime in the U.S. and the UK.

They were regulated in a particular way. It was called rate of return regulation or cost of service regulation. They were regulated in the way that every year or every rating period they would total their costs, their labor costs, their material costs, plus the return on capital, provide those to the regulator and the regulator would say, “Okay, you’re allowed a rate of return on this capital stock and you’re allowed to pay your expenses. So here are the prices you can charge.” Everybody knew that that was not an ideal solution. Economists were very fond of saying that the “first best” solution – and that’s redundant, I admit – the first best solution was marginal cost pricing. But if you have marginal cost pricing in a network industry, you would have to have government subsidies for the network. And that was considered to be politically infeasible or not likely to happen.
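The mechanics in the quote above can be put in a few lines of code. This is a hypothetical sketch, not any actual regulator's formula, and all figures are made up:

```python
# Hypothetical sketch of rate-of-return (cost-of-service) regulation:
# allowed revenue = operating costs + allowed return on the capital stock.
# All figures are made up for illustration.

def allowed_revenue(operating_costs, capital_stock, allowed_return):
    """Revenue the regulator permits the firm to collect in a period."""
    return operating_costs + allowed_return * capital_stock

# A firm with $80m in yearly costs and a $200m rate base, allowed a 10% return:
print(f"{allowed_revenue(80e6, 200e6, 0.10):,.0f}")  # 100,000,000

# Padding the rate base to $250m lifts allowed profit from $20m to $25m,
# whether or not the extra capital is useful.
print(f"{allowed_revenue(80e6, 250e6, 0.10):,.0f}")  # 105,000,000
```

The second call shows the incentive problem discussed next: because allowed profit scales with the capital stock, spending more raises profit.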

The big difficulty with rate-of-return or cost-of-service regulation was that it lacked any incentive for efficiency or innovation. After all, the regulated industry got its prespecified rate of return no matter what it did. Even worse, the rate of return was calculated based on the firm's spending–so if a firm spent more, its profits (in absolute levels) went up. So in the 1980s, there was a shift to what was called "price cap" or "incentive" regulation. Pittman again (citations omitted): 

[T]he new tools were called price caps. They came basically from some smart economists in the UK, led by Stephen Littlechild, among others, in many cases economists who were working as regulators. They understood as we all did that rate of return regulation provided poor incentives for efficient operation of the monopolist, but they said, “We can do better.” Littlechild and others came up with an idea called incentive regulation, or what became called “RPI minus X” regulation, and what also became called price caps. And the idea was that the prices of services from the natural monopolies would be allowed to increase every year by the overall rate of inflation RPI (for “Retail Prices Index”), minus a productivity adjustment. And this is something the regulator would impose and say, “Okay, telecom company, you’re allowed to raise rates every year by last year’s rate of inflation minus, say 2%, 3%.” Whatever we think should be the rate of increase in productivity in telecoms.

This was believed to provide public utility enterprises with powerful incentives to behave efficiently, because their prices were set independently of their costs, at least until maybe an adjustment every few years. And if those prices were set, then if the company is operated efficiently and could cut its costs, it would make profits, just as we hope all companies do. On the other hand, if the company is operated inefficiently and has high costs, there would be financial losses. Again, a powerful incentive to behave efficiently. This was considered to be a real revolution in regulation.
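As a rough illustration of the RPI minus X mechanism (hypothetical numbers; the 2% and 3% echo the quote but are not from any actual regulator):

```python
# Hypothetical sketch of "RPI minus X" price-cap regulation: the allowed
# price grows each year by inflation (RPI) minus a productivity offset X.
# If the firm cuts costs faster than X, it keeps the difference as profit.

def allowed_prices(initial_price, rpi, x, years):
    """Price path under an RPI-minus-X cap (rates as decimals)."""
    path = [initial_price]
    for _ in range(years):
        path.append(path[-1] * (1 + rpi - x))
    return path

# Made-up numbers: 2% inflation, 3% offset, so nominal prices fall ~1% a year.
print([round(p, 2) for p in allowed_prices(100.0, rpi=0.02, x=0.03, years=5)])
# -> [100.0, 99.0, 98.01, 97.03, 96.06, 95.1]
```

The design's appeal is visible here: the path is fixed in advance, so a firm that cuts costs faster than X keeps the gap as profit.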

But price cap regulation turned out to have big problems, too.  One problem was that real-world regulators could not commit to the price cap. For example, say that a regulator agrees that the price of telecom services will fall 3% per year for the next five years. Sounds pretty good! But with this incentive, the regulated firm drives down costs by 6% per year, and after three or four years is making large profits. The regulators then face a lot of public pressure to break their promise of what the decline in prices will look like. On the other side, say that the regulator presets what prices will be in the next few years, and perhaps some exogenous shock causes the costs of the regulated firm to soar above that level. If the regulator doesn't allow the firm to charge higher-than-planned prices, the regulated firm may be driven into bankruptcy. Thus, the planned "price cap" often turned out to be just a basis for future negotiations–and its incentive properties were much more muted. 

The next stage of regulating industries focused instead on whether an industry like a railroad, phone company, or electricity company could be divided into pieces: say, a baseline part of the industry that would remain regulated, and another part where firms could compete against each other. One of the first big examples was the break-up of AT&T into regulated local phone service with competition in the long-distance phone market. In airlines, the airports are run as regulated local monopolies, but the airlines can compete with each other. In electricity markets, the idea was that the government would run the electricity lines as a regulated monopoly, but electricity producers could compete to provide the power. 

This approach has had its successes and failures. Deregulating long-distance US phone service has worked pretty well for consumers, although many of us would like to have more local competition to provide phone and internet service than we currently do. On the other side, some readers will remember when California tried to deregulate electricity along these lines, and big electricity providers figured out ways to game the system and drive up prices. 

In the case of railroads, two main approaches have emerged. With "vertical separation," the government owns the track and the railroad carriers share the track and compete with each other. With "horizontal separation," railroad companies (mostly) own and use their own track. Each approach has strengths and weaknesses, but at least in Pittman's telling, the second approach has turned out better. He writes: 

The European Commission has been very strong on pushing complete vertical separation: competition above the rail among independent train operating companies. This originally was urged for both freight and passenger rail. Now it remains an option for passenger rail to some degree, especially internationally, but is more emphasized for freight. … 

On the other hand, in the Americas, North and South America and Central America, we have almost exclusively horizontal separation. Competition among vertically integrated railway companies that own their track in the U.S. and Canada, or have long-term franchise control of their track in Mexico and Brazil, and can for the most part insist that only their trains run on their tracks. For the most part, they have the complete right to deny other trains access. 

The big problem with the European-style approach is that it relies on the government to invest in track, including maintenance, upgrading, and expansion–and most European countries don't do nearly enough of it. As Pittman writes: 

And every year when the railway comes to the legislature and says, 'We need this money, we've got to build some new track, renovate some old track,' the legislature says, 'Yes, we understand that's really important, but our pensioners need better medicine, or we need to give a tax break to some importers, or some other crucial need this year for funding. We're sorry. We know it's a problem, but the track will last another year.'

The result has been bottlenecks, lack of expansion where it’s needed, slow and unreliable service in many countries in the EU. It’s a very big problem. Throughout the EU, I would say, especially in the East and the Central and Eastern European countries, where freight is very dependent on rail and there are a lot of rail bottlenecks and so a lot of the freight moves on the road.

The share of EU freight that travels by railroad has been falling over the decades. 

In the US, on the other hand, the share of freight carried by rail has been expanding. By the 1970s, the US had evolved an especially lousy system of rate-of-return regulation. In theory, the railroads could earn a profit. But in practice, they were required to keep operating lines that were losing money. There was also "value of service" pricing, where the railroads were supposed to charge more to carry more expensive goods–which meant those goods got moved to trucks and non-rail transit. The result was that by the late 1970s, the US railroad tracks and rolling stock were in lousy shape, and rail's share of intercity freight ton-miles in the US had crashed from about 70% in 1930 to about 35% by 1980. 

In 1980, the US passed the Staggers Act, which deregulated most freight rates. Here's the story since then in a figure: rates have fallen, volumes carried have risen, and productivity rose for a couple of decades before leveling off. US railroads spend lots of money on their track and rolling stock every year, while EU governments (with a few exceptions) don't. 

The big potential downside of the "horizontal separation" approach, which is also used in Canada and Mexico, is that it's easy to end up with situations where certain customers may be "captives," in the sense that they rely on a single railroad–and thus they can be subject to monopoly pricing. Thus, the US system continues to protect these "captive" users with price regulation. 

Notice that if governments under the European system of vertical separation did make big investments in the quality and capacity of their track, then competition would presumably work OK. As Pittman writes: 

So again, just to be fair, protecting captive shippers is the strength of the vertical separation model, because if anybody can run a train on the tracks, then if I’m a coal shipper and I don’t like the rate that Deutsche Bahn offers me, I can buy some locomotives and run my own trains, or I can try to attract some other company, PKP or maybe RZhD in the future, to run trains on the common track and protect me.

Thus, the question of how to regulate a railroad seems to turn on this issue of how to get sufficient investment in track and capacity. If, and it's a big if, the government invests enough in this way, then above-the-track competition might work pretty well. Otherwise, it may be more useful to think about the geography that will let a model of horizontal separation work, so that there are multiple railroad linkages between city-pairs with a little backup price regulation for captive customers. 

Some Evidence on Those Who Hold Multiple Jobs

There's an old grim joke about those who hold multiple jobs:

Comment: "Hey, did you hear the US economy created 100,000 new jobs last month?" 

Response: "Yeah, I'm doing three of them."

Holding multiple jobs isn't always a bad thing: for example, a number of doctors technically have one job at a private practice and another when they work at a hospital. But in the past, it has been hard to find detailed or consistent evidence on multiple job-holders. For example, household survey data often asks about one's main job, not about all jobs. 

Keith A. Bailey and James R. Spletzer of the US Census Bureau have cracked this problem by finding a way to use data from the Longitudinal Employer-Household Dynamics (LEHD). The Census Bureau has published a readable short overview of their work in "Using Administrative Data, Census Bureau Can Now Track the Rise in Multiple Jobholders" (February 3, 2021). The full September 2020 working paper, "A New Measure of Multiple Jobholding in the U.S. Economy," is at the Census Bureau website, too.

The LEHD is based on data that the government was already collecting for other purposes: for example, data on employment collected for the unemployment insurance program, data from the Quarterly Census of Employment and Wages, and other sources. However, this data was often collected for administrative purposes (like running unemployment insurance), and so the task of the LEHD is to pull together data from a variety of sources into an anonymized dataset that can be used by researchers.  

Here are some basic findings from Bailey and Spletzer: About 7-8% of the workforce holds multiple jobs, with the share tending to rise when the economy is going well and fall during recessions.  The share of people holding multiple jobs seems to be edging up over time, but slowly.  

Some other patterns: 

Women hold multiple jobs at a higher rate than men and the rate has increased in the last 20 years. In the first quarter of 2018, 9.1% of women and 6.6% of men were working more than one job. …

[M]ost multiple jobs are clustered in a few industries. Here’s the percentage of second jobs by sector:

  • 16.8% in healthcare and social assistance.
  • 16.7% in accommodation and food services.
  • 14.5% in retail trade. …

Full-quarter jobs are long-lasting, stable jobs that exist in the previous quarter, the current quarter and the following quarter. Multiple jobholders earn less. … Why do persons with multiple jobs earn less, on average, from all jobs compared to persons with only one long-lasting, stable job? Our working paper shows that this earnings differential is due to age, gender and the industries that employ multiple jobholders. …

On average, earnings on all multiple jobs account for 28% of a multiple jobholder’s total earnings ($3,780 divided by $13,550). …

One of the most striking findings is that the share of total earnings that come from multiple jobholding is above 25% for every percentile.

A basic lesson here is that those with multiple jobs depend fairly heavily on those jobs, in the sense that a quarter or more of their income comes from the additional jobs. One suspects that many of these workers are putting in a lot of total hours, but with limited access to many of the common benefits of full-time jobs like paid vacation, employer-provided health insurance, and contributions to a retirement account.  

Winter 2021 Journal of Economic Perspectives Available Online

I am now in my 35th year as Managing Editor of the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or the entire issue, and it is available in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Winter 2021 issue, which in the Taylor household is known as issue #135. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.

________________
Symposium on the Minimum Wage
\”The Elusive Employment Effect of the Minimum Wage,\” by Alan Manning
It is hard to find a negative effect on employment of rises in the minimum wage: the elusive employment effect. It is much easier to find an impact on wages. This paper argues the elusive employment effect is unlikely to be solved by better data, methodology, or specification. The reasons for the elusive employment effect are the factors contributing to why the link between higher minimum wages and higher labor costs is weaker than one might think and because imperfect competition is pervasive in the labor market.
Full-Text Access | Supplementary Materials

\”City Limits: What Do Local-Area Minimum Wages Do?\” by Arindrajit Dube and Attila Lindner
Cities are increasingly setting their own minimum wages, and this trend has accelerated sharply in recent years. While in 2010 there were only three cities with their own minimum wages exceeding the state or federal standard, by 2020 there were 42. This new phenomenon raises the question: is it desirable to have city-level variation in minimum wage policies? We discuss the main trade-offs emerging from local variation in minimum wage policies and evaluate their empirical relevance. First, we document what type of cities raise minimum wages, and we discuss how these characteristics can potentially impact the effectiveness of city-level minimum wage policies. Second, we summarize the evolving evidence on city-level minimum wage changes and provide some new evidence of our own. Early evidence suggests that the impact of the policy on wages and employment to date has been broadly similar to the evidence on state- and federal-level minimum wage changes. Overall, city-level minimum wages seem to be able to tailor the policy to the local economic environment without imposing substantial distortions in allocation of labor and businesses across locations.
Full-Text Access | Supplementary Materials

\”How Do Firms Respond to Minimum Wage Increases? Understanding the Relevance of Non-employment Margins.\” by Jeffrey Clemens
This paper discusses non-employment margins through which firms may respond to minimum wage increases. Margins of interest include evasion, output prices, noncash compensation, job attributes including effort requirements, the firm’s mix of low- and high-skilled labor, and the firm’s mix of labor and capital. I discuss the basic theory behind each margin’s potential importance as well as findings from empirical research on their real-world relevance. Additionally, I present a set of pedagogical diagrams that show how supply and demand analyses of labor markets can be extended to bring additional nuances of real-world markets into the classroom.
Full-Text Access | Supplementary Materials

\”The Rise of American Minimum Wages, 1912–1968.\” by Price V. Fishback and Andrew J. Seltzer
This paper studies the judicial, political, and intellectual battles over minimum wages from the early state laws of the 1910s through the peak in the real federal minimum in 1968. Early laws were limited to women and children and were ruled unconstitutional by the Supreme Court between 1923 and 1937. The first federal law in 1938 initially exempted large portions of the workforce and set rates that became effectively obsolete during World War II. Later amendments raised minimum rates, but coverage did not expand until 1961. The states led the way in rates and coverage in the 1940s and 50s and again since the 1980s. The most contentious questions of today—the impact of minimum wages on earnings and employment—were already being addressed by economists in the 1910s. By about 1960, these discussions had surprisingly modern concerns about causality but did not have modern econometric tools or data.
Full-Text Access | Supplementary Materials

Symposium on Polarization in the Courts

\”Estimating Judicial Ideology,\” by Adam Bonica and Maya Sen
We review the substantial literature on estimating judicial ideology, from the US Supreme Court to the lowest state court. As a way to showcase the strengths and drawbacks of various measures, we further analyze trends in judicial polarization within the US federal courts. Our analysis shows substantial gaps in the ideology of judges appointed by Republican Presidents versus those appointed by Democrats. Similar to trends in Congressional polarization, the increasing gap is mostly driven by a rightward movement by judges appointed by Republicans. We conclude by noting important avenues for future research in the study of the ideology of judges.
Full-Text Access | Supplementary Materials

\”Can Structural Changes Fix the Supreme Court?\” by Daniel Hemel
Proposals for structural changes to the US Supreme Court have attracted attention in recent years amid a perceived “legitimacy crisis” afflicting the institution. This article first assesses whether the court is in fact facing a legitimacy crisis and then considers whether prominent reform proposals are likely to address the institutional weaknesses that reformers aim to resolve. The article concludes that key trends purportedly contributing to the crisis at the court are more ambiguous in their empirical foundations and normative implications than reformers often suggest. It also argues that prominent reform proposals—including term limits, age limits, lottery selection of justices, and explicit partisan balance requirements for court membership—are unlikely to resolve the institutional flaws that proponents perceive. It ends by suggesting a more modest (though novel) reform, which would allocate two lifetime appointments per presidential term and allow the size of the court to fluctuate within bounds.
Full-Text Access | Supplementary Materials

Symposium on Economics of Higher Education

\”Staffing the Higher Education Classroom,\” by David Figlio and Morton Schapiro
We discuss some centrally important decisions faced by colleges and universities regarding how to staff their undergraduate classrooms. We describe the multitasking problem faced by research-intensive institutions and explore the degree to which there may be a trade-off between research and teaching excellence using matched student-faculty-level data from Northwestern University. We present two alternative measures of teaching effectiveness—one capturing “deep learning” and one capturing “inspiration”—and demonstrate that neither is correlated with measures of research success. We discuss the move toward contingent faculty in US universities and show that on average, contingent faculty outperform tenure-line faculty in the introductory classroom, a pattern driven by the lowest-performing instructors according to our measures. We also present some of the ways in which instructor gender, race, and ethnicity might matter. Together, these pieces of evidence show that several institutional objectives associated with staffing undergraduate classrooms may be in tension with one another.
Full-Text Access | Supplementary Materials

\”The Globalization of Postsecondary Education: The Role of International Students in the US Higher Education System,\” by John Bound, Breno Braga, Gaurav Khanna and Sarah Turner
In the four decades since 1980, US colleges and universities have seen the number of students from abroad quadruple. This rise in enrollment and degree attainment affects the global supply of highly educated workers, the flow of talent to the US labor market, and the financing of US higher education. Yet, the impacts are far from uniform, with significant differences evident by level of study and type of institution. The determinants of foreign flows to US colleges and universities reflect both changes in student demand from abroad and the variation in market circumstances of colleges and universities, with visa policies serving a mediating role. The consequences of these market mechanisms impact global talent development, the resources of colleges and universities, and labor markets in the United States and countries sending students.
Full-Text Access | Supplementary Materials

\”Why Does the United States Have the Best Research Universities? Incentives, Resources, and Virtuous Circles,\” by W. Bentley MacLeod and Miguel Urquiola
Around 1875, the US had none of the world’s leading research universities; today, it accounts for the majority of the top-ranked. Many observers cite events surrounding World War II as the source of this reversal. We present evidence that US research universities had surpassed most countries’ decades before World War II. An explanation of their dominance must therefore begin earlier. The one we offer highlights reforms that began after the Civil War and enhanced the incentives and resources the system directs at research. Our story is not one of success by design, but rather of competition leading American colleges to begin to care about research. We draw on agency theory to argue that this led to increasing academic specialization, and in turn, to more precise measures of professors’ research output. Combined with sorting dynamics that concentrated talent and resources at some schools—and the emergence of tenure—this enhanced research performance.
Full-Text Access | Supplementary Materials

Articles

\”Taxing Our Wealth,\” by Florian Scheuer and Joel Slemrod
This paper evaluates proposals for an annual wealth tax. While a dozen OECD countries levied wealth taxes in the recent past, now only three retain them, with only Switzerland raising a comparable fraction of revenue as recent proposals for a US wealth tax. Studies of these taxes sometimes, but not always, find a substantial behavioral response, including of saving, portfolio change, avoidance, and evasion, and the impact depends crucially on design features, especially the broadness of the base and enforcement provisions. Because the US proposals are very different from any previous wealth tax, experience in other countries offers only broad lessons, but we can gain insights from closely related taxes, such as the property and the estate tax, and from optimal tax analysis of the role of wealth taxation.
Full-Text Access | Supplementary Materials

\”Melissa Dell: Winner of the 2020 Clark Medal,\” by Daron Acemoglu
The 2020 John Bates Clark Medal of the American Economic Association was awarded to Melissa Dell, Professor of Economics at Harvard University, for her path-breaking contributions in political economy, economic history, and economic development. This article summarizes Melissa Dell's work, places it in the context of the broader literature, and emphasizes how, with its data collection, careful empirical implementation, and audacious ambition, it has revolutionized work in political economy and economic history.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

Why High-Income Economies Need to Fight COVID Everywhere

High-income countries are pushing and squabbling as they seek to vaccinate their own populations against COVID-19, while many lower-income countries have been pushed to the sidelines and forced to watch. But it's not yet clear (because not enough time has passed) how long the vaccine provides protection, or for that matter, how long having had COVID provides protection against getting it again. Moreover, there is clearly some danger that at least some of the new strains of COVID emerging around the world might need different vaccines. 

In short, vaccinating the populations of high-income countries is a useful step. But if COVID remains prevalent and mutating into new strains in the rest of the world, we may be running on a treadmill from a public health point of view. Moreover, because of the interconnectedness of the world economy, there is a basic cost-benefit argument for the high-income countries of the world to work together in a way that can make COVID vaccination widespread around the world. 

Cem Çakmaklı, Selva Demiralp, Ṣebnem Kalemli-Özcan, Sevcan Yeşiltaş and Muhammed A. Yıldırım make this case in \”The Economic Case for Global Vaccinations: An Epidemiological Model With International Production Networks\” (January 2021, available with free registration from the International Chamber of Commerce, and also available as NBER Working Paper #28395). 

The authors offer a useful reminder of interconnections in the world economy. Exports of goods and services are about 30% of world GDP. Of that trade, about 60% is "intermediate products," meaning products that are part of the production process for final goods, rather than being final goods themselves. The pandemic recession makes it harder for low- and middle-income countries either to purchase exports from high-income countries or to produce intermediate products used by producers in high-income countries. To illustrate, consider two figures the authors use to show the interconnectedness of the global economy. 

This figure includes 65 countries, and then a ROW or "rest of the world" box to combine the others. The sizes of the boxes are relative to the size of the GDP of each area. The darker the shade of blue, the higher the ratio of exports and imports to GDP. The thickness of the lines shows the importance of trade relative to the GDP of the two countries. Small connections between countries are not shown at all. Countries with a black line around their names have access to vaccines right now: of the 65 countries, 41 have access.

   

One can also look at the interconnectedness of global world production by industry. Here, the size of each node shows the size of the industry. The arrows show flows of goods from one industry to another. The darker the color, the more heavily that node depends on imports from other countries. 

The authors write: \”We show that even if AEs [advanced economies] eliminate the domestic costs of the pandemic thanks to the vaccines, the costs they bear due to their international linkages would be in the range of 0.2 trillion USD and 2.6 trillion USD, depending on the strength of trade and production linkages. Overall, AEs can bear up to 49 percent of the global costs in 2021. These numbers are far larger than the 27.2 billion USD cost of manufacturing and distributing vaccines globally.\”
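A quick back-of-the-envelope check, using the figures from the quoted passage, shows how lopsided the comparison is:

```python
# Comparing the paper's quoted estimates: costs borne by advanced economies
# (AEs) through trade linkages vs. the cost of vaccinating the world.

ae_cost_low, ae_cost_high = 0.2e12, 2.6e12  # USD range for AE costs in 2021
vaccine_cost = 27.2e9                       # USD to make and distribute vaccines

print(f"benefit/cost ratio: {ae_cost_low / vaccine_cost:.0f}x "
      f"to {ae_cost_high / vaccine_cost:.0f}x")
# -> benefit/cost ratio: 7x to 96x
```

Even at the bottom of the estimated range, the avoided costs are several times the price of global vaccination.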

The exact numbers from this study are of course imprecise, representing a range of different assumptions. But the basic lesson, which has applied at many points in the pandemic experience, is clear enough: what might seem like fairly large up-front costs actually are a pretty cheap price to pay for the benefits. It's not really a surprise that high-income countries would put a higher priority on vaccine supplies for their own citizens first. But if the high-income countries think that protecting their own citizens will insulate them from economic costs, or from future public health risks, they are missing some basic insight about what it means to live in a world where goods and people cross national boundaries all the time. 

Why Have Other High-Income Countries Dropped Wealth Taxes?

Advocates of a wealth tax for the United States need to confront a basic question: Why have other high-income countries decided to drop their own wealth taxes? Sarah Perret explores this issue in "Why did other wealth taxes fail and is this time different?" (Wealth and Policy Commission, Working Paper #6, 2020). Perret writes:  

In 1990, there were twelve OECD countries, all in Europe, that levied individual net wealth taxes. However, most of them repealed their wealth taxes in the 1990s and 2000s, including Austria (in 1994), Denmark and Germany (in 1997), the Netherlands (in 2001), Finland, Iceland, and Luxembourg (in 2006) and Sweden (in 2007). Iceland, which had abolished its wealth tax in 2006, reintroduced it as a temporary ‘emergency’ measure between 2010 and 2014. Spain, which had introduced a 100% wealth tax reduction in 2008, reinstated the wealth tax in 2011. The reinstatement of the wealth tax was initially planned to be temporary but has been maintained since. France was the last country to repeal its wealth tax in 2018, replacing it with a tax on high-value immovable property. In 2020, Norway, Spain and Switzerland were the only OECD countries that still levied individual net wealth taxes.

The basic story, as Perret tells it, is that countries which gave up on the wealth tax largely decided that it was possible to tax wealth in other ways, and that defining how to tax \”wealth\” was running into enough problems of exemptions and exceptions that it wasn\’t worth the relatively small amount of revenue being raised. 

There are a number of taxes that work in a way similar to a wealth tax. A property tax (like the French tax on \”high-value immovable property\” mentioned above) is a tax on one kind of wealth. An estate tax or a gift tax is a tax on wealth. A capital gains tax can be, on average, similar to a wealth tax as well. Perret writes:  

In some ways, a wealth tax is similar to a tax on capital income. For instance, if an individual taxpayer has a total net wealth of EUR 10 million that earns a rate of return of 4%, the tax liability will be the same whether the government levies a tax of 30% on the capital income of EUR 400,000 or a wealth tax of 1.2% on the capital stock of EUR 10 million. Both will end up raising EUR 120,000. A capital income tax of 30% is thus equivalent to a wealth tax of 1.2% where the rate of return is 4%.

However, as Perret also notes, the wealth tax applies every year, whether you have a capital gain that year or not, which means that the incentive effects of such taxes may be quite different. 
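The equivalence in the quoted example can be checked with a few lines of arithmetic. Here is a minimal sketch; the 4% return, 30% income-tax rate, and 1.2% wealth-tax rate are the stylized assumptions from Perret\’s example, not estimates of any actual tax system:

```python
# Stylized example: EUR 10 million of net wealth earning a steady 4%
# return, taxed either as capital income (30%) or as a stock of
# wealth (1.2%). Both routes raise the same revenue.
wealth = 10_000_000            # EUR, net wealth
rate_of_return = 0.04          # assumed steady annual return

capital_income = wealth * rate_of_return        # EUR 400,000 of income
capital_income_tax = 0.30 * capital_income      # 30% tax on the income
wealth_tax = 0.012 * wealth                     # 1.2% tax on the stock

print(round(capital_income_tax))  # 120000
print(round(wealth_tax))          # 120000
```

The equivalence holds only on average: it assumes every euro of wealth earns the same steady return, which is exactly the assumption that breaks down in the risky-investment cases discussed below.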

One reason why Switzerland has a relatively large wealth tax is that it has no taxes on capital gains, and most cantons of Switzerland have no inheritance or gift taxes, either. Most countries use these other policy tools rather than a wealth tax; the Swiss use a wealth tax rather than these other policy tools.

Most of the countries that gave up on their wealth taxes were collecting about 0.2-0.3% of GDP with those taxes–in countries where government tax revenues were often 40% of GDP or more. Part of the reason for what may seem like a low level of revenue is that wealth taxes are typically built chock-full of exemptions. 

The first and most obvious exemption is that wealth taxes typically apply only to those above a certain threshold level of wealth. Perret explains (citations omitted): 

In some countries, the wealth tax is (or was) levied only on the very wealthy (e.g. France and Spain). Before repealing its wealth tax, France had the highest tax exemption threshold, taxing individuals and households with net wealth equal to or above EUR 1.3 million which meant that only around 360,000 taxpayers were subject to it in 2017. In other countries, wealth taxes have applied to a broader range of taxpayers. In Norway, the tax exemption threshold is approximately EUR 150,000 per person. In Switzerland, despite variations across cantons, tax exemption thresholds are comparatively low: in 2018, they ranged from USD 55,000 in the canton of Jura to USD 250,000 in the canton of Schwyz (for married couples) (Scheuer and Slemrod, 2020). Thus, Swiss wealth taxes affect a large portion of the middle class. 

Then think of all the different ways that one might hold wealth, and how those with a lot of resources might switch between them. For example, most countries exempted from wealth taxation the wealth held in the form of an expected pension or a retirement account. Many countries exempted the value of the house you live in. Many countries exempted the value of a family-run business. They exempted investments in certain collectibles like paintings and jewelry. Some exempted life insurance policies, donations to charitable foundations, and trusts set up for future generations. There are typically exceptions for the wealth embodied in intellectual property, like ownership of a patent or copyright, although such rights can be bought and sold. All of these factors can be further complicated if one takes into account wealth that is built up in other countries. 

There is solid evidence that people with a lot of wealth and a lot of lawyers can be very flexible in shifting their wealth between these and other forms. Sure, it\’s possible to send government lawyers and accountants off to do battle on these issues, but remember that we\’re talking here about a tax with relatively small total revenues, and if most of the money collected from the tax goes to enforcing the tax, it\’s not worth it.  

The evidence that wealthy people move their money around into different types of ownership is strong, but the evidence on how a wealth tax might affect incentives for savings, investment, and entrepreneurship is not especially strong in either direction. To understand the incentives of a wealth tax, return to the point made earlier: a wealth tax applies every year, whether wealth rises or not. 

Imagine that an annual wealth tax is 2% of wealth (for those above some threshold amount of wealth). Thus, if the value of your wealth increased by 5% in a given year, and the wealth tax takes 2%, then the after-tax gain on your wealth would be 3%. If the value of your wealth fell by 10% in a given year (say, the stock market declines), then the wealth tax still takes 2% and your after-tax loss would be 12%. 
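The arithmetic of this hypothetical 2% tax can be sketched in a few lines (a toy illustration of the example above, not a model of any actual tax schedule):

```python
# Hypothetical flat 2% annual wealth tax, owed whether the year's
# return on wealth is positive or negative.
WEALTH_TAX_RATE = 0.02

def after_tax_return(gross_return: float) -> float:
    """Return on wealth after subtracting the flat wealth tax."""
    return gross_return - WEALTH_TAX_RATE

print(round(after_tax_return(0.05), 4))   # 0.03: a 5% gain becomes a 3% gain
print(round(after_tax_return(-0.10), 4))  # -0.12: a 10% loss becomes a 12% loss
```

Notice that the tax shifts every outcome down by the same two percentage points, which is why its bite falls hardest, in proportional terms, on low-return and negative-return years.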

In this situation, imagine a person whose wealth is invested in a low-risk, low-return activity (like government bonds). Especially in the current setting of low interest rates, that person could end up paying all of the return to the wealth tax. Do we want to give the wealthy a strong disincentive to choose low-risk, low-return investments? 

Conversely, imagine a person investing in a high-risk entrepreneurial activity that might have either high or low returns. Say that the entrepreneurial attempt fails, and the investment loses half its value. The wealth tax still applies! Do we want to tax failed entrepreneurs? Or imagine that the entrepreneurial attempt succeeds, and the entrepreneur wants to use the additional wealth to build and expand a company. Do we want to impose an additional tax, above existing income and capital gains taxes, on successful entrepreneurs? 

Again, the evidence on how real behavior is affected by a wealth tax isn\’t clear. But it was clear that some of the real incentives of a wealth tax could be undesirable. The more a country tried to write up a set of rules to specify exactly what wealth should be taxed, or not, the more opportunities there were to sidestep such a tax with lawyer-driven financial planning. 

Perret\’s paper was written as a background report for the UK Wealth Tax Commission. The final report of the commission, A Wealth Tax for the UK (December 2020, written by Arun Advani, Emma Chamberlain, and Andy Summers) proposes a one-time wealth tax to pay for some of the cost of the pandemic recession. The advantage of a one-time wealth tax is that if it is both unexpected and one-time, it\’s difficult for those with high wealth to rearrange their finances and property-ownership in ways that would limit the bite of the tax. Of course, if the wealth tax is either expected or seems likely to be extended over time, this advantage diminishes. 

The report itself is fairly UK-centric, but those interested in wealth taxes in general will find a treasure trove of working papers at the website, both on specific topics related to the wealth tax and on country-by-country European experiences with such a tax.

Interview with Benjamin Friedman on Religion, Economic Growth, and Much Else

Tyler Cowen has one of his \”Conversations\” with \”Benjamin Friedman on the Origins of Economic Belief\” (January 27, 2021, audio, video, and transcript at the link). It starts from Friedman\’s recent book, Religion and the Rise of Capitalism, but then expands to touch on many other topics.  Here are some snippets: 

On American belief in religion and markets

[C]ompared to other high-income countries, Americans are more inclined to say they believe in religious doctrines, and the Americans are much more likely to participate in religious activities — going to church, for example.

I think this is not at all unrelated to the fact that Americans have a deeper and more active commitment to the ideas of market competition. It goes back to the question that I was raising a minute ago of where do our ideas of market competition come from in the first place? What I suggest in my book is that we got those ideas 200, 300 years ago from what were then new and hotly contended ideas about religion, theology within the English-speaking Protestant world, in which people like Adam Smith and David Hume lived. …

[T]here’s a part of the story that you’re missing, and that has to do with the coming together at mid–20th century in America, of religious conservatism and economic conservatism. I think the catalyst that brought them together was the existential fear of world communism. … Communism, at least as advocated at that time, had a unique feature of being simultaneously the existential enemy of lots of things that we hold dear. It was the enemy of Western-style political democracy, but it was also the enemy of Western-style market capitalism, and importantly for purposes of this line of argument, it was the enemy of Western-style religion.

I think the religious conservatives and the economic conservatives realized that they had an enemy in common, and they took the threat seriously, and this led them to come together. The person I think who played the greatest role in bringing them together was Bill Buckley. … [U]nder the leadership of Buckley and also ministers — I think of Billy Graham as being very active in this process, but others too — religious conservatism and economic conservatism came together at mid–20th century America, and I think they’ve been together ever since.

On how economic growth nourishes democracy

The hypothesis that I offered in that book very importantly was about the rate-of-growth effect. It’s about people realizing that they are living better than their parents lived. It’s about people realizing that the opportunities for their children are better than the opportunities that they had. It’s very much a rate-of-growth effect. The reason I say that’s important …  is twofold.

One, it was a warning against our being complacent. It’s a warning that no matter how rich our society is, if we get into a situation in which large numbers of people feel that they no longer have a sense of forward progress in their material lives, and they don’t see that turning around anytime soon, and they don’t have optimism either that their children will face a better economic future, that’s the circumstance under which people turn away from these small-l liberal, small-d democratic values, like tolerance and respect for diversity, generosity, openness of opportunity, even respect for democratic political institutions. … 

Then, at the same time, the second implication of the fact that it’s a rate-of-growth argument is some optimism that countries around the world, where income levels are far below ours, don’t have to wait until they reach our level of income before they can develop into liberal democracies.

Why the US government should borrow at longer debt maturities

Now, if I can put in a plug for a policy idea, this is why I have been recommending to anybody who will listen that this is a great time for the United States Treasury to lengthen out the maturity of the US government’s outstanding debt. The average maturity of the outstanding debt is around six years, and we happened to have super-low interest rates, it would be lovely to think that they’re going to be here forever.

Some economists think they will be. I’m more cautious. I would use the current market environment as a way to lock in those interest rates because we know that we’re going to have a high government debt level for a very long period of time and much better it not be a burden.

Is it time to redesign the Federal Reserve? 

I don’t think anybody today would design the Federal Reserve in the way in which it was designed a hundred years ago, with a board of governors, and these 12 regional reserve banks, and these presidents of the regional reserve banks who are not confirmed by the Senate playing the key role in establishing monetary policy.

That said, I would leave it alone because my sense is that, with the current politics of our country, I am sorry to say, I think if we open that box, we are much more likely to come out with something that’s worse than what we have. So, while I don’t know anybody who would defend, in the abstract, the structure we’ve got, I wouldn’t change it, at least not now.

The Importance of Unimportant Inventions

 Here\’s a paradox: If your firm produces a good or service that is a large part of the overall value of a final product, you may have a hard time keeping your price high or raising it. After all, precisely because you are a large part of total costs, any attempt to raise the price will be noticeable when passed on to customers, and those using your product have a strong incentive to take a tough line when negotiating terms. On the other side, if your firm produces a good or service that is necessary to a final product, but only a small share of total costs, you may be much more successful in keeping your price high or raising it further. When the price of your input climbs, it makes a relatively small difference to total costs. Aggressive competition from others to supplant your market niche may be less likely. Thus, it may be important to be unimportant, and fortunes can be made from unimportant inventions.  

Edward Tenner offers a meditation on this topic in \”The Importance of Being Unimportant\” (Milken Institute Review, First Quarter 2021). Tenner\’s title is apparently lifted from a section in Sir Hubert Henderson’s short textbook, previously unknown to me,  Supply and Demand (1922).

(Side note: John Maynard Keynes wrote a brief \”Introduction\” to Henderson\’s book. Here are the opening two paragraphs: 

The Theory of Economics does not furnish a body of settled conclusions immediately applicable to policy. It is a method rather than a doctrine, an apparatus of the mind, a technique of thinking, which helps its possessor to draw correct conclusions. It is not difficult in the sense in which mathematical and scientific techniques are difficult; but the fact that its modes of expression are much less precise than these, renders decidedly difficult the task of conveying it correctly to the minds of learners.

Before Adam Smith this apparatus of thought scarcely existed. Between his time and this it has been steadily enlarged and improved. Nor is there any branch of knowledge in the formation of which Englishmen can claim a more predominant part. It is not complete yet, but important improvements in its elements are becoming rare. The main task of the professional economist now consists, either in obtaining a wide knowledge of relevant facts and exercising skill in the application of economic principles to them, or in expounding the elements of his method in a lucid, accurate and illuminating way, so that, through his instruction, the number of those who can think for themselves may be increased.

I have often quoted the opening two lines to those who want to know if economics systematically leads to liberal or conservative policy conclusions. But I also love the comment from Keynes in 1922–that is, well before he wrote the General Theory–saying that in the subject of economics, \”important improvements in its elements are becoming rare.\”)

Tenner offers some vivid examples of important unimportant inventions. For example, the valve is quite an important part of a tire, but a relatively small part of total cost. Back in the 1890s, August Schrader and his son George not only built a better valve, but also standardized the size of valves so that valves could connect with a variety of air pumps. It\’s why I can pump up my car tires or my bicycle tires with either a hand-pump in my garage or a mechanical pump at a gas station. 

Or consider Jack and Belle Linsky, the founders of Swingline Corp., who invented the modern stapler. This was really two inventions: one was the way of gluing together a set of staples in a row so that they could easily be handled; the other was the stapler with a spring in the top that could open easily, accept the row of staples, and then align them for use. Before this invention, staples were individual loose objects. Tenner writes: \”Virtually no manufactured object costs less than a staple. Yet this humble device so enriched the Linskys that they were able to compete successfully with the Queen of England in auctions for decorative arts.\”

One more example originally due to Henderson is sewing thread–that is, not the threads used in making textiles, but the thread used in sewing textiles together for their eventual use. Tenner points to a prominent 1977 study which argues that the social rate of return to industry-level investment in thread innovation was the highest of the sectors studied. Patrick and James Clark started the Clark Thread Company in Paisley, Scotland in the 1750s. Patrick is credited with inventing the idea that thread could be sold on a spool. Today, the \”multinational Coats Group Ltd. remains the world’s largest manufacturer of industrial sewing threads.\” Historical fortunes from Friedrich Engels to the Cartier-Bresson family were built in part on profits from superior threads.

It\’s interesting to think about the modern equivalents of these important unimportant inventions. I remember some years back reading about a highly successful Silicon Valley company (whose name escapes me) which focused on making items like the little flashing red and yellow and green lights that everyone else used in their equipment. 

This insight applies at the level of job choices as well. Henderson\’s book back in 1922 pointed out that there were a number of different jobs in the process of making steel, including coal miners and smelting workers. There were many, many more coal miners, but precisely because their salaries were such a large part of costs, they had to push very hard for pay increases. Meanwhile, the smelting worker found it much easier to gain higher wages. Thus, the career advice is to look for a job niche where you (and those with similar skills) are essential to the final product, but not the main driver of final costs. 

Could Environmentalists Just Buy What They Want?

Under a \”marketable permits\” approach to controlling pollution, firms have permits to emit a certain amount of pollution. If a firm has extra permits, it can sell them; if it needs additional permits, it can buy them. The idea is that firms that have lower-cost methods of reducing pollution now have an incentive to do so, because they can sell the permits. 

But here\’s a question that\’s obvious to economists: Could an environmental group purchase a bunch of these pollution permits and not use them or sell them, just for the purposes of reducing pollution? 

Similarly, what if the government auctions off a bunch of oil leases? Could an environmentalist group purchase them, and not drill? Or what if the government auctions off leases for grazing cattle or cutting timber on federal lands? Could an environmentalist group bid on the leases but then not graze cattle or cut timber? Or what if the government gives out a limited number of permits for hunting a certain animal, perhaps by lottery? Can a bunch of environmentalists flood into the lottery, get some share of the permits, and then not hunt?  

Shawn Regan tackles this question in \”Why Don’t Environmentalists Just Buy What They Want to Protect?\” and gives an answer in the subtitle \”Because it’s often against the rules\” (PERC Reports, Winter 2020, pp. 15-23). He writes: 

Technically, any U.S. citizen can bid for and hold leases for energy, grazing, or timber resources on public lands. But legal requirements often preclude environmentalists from participating in such markets. Federal and state rules typically require leaseholders to harvest, extract, or otherwise develop the resources, effectively shutting those who want to conserve resources out of the bidding process. Energy leasing regulations, for example, require leaseholders to extract the resources beneath their parcels. If they don’t, the leases could be canceled.

Historically, there is some rationale for these rules. A local economy may depend on cattle grazing or timber or oil and gas extraction. Environmentalists can buy out private owners: say, buying oil and gas leases or grazing rights that are privately held. But environmentalists are not allowed to buy rights on public lands and leave them unused. By setting up a lease program, the government has agreed that these are suitable uses for the land.

In the case of hunting permits, the number given out is often calculated based on what wildlife biologists think is useful for the long-run success of the species. In this case, environmentalists who try to gather up such permits to prevent hunting are opposing \”the science.\” Timber rights are often given out based partly on the notion that letting dead wood pile up can create a landscape that will at some point suffer a devastating fire. Appropriate land management is a controversial topic. 

Environmentalists are increasingly trying to purchase government leases. Regan points out: 

But increasingly, environmentalists are testing the strategy of bidding for the rights to natural resources instead. In recent years, activists have attempted to acquire oil and gas rights in Utah, buy out ranchers’ public grazing permits in New Mexico, purchase hunting tags in Wyoming to stop grizzly bears from being killed, and bid against logging companies in Montana to keep trees standing.

Behind the scenes, grazing and timber rights have been declining over time. Regan reports: \”Livestock grazing on federal lands has declined more than 50 percent since the 1950s, in part due to environmental regulations that have weakened ranchers’ grazing privileges and pitted them against environmentalists in zero-sum legal fights. Likewise, timber harvests on federal lands have fallen nearly 80 percent since the 1980s.\” In some cases, it may be more productive for environmentalists to focus on whether leases will be granted at all, rather than engaging in a legal battle to purchase them. 

It\’s worth noting that use-it-or-lose-it property rights are not unheard-of in other contexts. For example, water rights in much of the western United States have traditionally operated under a use-it-or-lose-it legal regime. As economists have been quick to point out, this approach is often not good for incentives to conserve water, because a farmer who finds a way to use less water simply loses their legal right to that water. But it also means that if an environmentalist purchased a western farm and did not use the water rights, the farm would lose those rights. Use-it-or-lose-it property rights are common in the workplace, too, in companies where if you don\’t use your vacation or your sick leave or other benefits in a given year, they disappear and do not carry over into the following years. 

As Regan writes: \”It’s clear that many people value conservation and are willing to spend their own money to get it. The only question is whether those resources will be channeled through zero-sum political means or through positive-sum market mechanisms. In any case, if competing groups cannot directly acquire or trade rights through markets, whether for use or non-use purposes, the only option is to fight it out in the political and legal arenas.\”

The Reproducibility Challenge with Economic Data

One basic standard of economic research is surely that someone else should be able to reproduce what you have done. They don\’t have to agree with what you\’ve done. They may think your data is terrible and your methodology is worse. But as a minimal standard, they should be able to reproduce your result, so that the follow-up research can then be in a position to think about what might have been done differently or better.  This standard may seem obvious, but during the last 30 years or so, the methods for reproducibility have been transformed. 

Lars Vilhuber describes the shift in \”Reproducibility and Replicability in Economics\” in the Harvard Data Science Review (Fall 2020 issue, published December 21, 2020). Vilhuber is the Data Editor for the journals published by the American Economic Association (including the Journal of Economic Perspectives where I work as Managing Editor). Thus, he heads the group which oversees posting of data and code for new empirical results in AEA journals–including making sure that an outsider can use the data and code to reproduce the actual results reported in the paper. 

To jump to the bottom line, Vilhuber writes: \”Still, after 30 years, the results of reproducibility studies consistently show problems with about a third of reproduction attempts, and the increasing share of restricted-access data in economic research requires new tools, procedures, and methods to enable greater visibility into the reproducibility of such studies.\”

It\’s worth noting that reproducibility has come a long way. Back in the 1980s and earlier, researchers who had completed a published empirical research paper, but then moved on to other topics, often did not keep their data or code–or if they did keep them, the data and code were often full of idiosyncratic formats and labelling that worked fine for the original researcher (or perhaps for the research assistants of the original researcher who did a lot of the spadework), but could be impenetrable to a would-be outside replicator. By contrast, a fair share of modern economics research can post the actual data, computer code, documentation for what was done, and so on. In this situation, you may disagree with how the researcher chose to proceed, but you can at least reproduce their result easily. 

However, here I want to emphasize that a lot of the difficulties with reproducibility arise because finding the actual data used in an economic study is not as easy as one might think. Non-economists often think of economic data in terms of publicly available data series like GDP, inflation, or unemployment, which anyone can look up on the internet. But economic research often goes well beyond these extremely well-known data sources. One big shift has been to the use of \”administrative\” data, which is a catch-all term to describe data that was not collected for research purposes, but instead developed for administrative reasons. Examples would include tax data from the Internal Revenue Service, data on earnings from the Social Security Administration, data on details of health care spending from Medicare and Medicaid, and education data on teachers and students collected by school districts. There is also private-sector administrative data about issues from financial markets to cell-phone data, credit card data, and \”scanner\” data generated by cash registers when you, say, buy groceries. 

Vilhuber writes: \”In 1960, 76% of empirical AER [American Economic Review] articles used public-use data. By 2010, 60% used administrative data, presumably none of which is public use …\”

You can\’t just write to, say, the Internal Revenue Service and ask to see all the detailed data from tax returns. Nor can you directly access detailed data from Social Security or Medicare or a school district, or from what people reported in the US Census. There are obvious privacy concerns here. 

Thus, one change in recent years is what are called \”restricted access data environments,\” where accredited researchers can get access to detailed data, but in ways that protect individual privacy. For example, there are now 30 Federal Statistical Data Research Centers around the country, mostly located close to big universities.  Vilhuber writes (citations omitted): 

It is worth pointing out the increase in the past 2 decades of formal restricted-access data environments (RADEs), sponsored or funded by national statistical offices and funding agencies. RADE networks, with formal, nondiscriminatory, albeit often lengthy access protocols, have been set up in the United States (FSRDC), France, and many other countries. Often, these networks have been initiated by economists, though widespread use is made by other social scientists and in some cases health researchers. RADE are less common for private-sector data, although several initiatives have made progress and are frequently used by researchers: Institute for Research on Innovation and Science, Health Care Cost Institute , Private Capital Research Institute (PCRI). When such nondiscriminatory agreements are implemented at scale, a significant number of researchers can obtain access to these data under strict security protocols. As of 2018, the FSRDC hosted more than 750 researchers on over 300 projects, of which 140 had started within the last 12 months. The IAB FDZ [a source of German employment data] lists over 500 projects active as of September 2019, most with multiple authors. In these and other networks, many researchers share access to the same data sets, and could potentially conduct reproducibility studies. Typically, access is via a network of secure rooms (FSRDC, Canada, Germany), but in some cases, remote access via ‘thin clients’ (France) or virtual desktop infrastructure (some Scandinavian countries, data from the Economic Research Service of the United States Department of Agriculture [USDA] via NORC) is allowed.

A common situation is that this kind of data often cannot be put into the public domain; instead, you would need to apply and to gain access to the \”restricted access data environment,\” and access the data in that way. 

Another issue is that in some of these data sources, researchers are not given access to all of the data; instead, to protect privacy, they are given an extract of the overall data. As a result, two researchers who go to the data center and make the same data request will not get the same data. The overall patterns in the data should be pretty close, if random samples are used, but they won\’t be the same. Vilhuber writes: 

Some widely used data sets are accessible by any researcher, but the license they are subject to prevents their redistribution and thus their inclusion as part of data deposits. This includes nonconfidential data sets from the Health and Retirement Study (HRS) and the Panel Study of Income Dynamics (PSID) at the University of Michigan and data provided by IPUMS at the Minnesota Population Center. All of these data can be freely downloaded, subject to agreement to a license. IPUMS lists 963 publications for 2015 alone that use one of its data sources. The typical user will create a custom extract of the PSID and IPUMS databases through a data query system, not download specific data sets. Thus, each extract is essentially unique. Yet that same extract cannot be redistributed, or deposited at a journal or any other archive. [Footnote: For IPUMS, extracts from population samples (e.g., the 5% sample of the U.S. population census) rather than full population censuses (the 100% file) can be provided to journals for the purpose of replication.] In 2018, the PSID, in collaboration with ICPSR, has addressed this issue with the PSID Repository, which allows researchers to deposit their custom extracts in full compliance with the PSID Conditions of Use.
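The point about unique extracts can be illustrated with a toy simulation using synthetic data (this has no relation to the actual PSID or IPUMS query systems): two researchers draw independent random samples from the same underlying population, and end up with extracts whose summary statistics are close but whose contents differ.

```python
import random

random.seed(0)  # deterministic for illustration

# Synthetic "population" of 100,000 incomes (not real survey data).
population = [random.gauss(50_000, 20_000) for _ in range(100_000)]

# Two researchers each request a 5,000-observation extract.
extract_a = random.sample(population, 5_000)
extract_b = random.sample(population, 5_000)

mean_a = sum(extract_a) / len(extract_a)
mean_b = sum(extract_b) / len(extract_b)

# The means are close, but the extracts are not the same data set.
print(abs(mean_a - mean_b) < 5_000)            # True: overall patterns similar
print(sorted(extract_a) == sorted(extract_b))  # False: the extracts differ
```

This is why depositing the exact extract used in a paper, as the PSID Repository now allows, matters for reproducibility: rerunning the same query against the full database does not recover the same data.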

Yet another issue arises with data from commercial sources, which often require a fee to access: 

Commercial (‘proprietary’) data is typically subject to licenses that also prohibit redistribution. Larger companies may have data provision as part of their service, but providing it to academic researchers is only a small part of the overall business. Dun and Bradstreet’s Compustat, Bureau van Dijk’s Orbis, Nielsen Scanner data via the Kilts Center at Chicago Booth (Kilts Center, n.d.), or Twitter data are all used frequently by economists and other social scientists. But providing robust and curated archives of data as used by clients over 5 or more years is typically not part of their service.

Research using social media data can pose particular problems for someone who wants to reproduce the study using the same data:

Difficulties when citing data are compounded when the data is either changing, or is a potentially ill-defined subset of a larger static or dynamic databases. ‘Big data’ have always posed challenges—see the earlier discussion of the 1950s–1960s demand for access to government databases. By nature, they most often fall into the ‘proprietary’ and ‘commercial’ category, with the problems that entails for reproducibility. However, beyond the (solvable) problem of providing replicators with authorized access and enough computing resources to replicate original research, even defining or acquiring the original data inputs may be hard. Big data may be ephemerous by nature, too big to retain for significant duration (sometimes referred to as ‘velocity’), temporally or cross-sectionally inconsistent (variable specifications change, sometimes referred to as ‘variety’). This may make computational reproducibility impossible. … For instance, a study that uses data from an ephemerous social media platform where posts last no more than 24 hours (‘velocity’) and where the data schema may mutate over time (‘variety’) may not be computationally reproducible, because the posts will have been deleted (and terms of use may prohibit redistribution of any scraped data). But the same data collection (scraping or data extraction) can be repeated, albeit with some complexity in reprogramming to address the variety problem, leading to a replication study.

Finally, there is the problem of "cleaning" data. "Raw" data always has errors. Sometimes entries are missing. Other times the data shows a nonsensical value, like someone having negative income in a year, or an entry where it looks as if several zeros were accidentally added to a number. Thus, the data needs to be "cleaned" before it's used. For well-known data sets, there are archives of documentation explaining how the data has been cleaned, and why. But for lots of data, that documentation isn't available. Vilhuber writes:

While in theory, researchers are able to at least informally describe the data extraction and cleaning processes when run on third-party–controlled systems that are typical of big data, in practice, this does not happen. An informal analysis of various Twitter-related economics articles shows very little or no description of the data extraction and cleaning process. The problem, however, is not unique to big-data articles—most articles provide little if any input data cleaning code in reproducibility archives, in large part because provision of the code that manipulates the input data is only suggested, but not required by most data deposit policies.
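The kinds of cleaning checks described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the field names, sample records, and the plausibility threshold are all my own assumptions, not drawn from any actual survey or from Vilhuber's paper.

```python
# Hypothetical survey records illustrating common "raw" data problems:
# a missing entry, a nonsensical negative income, and a value so large
# it suggests extra zeros were typed by accident.
records = [
    {"id": 1, "income": 52000},
    {"id": 2, "income": None},        # missing entry
    {"id": 3, "income": -4800},       # nonsensical negative income
    {"id": 4, "income": 55000000},    # possible accidental extra zeros
]

def clean(rows, plausible_max=1_000_000):
    """Separate usable rows from flagged ones, recording the reason for each flag."""
    kept, flagged = [], []
    for row in rows:
        income = row["income"]
        if income is None:
            flagged.append((row["id"], "missing income"))
        elif income < 0:
            flagged.append((row["id"], "negative income"))
        elif income > plausible_max:
            flagged.append((row["id"], "implausibly large; possible typo"))
        else:
            kept.append(row)
    return kept, flagged

kept, flagged = clean(records)
print(len(kept))   # 1
print(flagged)
```

The point of logging a reason for each flagged row is exactly the documentation issue Vilhuber raises: without a record of which observations were dropped and why, later researchers cannot reproduce the cleaned data set from the raw one.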

As a final thought, I'll point out that academic researchers have mixed incentives when it comes to data. They always want access to new data, because new data is often a reliable pathway to published papers that build a reputation and a paycheck. They often want access to the data used by rival researchers, to understand and to critique their results. But making the details of their own data widely available doesn't necessarily help them much.

For example, imagine that you write a prominent academic paper, and all the data is widely available. The chances are good that for years to come, your paper will become target practice for economics students and younger faculty members, who will critique your results and press you to justify every choice you made in the research. You may have a reasonable dislike of spending large chunks of the rest of your career going over the same ground, again and again.

From this standpoint, it's perhaps not surprising that while many leading economics journals now require authors to publish their computer code and as much of their data as they are allowed to, the number of papers granted "exceptions" to publishing their data is rising. Moreover, supplying data and computer code is not part of what is required for submitting a paper or for the decision about publishing it (although the referees evaluating the paper can request to see the data and code, if they wish).

It\’s also maybe not a surprise that a study of one prominent journal looked at papers published from 2009 to 2013 and found that of the papers where data was not posted online, only about one-third of the papers had data where it was reasonably straightforward for others to obtain the data. 

And it\’s also maybe not a surprise that more and more papers are published with data that you have to be an official researcher to access, through a restricted access data center, which presents some hurdles to those not well-connected in the research community. 

Access to data and computer code behind economic research has improved, and improved a lot, since the pre-internet age. But in many cases, it's still far from easy.

Interview with Valerie Ramey: Fiscal Policy, Time Use, and More

David A. Price serves as interlocutor in an "Interview with Valerie Ramey: On fiscal stimulus, technological lull, and the rug-rat race" (Econ Focus: Federal Reserve Bank of Richmond, Fall 2020, pp. 20-24). Here are a couple of excerpts:

On looking at news records to measure historical effects of fiscal stimulus

I started looking at news records when I realized that changes in government spending are often announced at least several quarters before the government spending actually occurs. That's really important, because the empirical techniques that researchers were using previously to measure the effect of government spending implicitly assumed that any change in government spending was essentially unanticipated. But our models tell us that individuals and firms are forward-looking and therefore will react as soon as the news arrives about a future event. This means that the previously used techniques had the timing wrong and therefore couldn't accurately estimate the effects of government spending. …

One historical case was the start of World War II, when Germany invaded Poland in September 1939. Events happened over the subsequent months that kept changing expectations. Even though the United States was supposedly not going to enter the war, many businesses knew that they would be increasing their production of defense goods and people knew the military draft was coming because FDR was making many speeches on the importance of building up defenses. To assess the effects of spending, it was important to figure out the exact timing of when the news arrived about future increases in government spending rather than when the spending actually occurred.

You may wonder whether individuals and businesses really do change their behavior based on anticipations of future changes. A perfect example is the start of the Korean War in June 1950, when North Korea invaded South Korea. Consumers, who remembered the rationing of consumer durable goods during World War II, and firms, which remembered the price controls, reacted quickly: Consumers immediately went out and bought consumer durables like refrigerators and washing machines, and firms immediately started raising their prices. All of this happened before there were any changes in government spending or any policies on rationing or price controls.

On reasons for rising time spent on child care since the 1990s

[O]ne of the puzzling things I saw was that the amount of time that people, particularly women, spent on domestic work was going down in almost every category — cleaning their houses, cooking, and chores — except for child care. Time spent on child care had been falling in the 1970s and 1980s but then started rising in the 1990s. Trends in time spent on child care were a puzzle because they looked so different from other home production categories. … 

Since the 1980s, the propensity to go to college has risen, in large part because of the rise in wages of a college graduate relative to a high school graduate. However, the numbers of students applying to college didn't increase much from 1980 to the early 1990s because there had been a baby bust 18 years earlier. In the second half of the 1990s, the number of students applying to college rose significantly because of a previous baby boomlet. Thus, the demand for college slots rose in the mid-1990s.

The result was what John Bound and others have called cohort crowding. They found that the better the college, the less elastic the supply of slots to the size of the cohort trying to be admitted. For instance, Harvard and Yale barely change how many students they admit to their entering class. The flagship public universities are a little bit more elastic, but they're not elastic enough to keep up with the demand to get into those top colleges. …

Our hypothesis was that during earlier times when you didn't have this cohort crowding, most college-educated parents felt as though their kids could get into a good college. So they were pretty relaxed about it. But then as you started having the cohort crowding, the parents became more competitive and put more effort into polishing their children's resumes because they realized it was harder and harder to get into the top colleges.