With the Rise of Index Funds, Who Watches the Companies?

A standard argument for the social usefulness of the stock market is that shareholders have an incentive to monitor and to scrutinize the companies in which they have invested. When this incentive is combined with requirements for firms to disclose information, to be audited, and to answer questions from shareholders–along with the ultimate power of shareholders to replace top executives–publicly-owned corporations must live an examined life. One can have honest arguments over how well this shareholder monitoring works. But the rise of index funds is a direct challenge to these arguments.

Index funds, for the uninitiated, seek only to mirror the performance of the overall stock market (as measured, for example, by an index like the S&P 500 or the Russell 3000). Three big companies dominate the market for index funds: Vanguard, BlackRock, and State Street Global Advisors (commonly called SSGA). An index fund is a passive investor, and it can be set up as an automated investor. Indeed, one main reason why an index fund can charge such low fees is that it does very little monitoring of companies, because it doesn’t pick and choose between companies.

The combination of very low fees and a return that matches the overall stock market can be an excellent deal for everyday investors, like me, deciding how to invest money in their retirement accounts. Indeed, the legendary investor Warren Buffett has recommended low-cost index funds to everyday investors, and his will instructs that his legacy to his wife be managed as a low-cost index fund. John Bogle, who created the first prominent index fund at Vanguard, became a folk hero to many investors. But while investors have been moving in the direction of index funds, the issue of what happens to a stock market with substantially less monitoring has received less attention.

Lucian Bebchuk and Scott Hirst have been writing a series of essays on this topic. In “The Specter of the Giant Three,” published earlier this year in the Boston University Law Review (2019, 99:3, pp. 721-742), they lay out some facts and estimates about the growth of the Big Three. They write:

This Article analyzes the steady rise of the “Big Three” index fund managers—BlackRock, Vanguard, and State Street Global Advisors (“SSGA”). Based on our analysis of recent trends, we conclude that the Big Three will likely continue to grow into a “Giant Three,” and that the Giant Three will likely come to dominate voting in public companies. …

  • Over the last decade, more than 80% of all assets flowing into investment funds has gone to the Big Three, and the proportion of total funds flowing to the Big Three has been rising through the second half of the decade;
  • The average combined stake in S&P 500 companies held by the Big Three essentially quadrupled over the past two decades, from 5.2% in 1998 to 20.5% in 2017;
  • Over the past decade, the number of positions in S&P 500 companies in which the Big Three hold 5% or more of the company’s equity has increased more than five-fold, with each of BlackRock and Vanguard now holding positions of 5% or more of the shares of almost all of the companies in the S&P 500; … 
  • Because the Big Three generally vote all of their shares, whereas not all of the non-Big Three shareholders of those companies do so, shares held by the Big Three represent an average of about 25% of the shares voted in director elections at S&P 500 companies in 2018. … 

Assuming that past trends continue, we estimate that the share of votes that the Big Three would cast at S&P 500 companies could well reach about 34% of votes in the next decade, and about 41% of votes in two decades. Thus, if recent trends continue, the Big Three could be expected to become the “Giant Three.” In this Giant Three scenario, three investment managers would largely dominate shareholder voting in practically all significant U.S. companies that do not have a controlling shareholder.
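To get a feel for the arithmetic behind this projection, here is a minimal sketch in Python that simply extrapolates the roughly 25% share of votes cast in 2018 forward by a fixed number of percentage points per year. The annual increment is a made-up parameter chosen for illustration; Bebchuk and Hirst’s own estimates rest on a more detailed analysis of fund flows, not this calculation.

```python
# Illustrative extrapolation of the Big Three's share of votes cast at S&P 500
# companies. The 25% starting point comes from the article quoted above; the
# annual increment is a hypothetical parameter, not the authors' methodology.

def project_vote_share(start_share, annual_increment_pp, years):
    """Add a fixed number of percentage points per year, capped at 100%."""
    share = start_share
    for _ in range(years):
        share = min(share + annual_increment_pp, 100.0)
    return share

if __name__ == "__main__":
    start = 25.0        # approximate share of votes cast in 2018
    increment = 0.8     # hypothetical percentage points gained per year
    for horizon in (10, 20):
        projected = project_vote_share(start, increment, horizon)
        print(f"After {horizon} years: {projected:.0f}% of votes cast")
    # With these assumptions the share reaches about 33% in a decade and 41% in
    # two decades, broadly in line with the authors' 34% and 41% estimates.
```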

What are the possible risks? Bebchuk and Hirst address the question in “Index Funds and the Future of Corporate Governance: Theory, Evidence, and Policy.” The paper is forthcoming in the Columbia Law Review in December 2019, but what looks like a final version is also available as a working paper from the European Corporate Governance Institute. As they note early in this essay, the analytical framework for this paper is based on an essay by Bebchuk, Hirst, and Alma Cohen, “The Agency Problems of Institutional Investors,” which appeared in the Summer 2017 issue of the Journal of Economic Perspectives.

Here’s a taste of the Bebchuk and Hirst argument:

We show that the Big Three devote an economically negligible fraction of their fee income to stewardship and that their stewardship staffing levels enable only limited and cursory stewardship for the vast majority of their portfolio companies. … Our analysis of the voting guidelines and stewardship reports of the Big Three indicates that their stewardship focuses on governance structures and processes and pays limited attention to financial underperformance. … 

[I]ndex fund investors could benefit if index fund managers communicated with the boards of underperforming companies about replacing or adding certain directors. However, our examination of director nominations and Schedule 13D filings over the past decade indicates that the Big Three have refrained from such communications. … 

Index fund investors would benefit from involvement by index fund managers in corporate governance reforms—such as supporting desirable proposed changes and opposing undesirable changes—that could materially affect the value of many portfolio companies. …  We find that the Big Three have contributed very few such comments and no amicus briefs during the periods we examine, and were much less involved in such reforms than asset owners with much smaller portfolios. … 

Legal rules encourage institutional investors with “skin in the game” to take on lead plaintiff positions in securities class actions; this serves the interests of their investors by monitoring class counsel, settlement agreements and recoveries, and the terms of governance reforms incorporated in such settlements. … Although the Big Three’s investors often have significant skin in the game, we find that the Big Three refrained from taking on lead plaintiff positions in any of these cases.

Interestingly, Bebchuk and Hirst do not find evidence in support of one concern sometimes raised: that because the big index funds want high returns for investors, they will tend to favor less competition between firms as a way of generating higher profits. The authors clearly believe that a reduction in firm monitoring by the growing Big Three index funds poses real problems, but this doesn’t seem to be one of them.

In response to the growth of the Big Three index funds, Bebchuk and Hirst have some concrete suggestions. Various changes in accounting and other rules might encourage the index firms to spend more on interacting with the companies in their portfolios. There are concerns that some index companies may have a conflict of interest: if an index company also runs the retirement fund for a given company, it may become reluctant to chastise the management of that company. Such conflicts could be prohibited. There is also an issue that under Section 13(d) of the Securities Exchange Act, an investor who holds more than 5% of a company’s stock with the purpose of influencing control of the company faces a number of additional disclosure rules, and so the Big Three index funds demonstrate their lack of interest in influencing control by remaining largely uninvolved with specific companies.

These kinds of issues are likely to heat up in the next decade or two, as the share of stock markets held by the Big Three index funds continues to rise. In my own mind, I sometimes separate those corporate governance issues related to the strategy and leadership of a single company from those more general issues that may affect a broad range of companies. The Big Three index funds are in an interesting and potentially powerful position to take stances on that second group of more general issues.

Prescription Drug Prices are Falling (says the Consumer Price Index)

“[W]e conclude that the Bureau of Labor Statistics’ (BLS) CPI Prescription Drug Index (CPI-Rx) is the best available summary measure of the price changes of prescription drugs. According to this measure, not only are drug prices increasing more slowly than general price inflation; in the most recent period, drug prices have been decreasing. From the peak in June 2018 through August 2019, the CPI-Rx has declined by 1.9 percent. Figure 1 plots the year-over-year percentage change in the CPI-Rx. Through August 2019, the year-over-year change in the index has now been negative for 8 of the previous 9 months.”

So reports the White House Council of Economic Advisers in “Measuring Prescription Drug Prices: A Primer on the CPI Prescription Drug Index” (October 2019). The report offers a useful explanation of why it’s hard to measure an overall change in prescription drug prices, the key choices made by the Bureau of Labor Statistics in doing so, and the basis for news stories which claim that prescription drug prices have been rising quickly.
As a starter, here’s the Consumer Price Index for Prescription Drugs as calculated by the US Bureau of Labor Statistics:

A price index is, of course, an average over many prices. In addition, it’s a weighted average, in which items on which many people spend a lot get more weight than items on which only a few people spend a little.

Thus, if the price of an anti-cancer prescription drug used by a few thousand people rises by 100%, but at the same time generic substitutes for some other prescription drug used by 20 million people become available at a fraction of the brand-name price, the overall price index for prescription drugs is likely to fall. The CEA report explains how the entry of generic equivalents into the prescription drug market is treated in this way:

The FDA approves generics if, among other things, the active ingredient is the same as the branded drug and the generic drug is bioequivalent to the brand name drug. As a result, generic drugs are considered substitutable (in fact, almost a perfect substitute) for the branded version but typically have a lower price, and many consumers switch from the branded version to the generic version shortly after the generic version becomes available. This switch is a price decline (lower price for an identical product) that is not captured by tracking the branded drug or the generic drug’s price over time. The CPI-Rx accounts for generic substitution by tracking the initial entry of a generic drug. After roughly 6 months after patent expiration (enough time for the generic to establish market share), the branded drug is randomly replaced with the generic drug, with a probability equal to the generic’s market share, and the price difference is recorded as a price decrease.
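To make the weighting arithmetic concrete, here is a minimal sketch of a quantity-weighted price index using made-up numbers in the spirit of the hypothetical above: one rarely used drug doubles in price while a widely used drug is effectively replaced by a cheap generic, and the overall index still falls. The simple Laspeyres-style formula and all of the figures are illustrative assumptions, not the BLS’s actual CPI-Rx procedure.

```python
# Illustrative quantity-weighted (Laspeyres-style) price index.
# All prices and quantities are hypothetical; the actual CPI-Rx methodology
# (sampling, generic substitution with probability equal to market share, etc.)
# is more involved.

def weighted_index(base_prices, new_prices, quantities):
    """Spending at new prices divided by spending at base prices, times 100,
    holding quantities fixed at base-period levels."""
    base_spend = sum(p * q for p, q in zip(base_prices, quantities))
    new_spend = sum(p * q for p, q in zip(new_prices, quantities))
    return 100 * new_spend / base_spend

# Drug A: specialty drug, few users, price doubles.
# Drug B: mass-market drug, generic entry cuts the transaction price sharply.
quantities  = [5_000, 20_000_000]   # prescriptions in the base period
base_prices = [10_000.0, 50.0]      # dollars per prescription
new_prices  = [20_000.0, 10.0]      # A doubles; B is swapped for a cheap generic

print(f"Index: {weighted_index(base_prices, new_prices, quantities):.0f}")
# Prints roughly 29: the overall index falls sharply even though one
# (rarely used) drug doubled in price.
```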

In addition, prescription drugs often have both a “list price” and an actual “transaction price,” which results from negotiations among drug manufacturers, health insurance companies, and pharmacy benefit managers, and in some cases includes direct rebates to consumers. The BLS price index is based on the transaction price, not the list price. As the CEA report notes: “Express Scripts, one of the largest pharmacy benefit managers, reported that even though list prices increased in 2018, the prices paid by their clients fell. Some drug companies have themselves warned investors that increased discounts and rebates would offset any list price increases and that net prices would either be flat or fall in 2019 …”

In addition, there is reason to believe that the prescription drug price index calculated by the BLS overstates the actual rise in prices because of a standard problem sometimes called the “quality” or “new goods” bias. Say that an old drug is replaced by a new drug that works better and has fewer side effects but sells for the same price. In this hypothetical, you are getting more for your money with the new drug; in fact, even if you paid a little more for the new drug, you might be better off. But while a perfect price index would presumably hold constant the quality and variety of drugs available, the practical real-world price index doesn’t do this, and for that reason will tend to estimate a higher rise in prices.

So why do so many news stories give the impression that prescription drug prices overall are rising quickly? Each news story has its own hook, of course, and the CEA report runs through a number of examples. In some cases, the news story might be focusing on a particular drug or small group of drugs. In other cases, the news story might focus only on the average price increase for prescription drugs whose prices rose, leaving out the others. In still other cases, the news story might count up how many drugs had price increases compared to how many did not, leaving out the issue of how much each of the drugs is actually used.

Of course, the CPI measure of changes in prescription drug prices has practical problems, as do all price indexes. It’s based on a sample of prescription drugs, not all of them, and less-prescribed drugs are more likely to be left out. It is based on retail prescription drugs, and so it doesn’t include prices for hospital- and doctor-administered drugs. Figuring out transaction prices and gathering information on rebates to consumers is imperfect. And if you personally need a certain drug for which there aren’t good substitutes, and the price of that drug goes up, it’s perhaps not very comforting to read about what is happening with an overall price index of drugs that includes all the ones you are not taking.

But if our society is going to address issues like the out-of-pocket cost of many prescription drugs, it’s important to see the overall issue clearly. And the overall evidence is that the price index for prescription drugs has fallen in the last year or so.

Interview with Douglas Holtz-Eakin: Career, Budgets, Deficits

Mark A. Wynne of the Dallas Fed has a one-hour interview: “Douglas Holtz-Eakin on Economic Projections, Deficits and Climate Change” (December 12, 2019). Holtz-Eakin has had an eminent academic career at Columbia and Syracuse, but he is perhaps most widely known for his time as head of the Congressional Budget Office from 2003 to 2005. Audio is available, but no full transcript. Here are some comments from Holtz-Eakin:

Why I became an economist

I’m an economist because in my senior year in college, I had a good adviser who pulled me aside and said, “You’re not ready to have a job. You should go to graduate school.” I was a math and econ double major, so I applied to all the math schools I could, all the econ schools I could. I got into math schools. And I got into a couple of econ schools with a little bit of [financial] aid, but it wasn’t looking great. And then fairly late in the game, Princeton [University] admitted me with a full ride and a stipend. So, they paid me to go to graduate school. I firmly believe I became an economist because of a clerical error somewhere.

As a staff economist at the Council of Economic Advisers in the early 2000s

At the White House, I found out that two things were true. No. 1, I would go into these meetings and I found that I was really teaching economics to the people who were lawyers and strategists—people who were not economists. And I realized I liked to teach economics, that I had been right about that instinct. The second thing I found out was that the academic research was super important. You invest a lot in research because in the policy process, people can and will say anything to get what they want. The only thing that checks them is the large amount of professional research out there that says, “There are a lot of things that could happen; that’s not one of them.”

At the Congressional Budget Office

The CBO was created by the Budget Act in 1974. Its purpose is to give the Congress the information it needs to make budgetary decisions. Think of the CBO as a consulting firm for Congress. It is nonpartisan by statute. There are two things that CBO directors cannot do: They cannot give policy advice, and they cannot pick sides.

In 2000, it appeared that the U.S. budget was actually in balance. There were projections of surpluses as far as the eye could see. I spent most of my time at CBO explaining why they were wrong. …. It was one of the situations where I know I was doing my job because nobody liked me.

Addressing the Deficit

Right now, there’s no way around it. To my friends on the right, I say, “We’re going to have to raise revenue. I’m sorry, you can’t grow your way out of this. It won’t work. There’s no way.” Then the question is, how can you intelligently raise revenue? The discussion should be about the quality of tax policy, not taxes up or down.

And to my friends on the left, I say, “You’re going to have to deal with the Social Security system that we currently have before we do big expansions.” It is just wrong that the Social Security program is right now scheduled to exhaust the trust fund in about 12 years. And at that point, if nothing is done, there would be a 25 percent across-the-board cut to people’s benefits in their retirements. That’s wrong. That’s no way to run a pension program.

Update on Carbon Capture and Storage

In my experience, carbon capture and storage (CCS) is often viewed as a quirky technological possibility, not of central significance to the overall issue of reducing the rise of atmospheric carbon. This perception is incorrect. The Global CCS Institute provides an overview in Global Status of CCS 2019. As the report notes: 

Analysis by the Intergovernmental Panel on Climate Change (IPCC) and International Energy Agency (IEA) has consistently shown that CCS is an essential part of the lowest cost path towards meeting climate targets. The IPCC’s Fifth Assessment Report (AR5) showed that excluding CCS from the portfolio of technologies used to reduce emissions would lead to a doubling in cost – the largest cost increase from the exclusion of any technology. … To limit global temperature rises to 1.5°C above pre-industrial levels, the world must reach net zero emissions by around 2050. Most modelling scenarios show that this will require significant deployment of negative emissions technologies. Bioenergy with CCS (BECCS) is one of the few available that can deliver to the necessary scale.

In the report, Nicholas Stern adds:

One of the opportunities that we have at hand, carbon capture, use and storage, will play a vital role as indicated by the Intergovernmental Panel on Climate Change’s Report on Global Warming of 1.5 ºC. The diversity of its applications is immense; from direct air capture delivering negative emissions, to the ability to prevent infrastructure emissions lock-ins by abating existing infrastructure in the industrial and power sectors, capturing, using and storing carbon will be a vital instrument in reaching net-zero emissions goals.

The report offers a detailed overview of CCS facilities around the world in various stages from early development to actually operational. However, the actually operational CCS facilities are currently measured in dozens, when they need to be measured in hundreds or thousands. Sally Benson, a professor of energy engineering at Stanford, writes about CCUS, or “carbon capture, utilization, and storage”:

Over the last 20 years, the role of carbon capture and storage has evolved from “nice to have,” to “necessary,” and now, CCUS is inevitable. We need Gt* scale CCUS now. … [W]e have an ambition gap between the rate that CCUS is growing today – about 10 per cent a year, compared to the rate needed to reach a Gt/year by 2040. If we could just double scale-up rate to 20 per cent per year, and sustain that to 2040, bingo, we reach 1 Gt/year by 2040.
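Benson’s point is essentially a compound-growth calculation. The sketch below works through that arithmetic, comparing 10 percent and 20 percent annual growth out to 2040; the starting capacity used here is a round-number assumption for illustration rather than a figure taken from the report.

```python
# Compound-growth sketch for CCUS capture capacity. The starting capacity is an
# illustrative assumption; the 10% vs. 20% comparison mirrors the "ambition gap"
# arithmetic in the quotation above.

def capacity_in(start_mtpa, annual_growth, start_year, end_year):
    """Capacity after compounding annual_growth from start_year to end_year."""
    return start_mtpa * (1 + annual_growth) ** (end_year - start_year)

start_capacity = 40.0  # Mtpa in 2019 -- hypothetical round number
for rate in (0.10, 0.20):
    mtpa = capacity_in(start_capacity, rate, 2019, 2040)
    print(f"{rate:.0%} annual growth: about {mtpa:,.0f} Mtpa ({mtpa / 1000:.1f} Gt/yr) by 2040")
# At 10% growth, capacity stays well below 1 Gt/yr by 2040; at a sustained 20%,
# it passes 1 Gt/yr several years before 2040.
```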

Here’s a partial sense of the range of possibilities for CCS. Perhaps the most obvious is to apply this technology at places where fossil fuels are being burned. For example, when burning natural gas:

Eliminating almost all greenhouse gas emissions along the natural gas value chain is necessary if we are to meet the target of net-zero emissions by mid-century. More than 700 Mtpa of indirect CO2 emissions – almost equal to the emissions of Germany in 2016 – could be eliminated from oil and gas operations through the application of CCS. Applying CCS at gas processing facilities costs around USD 20-25 per tonne of CO2.

But the technology can also be applied to other industrial operations that emit high levels of carbon, including factories that make cement, steel, and certain chemicals.

The good news is that as CCS expands in various ways, the cost of reducing carbon emissions in this way is falling.

There is strong evidence that capture costs have already reduced. … Two of the projects, Boundary Dam and Petra Nova, are operating today. The cost of capture reduced from over USD 100 per tonne CO2 at the Boundary Dam facility to below USD per tonne CO2 for the Petra Nova facility, some three years later. The most recent studies show capture costs (also using mature amine-based capture systems) for facilities that plan to commence operation in 2024-28, cluster around USD 43 per tonne of CO2. New technologies at pilot plant scale promise capture costs around USD 33 per tonne of CO2.

Just storing carbon underground is certainly possible, but from an economic view, it’s more productive to store the carbon in a form that gets some additional economic value from it. One of the most prominent existing economic uses for stored CO2 is–heavy irony alert here–injecting it underground to extract more oil. But there are other examples: captured CO2 might be used as a feedstock for certain chemicals, polymers, concrete, and even fuels.

One possible future use for captured carbon emphasized in the report is in the production of clean hydrogen. There is some recent buzz that clean-burning hydrogen might be greatly expanded in the future, but the question is how that hydrogen gets produced in the first place. The report notes:

Currently, 98 per cent of global hydrogen production is from unabated fossil fuels, around three quarters stemming from natural gas. CO2 emissions from its production are approximately 830 Mtpa, equivalent to the annual emissions of the UK in 2018. … Low-carbon hydrogen has been produced through gas reforming and coal gasification with CCS, for almost two decades. For example, the Great Plains Synfuel Plant in North Dakota, US, commenced operation in 2000 and produces approximately 1,300 tonnes of hydrogen (in the form of hydrogen rich syngas) per day, from brown coal. Hydrogen produced from coal or gas with CCS is the lowest cost clean hydrogen by a significant margin and requires less than one tenth of the electricity needed by electrolysis.

Finally, discussions of this topic inevitably veer into the possibility of “bio-energy with carbon capture and storage,” or BECCS. The notion here is to burn biomass of some kind for energy–like wood pellets–but then to capture and store the carbon. This would be a form of “negative carbon” energy. This approach might only make economic sense in a limited number of locations, but because it is an actual subtraction of carbon from the atmosphere, it’s worth keeping in mind.

I’ll just add that focusing carbon capture and storage on the industrial sector does not exhaust the possibilities. For example, I’ve noted some evidence that saving the whales could lead to increased ocean plankton and store additional carbon in that way. As another approach, certain agricultural methods have the effect of sequestering more carbon in the soil, as discussed by Greg Ip in the Wall Street Journal, “How to Get Rid of Carbon Emissions: Pay Farmers to Bury Them” (September 11, 2019). For example, a Boston-based company called Indigo Ag Inc. is setting up a market where those who want or need to reduce their carbon emissions can pay farmers $15 to follow practices that will sequester one metric ton of carbon dioxide in the soil.

Perhaps the most aggressive and unproven possibilities for carbon capture and storage involve finding ways to extract carbon from the air directly. Of course, such methods only work if most of the energy going into them comes from non-carbon sources. For example, there’s a geothermal power plant in Iceland which is drawing carbon out of the air–admittedly at modest scale–and then using a chemical process to turn the carbon into rock. If this technology advances and becomes more cost-effective, it will be interesting to calculate whether geothermal energy sites around the world might be used for this purpose.

Another technology that seems like a promising long-shot (if that’s not a contradiction in terms) is being pushed by Project Vesta. Their website is full of position papers and background research arguing that if olivine rock were spread on beaches, it would undergo a chemical reaction with seawater that absorbs CO2 from the atmosphere and turns it into rock. The project offers theoretical calculations that this approach could negate all of the current levels of atmospheric carbon emissions at a plausible cost–but they are just now working on a “test beach” experiment to get hard evidence.

Other than the enthusiasts at Project Vesta, none of the sources described here make any claim that carbon capture and storage can be a sufficient answer to holding down the rise of atmospheric carbon. But many of the sources here see it as a necessary and underappreciated part of the overall puzzle. Ultimately, the growth of a CCS industry is going to depend heavily on government. If the government provides clear incentives for carbon emissions to decline, like a carbon tax or regulations, then CCS will grow faster. If the government in addition makes it straightforward to site carbon storage areas without undue regulatory delays or legal risks, and adds some financial incentives for such investments, then CCS will grow faster still.

Interview with Daron Acemoglu: Wellsprings of Growth

Tyler Cowen conducts one of his rich and illuminating “Conversations with Tyler” in “Daron Acemoglu on the Struggle Between State and Society” (Medium.com, December 4, 2019, both audio and transcript available). Here are a few tidbits from Acemoglu, but there’s much more at the link:

On interrelationships between democracy and growth 

The early work on democracy — such an important topic, and people were really excited about understanding what democracy does. … But you have to be careful. Of course, if you judge whether democracy is successful or not by comparing China to Switzerland, you’re going to get not very meaningful answers. That’s like a chief case of comparing apples and oranges. I have written a couple of papers on this, and the econometrics here really matters for a variety of reasons. …

You also want to be careful because it turns out that there’s one surefire predictor of when a country democratizes — it’s economic crisis. Dictatorships don’t go often because they decide, well, citizens should rule themselves. They collapse, and they collapse more likely in the midst of severe economic recessions.

So you really have to take care of these things. And when you do that, you find two things that are both amazingly robust. One is that democracies grow faster. So when a country democratizes, for another three or four years, it takes time for it to get out of the crisis. Then it starts a much faster growth process. It’s not going to make Nigeria turn into Switzerland, but a country that democratizes adds about 20 to 25 percent more to its GDP per capita.

And then the second thing … one of the most important mechanisms for that seems to be that when you democratize, you tax more, so the taxation, the budgets go up. And you spend more, especially on education and health, so the health of the population improves. Child mortality is one of the things that improves very fast. Primary and secondary enrollment improves a little bit more slowly, but it improves very steadily.

On the interaction of European ideas and settlers with local conditions in the 17th and 18th century

But it isn’t also a straight line to say that Europeans brought ideas and that’s what really changed the trajectory in a good way.

Europeans settled in some numbers in Barbados. In 1680, about 10,000 people in Barbados had European ancestry and probably about 2,000, 3,000 were British. But these people, who benefited mightily from the plantation complex and from about 40,000 people being chattel slaves, did not have any idea of introducing a Declaration of Independence. They actually established a very draconian regime. Executions were commonplace. All the power was concentrated in the hands of about 150 families that were to be plantation owners.

And the way that the independence declaration evolved and other things evolved in the US was actually in reaction to British power. The important story, which, of course, everybody knows, is that Declaration of Independence was in order for the Americans to reject the European imposition to some degree.

But actually, the story goes much further back if you take the first semi-successful colony in Jamestown. What makes Jamestown unique is not that . . . or actually, very common in some sense, but a first in history. It actually broke completely new ground institutionally to introduce the headright system, smallholder property, to introduce the general assembly.

And none of this was because the Virginia Company, who colonized and owned the place, wanted to. They wanted to set up a system very similar to the one in Barbados or Jamaica. It was against the wishes of their British masters because the settlers said, “No, we’re not going to go along with that.” And they had enough political power to do it.
Therefore, the story is more complicated. It’s not that the European ideas directly came. It’s the European ideas interacting with the local conditions, and often the success came from local conditions making European strategies of dominance unsuccessful.

It’s how do you make states and societies work together? If you look at most of human history, it doesn’t work. Either you have stateless societies, no centralized law, no third-party enforcement — so, lawlessness and lots of other costs — or you have despotic states that don’t listen to society and impose their will, often very repressively and otherwise, on societies. But the original part of the book is to say, actually, squeezed in between these two things, there is this Red Queen dynamics, the corridor where states and societies sort of support each other, and that supporting process is a contentious process. It’s not that they are happily cooperating. They are trying to race with each other. Each is trying to have the upper hand. But as long as that competition doesn’t become destructive, it can actually add to the capacity of both of them. ….

Why is it that certain representative institutions develop most strongly in Europe and certain aspects of liberty — they did not only develop in Europe; the demand certainly was there throughout the world — but they took strongest root in Europe.  … It’s actually the meeting of bottom-up political participation and the institutions of the Roman Empire, even after the Western Roman Empire had collapsed. So where does the bottom-up element come from?

Well, it comes from the Germanic tribes who were raiding and sometimes fighting with, sometimes fighting against the Romans. But after the Western Roman Empire collapsed, especially the Franks, sorry, amalgamation of several other Germanic tribes really took root throughout that area. The Franks did not have much human capital. They did not have any educational institutions. They didn’t know how to read and write. They learned that from the church after they sort of adopted Christianity. But what they had were these traditions of organizing their politics.

This is recognized by Julius Caesar and Tacitus during the times of the Roman empires and the early chroniclers of the Merovingians and the Carolingians. They had this view that the chiefs had to serve the people. So the chiefs were often temporarily appointed during wartime, and out of war, they held back. They had all these assemblies. Some historians call them the assembly politics of the German tribe. Everything had to be done through assemblies, and those traditions were the ones that they tried to fuse, and quite successfully in the hands of Clovis and Charlemagne, for example, together with the much more top-down Roman Empire’s institutions.

Paul Volcker: 1927-2019

Paul Volcker, who was chair of the Federal Reserve from August 1979 to August 1987, has died. He is generally credited with, or in some cases blamed for, the set of monetary policies which both ended the inflationary period of the 1970s and brought on the very deep double-dip recessions of 1980 and 1981-82. The New York Times obituary is here.

For an overview of those times and how Volcker perceived the choices he was facing, a useful starting point is “An Interview with Paul Volcker,” conducted by Martin Feldstein, which appeared in the Fall 2013 issue of the Journal of Economic Perspectives. Here’s a flavor:

It made a profound impression on me, if nobody else, that Arthur Burns titled his valedictory speech “The Anguish of Central Banking” (Burns 1979). That was a long lament about how, in the economic and political setting of the times, the Federal Reserve, and by extension presumably any central bank, could not exercise enough restraint to keep inflation under control. It was a pretty sad story. If you were going to follow that line, you were going to give up, I guess. I didn’t think you could give up. If I was in that job, that was the challenge as the Chairman of the Federal Reserve. You inherit a certain challenge …

The favorite word at the time, which was very popular within the Federal Reserve, but I think popular in the academic community generally, was “gradualism.” I don’t quite remember them saying, “Don’t bring it down at all.” But instead, it was “Take it easy. It will be a job of, I don’t know, years, decades, whatever, and you can do it without hurting the economy.” I never thought that was realistic. The inflationary process itself brought so many dislocations, and stresses and strains that you were going to have a recession sooner or later.

The idea that this was just going to go on indefinitely, and the inflation rate got up to 15 percent, it was going to be 20 percent the next year. One little story (I think of all these stories): Shortly after we began the disinflation, somebody, I think Arthur Levitt who was the head of the American Stock Exchange, brought in some businessmen—they tend to be small businessmen—to talk to me at the Federal Reserve. I had them for lunch, and I gave them my little patter about, “This is going to be tough, but we’re going to stick with it, and the inflation rate is going to come down,” and so forth. The first guy said, “That’s all very fine, Mr. Volcker, but I just came from a labor negotiation in which I agreed to a 13 percent wage increase for the next three years for my employees. I’m very happy with my settlement.” I always wondered whether he was very happy two years later on. But that was symbolic of the depths. He was happy at a 13 percent wage increase.

For context, the US inflation rate as measured by the Consumer Price Index fell from about 13% in 1980 to 3% by 1983.
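A quick calculation shows why Volcker wondered whether that businessman stayed happy. A 13 percent annual wage increase locked in when inflation is running at 13 percent is roughly flat in real terms, but the same contractual increase with inflation down to 3 percent is a large real raise for the workers and a large unplanned real cost increase for the employer. The sketch below uses the approximate CPI figures just mentioned; the contract itself is, of course, the hypothetical from the anecdote.

```python
# Real wage growth implied by a fixed 13% nominal raise under different inflation
# rates. The inflation figures are the approximate CPI numbers cited above; the
# multiyear 13% contract is the hypothetical from Volcker's anecdote.

def real_growth(nominal_raise, inflation):
    """Exact real growth rate: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal_raise) / (1 + inflation) - 1

nominal_raise = 0.13
for year, inflation in [(1980, 0.13), (1983, 0.03)]:
    print(f"{year}: inflation {inflation:.0%} -> real raise {real_growth(nominal_raise, inflation):+.1%}")
# 1980: roughly 0% in real terms; 1983: roughly +9.7% in real terms -- a big
# jump in the employer's real labor costs once disinflation took hold.
```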

I’m also reminded of a story that Mervyn King, who for a time headed the Bank of England, once told about what happens when two of the world’s preeminent central bankers try to split a dinner check. King told the story this way:

I first met Paul in 1991 just after I joined the Bank of England. He came to London and asked Marjorie Deane of the Economist magazine to arrange a dinner with the new central banker. The story of that dinner has never been told in public before. We dined in what was then Princess Diana’s favourite restaurant, and at the end of the evening Paul attempted to pay the bill. Paul carried neither cash nor credit cards, but only a cheque book, and a dollar cheque at that. Unfortunately, the restaurant would not accept it. So I paid with a sterling credit card and Paul gave me a US dollar cheque. This suited us both because I had just opened an account at the Bank of England and been asked, rather sniffily, how I intended to open my account. What better response than to say that I would open the account by depositing a cheque from the recently retired Chairman of the Federal Reserve. I basked in this reflected glory for two weeks. Then I received a letter from the Chief Cashier’s office saying that most unfortunately the cheque had bounced. Consternation! It turned out that Paul had forgotten to date the cheque. What to do? Do you really write to a former Chairman pointing out that his cheque had bounced? Do you simply accept the financial loss? After some thought, I hit upon the perfect solution. I dated the cheque myself and returned it to the Bank of England. They accepted it without question. I am hopeful that the statute of limitations is well past. But the episode taught me a lifelong lesson: to be effective, regulation should focus on substance not form.

Why Mississippi Deserves More Federal Aid, and Massachusetts Less

Imagine two school districts in a metropolitan area in the same state: one with higher incomes and property values, and the other with lower incomes and property values. Say that the schools are funded by local property taxes. Thus, if the same property tax rate applies to both school districts, children in the district with higher incomes and property values will have a lot more spent on their education than children in the district with lower incomes and property values.

For some decades now, it has been widely accepted that it is appropriate to make some adjustments to this outcome. The general sense is that if two districts impose the same level of tax effort, as measured by the tax rate, then the per-student funding in those districts should also be the same. In that spirit, many states now provide the same basic per-student funding for all schools, which involves some redistribution away from school districts with higher property values and higher incomes to others. Yes, school funding inequalities persist, as parents in higher-income districts find ways to offer additional support to schools, either by imposing higher property taxes on themselves or through other methods. But the inequality that would have existed if every school district were funded only by local tax revenues from that district is diminished.
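The logic of this kind of equalization can be written down in a few lines. The sketch below uses a simple foundation-style formula with hypothetical property values, a hypothetical tax rate, and a hypothetical foundation amount: local revenue per student is the tax rate times property value per student, and the state tops up any district that falls short of the common per-student target. It illustrates the principle, not any particular state’s actual formula.

```python
# Illustrative foundation-formula sketch: equal tax effort, equalized funding.
# Property values, tax rate, and the foundation amount are all hypothetical.

def per_student_funding(property_value_per_student, tax_rate, foundation_amount):
    """Local revenue plus state aid that fills any gap up to the foundation amount."""
    local_revenue = tax_rate * property_value_per_student
    state_aid = max(foundation_amount - local_revenue, 0.0)
    return local_revenue, state_aid

foundation = 10_000.0   # target dollars per student
tax_rate = 0.01         # the same 1% tax effort in both districts
districts = {"Higher-value district": 1_200_000.0,
             "Lower-value district": 400_000.0}
for name, value_per_student in districts.items():
    local, aid = per_student_funding(value_per_student, tax_rate, foundation)
    print(f"{name}: local ${local:,.0f}, state aid ${aid:,.0f}, total ${local + aid:,.0f}")
# Both districts reach at least the foundation amount per student despite very
# different tax bases -- the redistribution described above.
```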

A similar argument applies at the level of states and the federal government. Joshua T. McCabe makes the case in “Rich State, Poor State: The Case For Reforming Federal Grants” (Niskanen Center, December 2019).

The US Treasury does a ranking of states by “Total Taxable Resources,” which is basically a measure of the revenue streams that a state could tax. Here are states as ranked by Total Taxable Resources on a per capita basis. Not surprisingly, the states with the highest level of Total Taxable Resources are places like Connecticut, New York, Delaware, and Massachusetts, with California and Washington state also ranking high. At the bottom end are states like Mississippi, West Virginia, Alabama, and Arkansas.

McCabe then compares what tax revenues are actually collected by states with their Total Taxable Resources, which can be viewed as a measure of “fiscal effort.” An unexpected (to me) result emerges. Some of the states with the highest levels of Total Taxable Resources, like Connecticut and Delaware, rank near the bottom on fiscal effort. Some states with the lowest levels of Total Taxable Resources, like Mississippi and Alabama, rank near the top on fiscal effort. The overall pattern is mixed. For example, New York state is near the top both in Total Taxable Resources and in fiscal effort.

McCabe sums up the overall argument this way:

One of the enduring myths of American political discourse is that many states in struggling regions have mistakenly pursued a “low-tax, low-service” growth strategy while thriving regions have wisely pursued a “high-tax, high-service” strategy. Massachusetts, for example, spends twice as much per pupil on education as Mississippi. As a consequence, Mississippi remains mired in poverty while Massachusetts prospers. The problem is that this story gets it backwards. Massachusetts can afford to spend more precisely because it is prosperous. Mississippi is limited precisely because it is poor. The two states look very similar in terms of top marginal income tax rates (5 percent in Mississippi; 5.05 percent in Massachusetts) and sales tax rates (7 percent in Mississippi; 6.25 percent in Massachusetts). In terms of fiscal effort, Mississippi actually dedicates a larger proportion of its total taxable resources to education in particular (3.19 percent in Mississippi; 2.82 percent in Massachusetts) and public spending in general (16.7 percent in Mississippi; 12.2 percent in Massachusetts). In reality, being poor means Mississippi generates less revenue with more effort than wealthy Massachusetts. … Criticisms of poor states as “low tax, low service” are fundamentally mistaken. In general, poor states exert similar fiscal effort as rich states, but generate a fraction of the revenue for education and social assistance due to the simple fact that they’re poor.
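The arithmetic behind “more effort, less revenue” is simple: revenue per capita equals fiscal effort times taxable resources per capita, so a poor state can dedicate a larger share of its resources and still raise less money. The sketch below combines the effort percentages quoted above with per-capita taxable-resource figures that are purely illustrative placeholders, not the Treasury’s actual numbers.

```python
# Revenue per capita = fiscal effort x total taxable resources (TTR) per capita.
# The effort shares come from the McCabe quotation above; the TTR-per-capita
# figures below are hypothetical placeholders for illustration.

def revenue_per_capita(effort_share, ttr_per_capita):
    return effort_share * ttr_per_capita

states = {
    # name: (overall fiscal effort, hypothetical TTR per capita in dollars)
    "Mississippi":   (0.167, 45_000),
    "Massachusetts": (0.122, 90_000),
}
for name, (effort, ttr) in states.items():
    print(f"{name}: effort {effort:.1%} of TTR -> ${revenue_per_capita(effort, ttr):,.0f} per capita")
# With these illustrative numbers, Mississippi exerts more effort yet raises
# roughly a third less revenue per person than Massachusetts.
```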

The US economy has been experiencing a rising level of regional disparities. Moreover, governments of other countries do considerably more than the US to equalize revenues across state- or provincial-level governments. McCabe writes:

Political scientist Jonathan Rodden has done the most comprehensive analysis of fiscal federalism around the world. In contrast to the rhetoric of pundits who claim there is a massive redistribution from rich (often blue) to poor (often red) states, Rodden finds that the U.S is the worst among rich democracies in terms of progressively allocating more federal grant funding to states with limited fiscal capacity.

What might such changes look like in a US context? McCabe points out that for some major federal programs, the federal government makes little or no effort to help states with lower Total Taxable Resources. For example, here’s federal Medicaid funding to states on a per-capita basis: notice that there is no particular pattern where states with less per capita Total Taxable Resources get more federal Medicaid support.

In the Temporary Assistance to Needy Families (TANF) program, federal spending per child actually goes more to states that already have high per capita Total Taxable Resources.

The politics of this proposal are interesting to contemplate. For example, it’s common to note that lower-income Republican-leaning states often have been hesitant to expand Medicaid, as they are allowed to do under the 2010 Patient Protection and Affordable Care Act. McCabe agrees that Republican intransigence toward the proposal is part of the issue, but also points out that states with lower levels of Total Taxable Resources have been hesitant to commit to making matching-grant payments for Medicaid for several decades now.

Many of the higher-income states lean Democrat, and the rhetoric of that party often emphasizes the importance of government acting to create greater equity. Many of the lower-income states lean Republican, and the rhetoric of that party often emphasizes the importance of localities and states having a high degree of self-reliant autonomy. One suspects that a proposal which would have the effect of focusing federal spending away from states with higher taxable resources per capita (say, Connecticut, Massachusetts, Delaware, New York, and New Jersey) and toward states with lower taxable resources per capita (say, Mississippi, West Virginia, Alabama, Arkansas, Idaho) will cause enthusiasm to wane both for redistribution in the first set of states and for self-reliance in the second set of states.

Those who would like shared federal-state redistribution programs like Medicaid and TANF to become more popular might do well to consider the possibility that if such programs were designed to be friendlier to states with low levels of Total Taxable Resources, they might spread more easily.

Revisiting the Industrial Policy Question in East Asia

Many of the world’s main economic growth success stories are clustered in East Asia, including Japan, South Korea, Taiwan, Malaysia, Singapore, and now of course China. Everyone wants to claim credit for success stories: in particular, it’s clear that the governments of these countries have often intervened in their economies, and so they are commonly cited as examples of how “industrial policy,” rather than just a “free market,” is needed for rapid growth. Reda Cherif and Fuad Hasanov resurrect these arguments in “The Return of the Policy That Shall Not Be Named: Principles of Industrial Policy” (IMF Working Paper WP/19/74, March 2019).

As a starting point for sorting out these arguments, it’s useful to point out that any proposed dichotomy between “industrial policy” and the “free market” is a gross oversimplification. It is widely agreed that for sustained growth, a country must address economic fundamentals like a high rate of investment in physical capital, education, and health; macroeconomic stability; a well-functioning legal infrastructure that supports an environment friendly to starting and running businesses; pro-competition policies; openness to trade; support for research and development and intellectual property; and so on. Government will play a prominent role in many of these areas, so they cannot be characterized as “free market.” However, these policies do not provide direct favors to any existing company or industry; indeed, they may set the stage for types of growth that disrupt or even bankrupt existing companies and industries. Thus, they are not really “industrial policy,” either.

If a government decides to carry out policies that favor particular industries, rather than just being satisfied with preparing the ground for growth to occur, it will need to make three choices: which industries to favor, what policy tools to use in favoring those industries, and how to decide when those policy tools should be ended (in particular, when they have not worked).

Cherif and Hasanov describe their preferred version of industrial policy in this way:

We argue that the success of the Asian Miracles is based on three key principles that constitute “True Industrial Policy,” which we describe as Technology and Innovation Policy (TIP). … (i) state intervention to fix market failures that preclude the emergence of domestic producers in sophisticated industries early on, beyond the initial comparative advantage; (ii) export orientation, in contrast to the typical failed “industrial policy” of the 1960s–1970s, which was mostly import substitution industrialization (ISI); and (iii) the pursuit of fierce competition both abroad and domestically with strict accountability.

They argue that an emphasis on economic fundamentals is a “snail crawl” approach to growth, while their version of targeted industrial policy is the “moon shot” approach.

You can read their essay for the details of the case in favor of this approach. Here, I’ll just point out that the arguments come in two main categories: whether governments are in fact likely to enact the specific type of industrial policy being recommended, and the likely marginal effect of such an industrial policy over and above a policy of sound economic fundamentals.

Concerning the question of whether a government can and should enact a policy to favor specific industries, it’s worth emphasizing that their three standards for industrial policy are quite specific, and by no means a blanket endorsement of widespread government intervention in the economy.

For example, their preferred form of industrial policy pushes domestic producers into “sophisticated” industries they would not otherwise have tried. Thus, this argument for industrial policy does not offer support for policies that favor production of existing goods, or for just helping industries as they currently exist to earn higher profits. It does not favor an industrial policy aimed at saving existing jobs. Thus, the challenge is whether a government can hold itself to favoring new and more sophisticated industries, rather than sliding into protecting existing firms and jobs.

Their second goal is that successful industrial policy should focus on expanding export sales, not on trying to replace imported products. Thus, they are explicitly disavowing the “import substitution” approach to economic development that has historically been popular in Latin America and elsewhere. For their preferred type of industrial policy, “tariffs are neither necessary nor sufficient to succeed,” although tariffs were often used in combination with more direct financial and credit incentives.

The third goal emphasizes that the growth of “sophisticated” industries should involve fierce competition, both at home and abroad. The focus on domestic competition means that this approach does not favor government nationalization of an industry, or trying to create a “national champion” firm to compete in foreign markets. As they write: “While specific industries may get support, intense competition among domestic firms was highly encouraged in domestic and international markets.”

When they mention “strict accountability,” what they have in mind is that industrial policy in the 1970s and 1980s in places like South Korea and Taiwan set up specific targets that had to be met for a firm to benefit from industrial policy. These targets might involve a certain level of export sales that had to be reached, along with certain levels of investment or R&D effort, or setting up chains of domestic suppliers. “Accountability” means that if these pre-determined targets were not met in a timely manner, the benefits of the industrial policy could be, and were, cut off.
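The accountability rule they describe amounts to conditional support: the subsidy continues only while pre-agreed performance targets are met. The schematic below is a stylized illustration of that logic; the target names and thresholds are invented, and it is not a description of any actual program.

```python
# Stylized "strict accountability" rule: support continues only while every
# pre-agreed performance target is met. Target names and thresholds are made up.

def support_continues(performance, targets):
    """True only if every target in `targets` is met or exceeded."""
    return all(performance.get(metric, 0) >= required
               for metric, required in targets.items())

targets = {"export_sales_musd": 50, "rd_spending_musd": 5, "local_suppliers": 20}

firm_a = {"export_sales_musd": 80, "rd_spending_musd": 6, "local_suppliers": 25}
firm_b = {"export_sales_musd": 30, "rd_spending_musd": 6, "local_suppliers": 25}

print(support_continues(firm_a, targets))  # True: all targets met, support continues
print(support_continues(firm_b, targets))  # False: export target missed, support is cut off
```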

The notion that this very specific kind of industrial policy can benefit an economy isn’t especially new. Cherif and Hasanov mention a prominent book-length report on this subject published by the World Bank back in 1993, The East Asian Miracle: Economic Growth and Public Policy. The report argues that by far the main causes of rapid growth in the countries of East Asia were high levels of investment in physical capital, human capital, and technology, occurring in a context of an economy that emphasized market incentives, including intense domestic competition and macroeconomic stability. However, the World Bank report also took care to note (citations omitted):

In most of these economies, in one form or another, the government intervened–systematically and through multiple channels–to foster development, and in some cases the development of specific industries. Policy interventions took many forms: targeting and subsidizing credit to selected industries, keeping deposit rates low and maintaining ceilings on borrowing rates to increase profits and retained earnings, protecting domestic import substitutes, subsidizing declining industries, establishing and financially supporting government banks, making public investments in applied research, establishing firm- and industry-specific export markets, developing export marketing institutions, and sharing information widely between public and private sectors. Some industries were promoted, while others were not. … 

At least some of these interventions violate the dictum of establishing for the private sector a level playing field, a neutral incentives regime. Yet these strategies of selective promotion were closely associated with high rates of private investment and, in the fastest-growing economies, high rates of productivity growth. Were some selective interventions, in fact, good for growth? …

Our judgment is that in a few economies, mainly in Northeast Asia, in some instances, government interventions resulted in higher and more equal growth than otherwise would have occurred. However, the prerequisites for success were so rigorous that policymakers seeking to follow similar paths in other developing economies have often met with failure. What were these prerequisites? First, governments in Northeast Asia developed institutional mechanisms which allowed them to establish clear performance criteria for selective interventions and to monitor performance. Intervention has taken place in an unusually disciplined and performance-based manner. Second, the costs of interventions, both explicit and implicit, did not become excessive. When fiscal costs threatened the macroeconomic stability of Korea and Malaysia during their heavy and chemical industries drives, governments pulled back. In Japan the Ministry of Finance acted as a check on the ability of the Ministry of International Trade and Industry to carry out subsidy policies, and in Indonesia and Thailand balanced budget laws and legislative procedures constrained the scope for subsidies. … Price distortions arising from selective interventions were also less extreme than in many developing economies. In the newly industrializing economies of Southeast Asia, government interventions played a much less prominent and frequently less constructive role in economic success, while adherence to policy fundamentals remained important …

I quote here at some length to emphasize that it has been a commonly held and conventional World Bank view for some time that certain focused, disciplined, and strictly accountable industrial policies with an export-push focus can work. It’s also possible that some of the most important interventions in these economies were policies that were anti-free market but not industry-specific, like “keeping deposit rates low and maintaining ceilings on borrowing rates to increase profits and retained earnings” or “establishing and financially supporting government banks.”
But overall, the practical policy question is whether one wants to encourage governments of developing countries to enact “industrial policy,” given the risk that the approach may be neither focused nor disciplined nor accountable. After all, the world is full of countries that have announced the implementation of industrial policies that were not only unsuccessful, but often wasteful and counterproductive.
The other main question about industrial policy is how to decide how much of the rapid growth in East Asia was due to the extraordinary gains of those countries in economic fundamentals and how much was due to specific industrial policies. The 1993 World Bank report argued that some of East Asia’s industrial policies succeeded, others failed, and that overall the industry-specific policies didn’t much affect the direction of growth. The report says:

Most East Asian governments have pursued sector-specific industrial policies to some degree. The best-known instances include Japan’s heavy industry promotion policies of the 1950s and the subsequent imitation of these policies in Korea. These policies included import protection as well as subsidies for capital and other imported inputs. Malaysia, Singapore, Taiwan, China, and even Hong Kong have also established programs–typically with more moderate incentives–to accelerate development of advanced industries. Despite these actions we find very little evidence that industrial policies have affected either the sectoral structure of industry or rates of productivity change. Indeed, industrial structures in Japan, Korea, and Taiwan, China, have evolved during the past thirty years as we would expect given factor-based comparative advantage and changing factor endowments …

Of course, it’s very hard to separate out the different factors that contribute to a country’s economic growth and to draw lessons that can apply to other countries. Cherif and Hasanov make a case that for Hong Kong, Korea, Singapore, and Taiwan, the industry-targeted incentives were a key and central component. I’m skeptical. My own (only lightly informed) sense is that the case is stronger for Korea than for the other three. Also, it seems unlikely that the industry-specific incentives would have accomplished much if the strong economic fundamentals had not been in place. 

One way or another, when explaining the growth pattern of specific economies, it’s hard to rule out an element of luck. Cherif and Hasanov are quick to note that when countries have enacted something close to their preferred version of industrial policy but performed poorly, it could just be a result of bad luck or bad timing. Fair enough. But it may also be that the East Asian economies benefited from good luck in taking certain policy steps at just the right historical moment. 
At the current historical moment, when political currents are running strongly against an expansion of international trade and the technologies of automation and information technology are transforming manufacturing, it’s not clear that industrial policies in the style of East Asia–focused on exports of increasingly sophisticated manufactured products–are a workable development strategy for the near future. 

About Millennials

The “Millennials” are commonly defined as the generation that grew up and came of age in the opening decades of the 21st century: that is, those born from approximately 1981 to 1996.
Every generation finds itself caught in the twists and pressures of a different set of social and economic challenges, and the Millennials are no exception. For example, those who were struggling to enter the labor market as young adults during the Great Recession of 2007-2009 and the sluggish recovery in its aftermath were Millennials. Those who were trying to buy houses and attend college in the 2000s, after the prices of those goods had been climbing for decades, were Millennials. Some long-run trends, like diminished labor market opportunities for low-skilled workers, continued for Millennials as well.

The most recent issue of Pathways magazine from the Stanford Center on Poverty and Inequality has a collection of short fact-based essays about Millennials, who are now adults in the age bracket from 23 to 38. Here are a few findings that jumped out at me.

It’s perhaps no surprise that Millennials are more likely to identify as multiracial or to adopt unconventional gender identities. However, according to Sasha Shen Johfre and Aliya Saperstein, “they are not outpacing previous generations in rejecting race and gender stereotypes. Their attitudes toward women’s roles and perceptions of black Americans are quite similar to those of baby boomers or Gen Xers.”

A number of the essays focus on labor market outcomes for the Millennials. Harry Holzer describes the lower labor force participation of Millennials, especially among low-skilled male workers.

Labor force activity has declined for all prime-age workers, but the decline among young workers has been especially rapid. This means that millennials, who are currently 25–34 years old, are working less than Gen Xers at the same age. Declines are most evident among men, though women’s labor force activity is also lower. Large gaps by education remain, with the highest labor force participation among college graduates.

Florencia Torche and Amy L. Johnson write: “The payoff to a college degree—in terms of earnings and full-time work—is as high for millennials as it’s ever been. But there is a substantial earnings gap between those who are and aren’t college educated. Millennials with no more than a high school diploma have much lower earnings in early adulthood than prior generations.”

Michael Hout writes: “American men and women born since 1980—the millennials—have been less upwardly mobile than previous generations of Americans. The growth of white-collar and professional employment resulted in relatively high occupational status for the parents of millennials. Because that transition raised parents’ status, it set a higher target for millennials to hit.” As Hout points out, a trend toward less social mobility was already apparent for those born in the 1960s and 1970s, but it has become stronger since then.

Kim Weeden adds some evidence on occupational segregation: “The gender segregation of occupations is less pronounced among millennials than among any other generation in recent U.S. history. By contrast, millennials are experiencing just as much racial and ethnic occupational segregation as prior generations, even though millennials are less tolerant of overt expressions of racism. Both types of occupational segregation—gender and racial-ethnic—are very consequential for wages. Among millennials, occupational segregation accounts for 28 percent of the gender wage gap and 39 to 49 percent of racial wage gaps.”

Susan Dynarski argues that Millennials have become the “student debt generation.”

Over the last several decades, more students have taken on debt to pay for school, and the size of their debt has grown. According to the National Center for Education Statistics, 46 percent of students enrolled in all degree-granting schools had student loans in 2016, a percentage that pertains to the tail end of the millennial generation. This is up from 40 percent in 2000, when Generation X represented much of the college population. Over the same period, the average loan amount increased by nearly $2,000, from $5,300 in 2000 to $7,200 in 2016. … As shown in Figure 1, the default rate has increased among all types of borrowers, although the increase is far less pronounced among borrowers for selective schools and graduate schools. The simple conclusion: Relative to Generation X, millennials indeed took out more student loans, took out larger student loans, and defaulted more frequently.

Darrick Hamilton and Christopher Famighetti discuss housing: “Young millennials have lower rates of homeownership than Generation X, baby boomers, and the Silent Generation at comparable ages. We have to reach back to a generation born nearly a century ago—the Greatest Generation—to find homeownership rates lower than those found today among millennials. The racial gap in young-adult homeownership is larger for millennials than for any generation in the past century. Although the housing reforms after the civil rights era reduced the racial homeownership gap, all those gains have now been lost.”

Bruce Western and Jessica Simes point out: “The recent reversal in overall incarceration rates takes the form of an especially prominent decline in rates of imprisonment for black millennial men in their late 20s. The decline is far less dramatic for other population groups—such as white and Hispanic men—that never experienced the extremely high rates that black men experienced. The imprisonment rate for black millennial men—approximately 4.7 percent—nonetheless remains extremely high.”

Whose Charitable Giving (And for What Purposes) Gets a Tax Break?

When people who itemize deductions on their taxes donate to charity, they can take a tax deduction. When people who don’t itemize deductions donate to charity, they don’t get a tax deduction. Only about 11% of taxpayers itemize deductions, usually those with higher income levels. So the charitable priorities of this group get a tax break, while those of everyone else don’t. Robert Bellafiore lays out these facts and offers some possible policy suggestions in “Reforming the Charitable Deduction,” written for the Social Capital Project of the Joint Economic Committee (SCP Report No. 5-19, November 2019). Here are some background facts:

Overall charitable giving as a share of GDP (including both those who itemize deductions and those who don’t) had a step increase in the 1990s, and has stayed at that higher level since then.

However, the share of Americans giving to charity has been declining in the last couple of decades. In the report’s figure, the bright blue line shows the overall decline, while the rest of the lines show that the decline cuts across income groups.

These patterns have a strong effect on what kind of charitable giving gets a tax break. The report also breaks down the total amount given by four big income groups. For example, the total amount given by households with under $100,000 in income is considerably more than the total amount given by households with incomes of over $1 million. However, about two-thirds of the giving of those with under $100,000 goes to religious charities, while only about one-sixth of the giving of those with over $1 million in income goes to religious charities. Those in the lower income group give much more to “helping meet basic needs,” while the higher income group gives more to arts, education, and health care nonprofits. Because those in the $1 million-and-over income category are far more likely to itemize deductions, their charitable priorities get a tax break.

In addition, the share of charitable giving that comes from individuals has been gradually falling, while the share coming from foundations has been rising. 

The report quotes a 2013 article in the Atlantic on the very largest individual gifts given by those with the highest incomes:

Of the 50 largest individual gifts to public charities in 2012, 34 went to educational institutions, the vast majority of them colleges and universities, like Harvard, Columbia, and Berkeley, that cater to the nation’s and the world’s elite. Museums and arts organizations such as the Metropolitan Museum of Art received nine of these major gifts, with the remaining donations spread among medical facilities and fashionable charities like the Central Park Conservancy. Not a single one of them went to a social-service organization or to a charity that principally serves the poor and the dispossessed. More gifts in this group went to elite prep schools (one, to the Hackley School in Tarrytown, New York) than to any of our nation’s largest social-service organizations, including United Way, the Salvation Army, and Feeding America (which got, among them, zero).

The tax deduction for charitable giving reduced federal tax revenues by $56 billion in 2018. Because more than 90% of those with over $200,000 in annual income itemize deductions, the overwhelming share of this tax break goes to those with higher income levels.

The JEC report calculates that for someone in the middle or bottom of the income distribution, giving $100 to charity costs about $100, because so few in those groups take the tax break. But for those in the top 1%, giving $100 to charity costs only about $71 after the tax deduction is taken into account.
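To make that arithmetic concrete, here is a minimal sketch (my own illustration, not code or figures from the JEC report) of how the after-tax cost of a gift depends on whether the donor itemizes; the 29 percent marginal rate used below is simply the rate implied by the report’s $71 figure, and actual rates vary by taxpayer.

```python
# Minimal sketch: after-tax cost of a charitable gift under the current deduction.
# The 0.29 marginal rate is only the rate implied by the JEC report's $71 example.

def after_tax_cost(donation, marginal_rate, itemizes):
    """Out-of-pocket cost of a gift once any deduction is taken into account."""
    tax_saving = donation * marginal_rate if itemizes else 0.0
    return round(donation - tax_saving, 2)

print(after_tax_cost(100, 0.29, itemizes=False))  # 100.0 -- a typical middle- or lower-income donor
print(after_tax_cost(100, 0.29, itemizes=True))   # 71.0  -- a donor in the top 1%
```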
What are some proposals for altering the tax break for charitable contributions? The JEC report focuses on ways to expand the break to everyone, not just those who itemize deductions. 
For example, one possibility is to allow everyone to deduct their charitable contributions from income before paying taxes, whether or not they itemize. Another option would be to create a tax credit, which might, for example, let everyone subtract 25% of any amount given to charity from their income taxes. Both of these proposals would expand the tax break for charitable giving, probably reducing total federal tax revenues by $20-$30 billion per year. They would also probably increase the amount of charitable giving and perhaps also the share of people making charitable contributions. 
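As a rough illustration of how those two options differ (my own sketch with hypothetical marginal rates, not figures from the JEC report), a universal deduction scales the tax saving with the donor’s marginal rate, while a flat 25 percent credit gives every donor the same saving per dollar given:

```python
# Minimal sketch comparing the two expansion options described above.
# The 12% and 37% marginal rates are hypothetical examples, not from the JEC report.

def saving_universal_deduction(donation, marginal_rate):
    # Deducting the gift from taxable income: the saving scales with the donor's rate.
    return donation * marginal_rate

def saving_flat_credit(donation, credit_rate=0.25):
    # A flat credit: the same fraction of the gift comes off taxes owed for everyone.
    return donation * credit_rate

for rate in (0.12, 0.37):
    print(f"marginal rate {rate:.0%}: deduction saves "
          f"${saving_universal_deduction(100, rate):.0f}, credit saves ${saving_flat_credit(100):.0f}")
# marginal rate 12%: deduction saves $12, credit saves $25
# marginal rate 37%: deduction saves $37, credit saves $25
```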
The other way to go would be to reduce the existing tax break for charitable giving. For example, the Congressional Budget Office estimates revenue gains from two changes to the charitable contributions deduction. One would allow donors to deduct only the amount of giving that exceeds 2% of their adjusted gross income. Another approach would allow only cash donations to charity to be eligible for a deduction. This change is aimed at the common practice in which someone owns an asset–say, a share of stock–that was bought at a lower price in the past and then donates it to charity at its higher current value. If that asset first had to be sold and converted into cash, capital gains taxes would need to be paid before the charitable donation could be made. CBO estimates that either change would increase revenue by about $15 billion per year within a few years.
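To see how the first of those CBO options would work in practice, here is a small sketch with made-up numbers (my own illustration, not an example from the CBO analysis) of a deduction limited to giving above 2 percent of adjusted gross income:

```python
# Minimal sketch of a 2%-of-AGI floor on the charitable deduction.
# The AGI and donation amounts below are made-up examples, not CBO figures.

def deductible_with_floor(donations, agi, floor_pct=2):
    # Only giving above the floor (here, 2 percent of AGI) remains deductible.
    floor = agi * floor_pct / 100
    return max(0.0, donations - floor)

print(deductible_with_floor(donations=5_000, agi=100_000))  # 3000.0: $3,000 of a $5,000 gift stays deductible
print(deductible_with_floor(donations=1_500, agi=100_000))  # 0.0: giving below the floor gets no deduction
```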

I’m not wedded to any particular change, but a situation in which the charitable giving of the top 1% gets a tax break for contributions that go disproportionately to education, arts, and health institutions heavily used by that same 1% doesn’t seem equitable.

For a brief history of the deduction for charitable contributions, see “The Charitable Contributions Deduction and Its Historical Evolution” (September 26, 2019).