The Jobs Problem in India

One of India’s biggest economic challenges is how new jobs are going to be created. Venkatraman Anantha Nageswaran and Gulzar Natarajan explore the issue in “India’s Quest for Jobs: A Policy Agenda” (Carnegie India, September 2019). They write:

The Indian economy is riding the wave of a youth bulge, with two-thirds of the country’s population below age thirty-five. The 2011 census estimated that India’s 10–15 and 10–35 age groups comprise 158 million and 583 million people, respectively. By 2020, India is expected to be the youngest country in the world, with a median age of twenty-nine, compared to thirty-seven for the most populous country, China. In the 2019 general elections, the estimated number of first-time voters was 133 million. Predictably, political parties scrambled to attract youth voters. It is therefore not surprising that, according to several surveys, the parties’ primary concern was job creation. The burgeoning youth population has led to an estimated 10–12 million people entering the workforce each year. In addition, the rapidly growing economy is transitioning away from the agricultural sector, with many workers moving into secondary and tertiary sectors. Employing this massive supply of labor is, perhaps, the biggest challenge facing India.

India’s jobs in the future aren’t going to be in agriculture: as that sector modernizes, it will need fewer workers, not more. A common assumption in the past was that India’s new jobs would be in big factories, like giant assembly plants or manufacturing facilities. But manufacturing jobs all around the world are under stress from automation, and with trade tensions running high, building up an export-oriented network of large factories and assembly plants doesn’t seem likely. As Nageswaran and Natarajan point out, most of India’s employment is concentrated in very small micro-firms in the informal, unregulated sector. The challenge is to add employment in small and medium formal firms, often in industries with a service orientation.

The Sixth Economic Census of India, 2013, which combines all types of enterprises, shows that India had 58.5 million enterprises, which employed 131.9 million workers. Nonemployer, or own account firms, constituted 71.7 percent of these enterprises and 44.3 percent of workers. Further, 55.86 million (or 95.5 percent) of all the enterprises employed just 1–5 workers, 1.83 million (3.1 percent) employed 6–9 workers, and just 0.8 million (1.4 percent) employed ten or more workers … Further, comparing India’s formal and informal manufacturing establishments to Mexico and Indonesia reveals the true scale of India’s challenge within this sector. Enterprises with fewer than ten workers make up nearly 70 percent of the employment share in India, compared to over 50 percent in Indonesia and just 25 percent in Mexico.

To put this in a bit of context, India’s Census finds employment of 131.9 million workers, mostly in very small firms. But India as a country has a workforce of over 500 million, and it’s growing quickly. The other workers are either working at subsistence levels, in agriculture or in cities, or in the informal economy.

Why has India had such a hard time creating new small- and medium-sized firms? Part of the answer is the heavy hand of government regulation.

India is often considered one of the most difficult places to start and run a business. … One of the biggest hurdles that potential enterprises in India face is the complexity of the registration system—all enterprises must register separately with multiple entities of the state and central governments. Under the state government, the enterprise has to register with the labor department (Shop and Establishment Act), the local government (municipal or rural council acts), and the commercial taxes department for indirect tax assessments. There are also several state-specific legislations—the labor department alone has thirty-five legislations. 

Under the central government, enterprises must register with the Ministry of Corporate Affairs for incorporation (Companies Act), the Central Board of Direct Taxes for direct tax assessments, and the labor department’s Employees’ Provident Fund Organization (EPFO) and Employees’ State Insurance Corporation (ESIC). Further, there are registrations specific to sector or occupational categories—for example, manufacturing enterprises with more than ten employees must register with the labor department under the Factories Act.

Based on the application or software employed for each registration, employers also must possess a multitude of numbers: for example, a labor identification number—used to register on the Shram Suvidha Portal, the Ministry of Labor and Employment’s single window for reporting compliances; a company registration number; and a corporate permanent account number. Employees must possess an Aadhaar biometric identity number, an EPFO member number, an ESIC identity number, and a universal account number.

According to current labor laws, service enterprises and factories must maintain twenty-five and forty-five registers, respectively, and file semi-annual and annual returns in duplicate and in hard copy. Furthermore, regular paperwork tends to be convoluted; salary and attendance documents should be simple but instead require tens of entries. In addition to the physical requirements of complying with these regulations—making payments, designing human resource strategies, or meeting physical infrastructure standards—enterprises also have onerous periodic reporting requirements. All these requirements add up to impose prohibitive costs that reduce the success of these businesses.

This regulatory environment offers a powerful incentive for small firms to remain informal, off-the-record, under-the-radar. A related issue arises because payroll taxes in India are very high–for workers in the formal sector, that is.

Manish Sabharwal, the chairman of TeamLease Services, a staffing company, wrote that salaries of 15,000 rupees a month end up as only 8,000 rupees after all deductions, from both the employer and employee sides. The employer makes deductions for pensions, health insurance, social security, and even a bonus, which are statutorily payable in India and would otherwise increase costs to companies. Consequently, the take-home pay for a worker earning less than 15,000 rupees a month is only 68 percent of their gross wages. Lower-wage workers are far more affected than higher-wage workers, who are protected by the maximum permissible deductions, which lowers the amount of deductions from their gross salary. Further, though international comparisons are often difficult and misleading, a cursory examination suggests that India’s deductions are among the highest in the world and are a deterrent to businesses starting or becoming formal.
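The tax-wedge arithmetic in passages like this can be sketched in a few lines of code. To be clear, the deduction rates below are hypothetical round numbers chosen only for illustration, and the categories are simplified; they are not actual Indian statutory rates, and the function name is my own.

```python
# Sketch of the payroll tax wedge: the gap between what an employer pays
# in total and what a worker takes home. All rates below are HYPOTHETICAL
# round numbers for illustration, not actual Indian statutory rates.

def payroll_wedge(gross, employee_rates, employer_rates):
    """Return (take-home pay, take-home as a share of total employer cost)."""
    employee_deductions = gross * sum(employee_rates.values())
    employer_contributions = gross * sum(employer_rates.values())
    take_home = gross - employee_deductions
    total_cost = gross + employer_contributions
    return take_home, take_home / total_cost

gross = 15_000  # rupees per month
employee_rates = {"provident_fund": 0.12, "insurance": 0.02}  # hypothetical
employer_rates = {"provident_fund": 0.12, "insurance": 0.03, "bonus": 0.08}  # hypothetical

take_home, share = payroll_wedge(gross, employee_rates, employer_rates)
print(f"take-home: {take_home:,.0f} rupees ({share:.0%} of employer cost)")
```

Even with these made-up rates, the worker keeps only about 70 percent of what the employer pays out; the actual deductions Sabharwal describes are steeper still for low-wage workers.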

Yet another issue is that there are many programs providing support and finance to very small firms. An unintended result is that these firms have an incentive to remain small, so they don’t have to give up those benefits.

Gursharan Bhue, Nagpurnanand Prabhala, and Prasanna Tantri point out that firms are willing to forgo growth in order to retain their access to finances. That is, when certain easier financing access is provided to firms below a certain threshold (say, SME firms), they prefer to forgo growth opportunities that would allow them to cross this threshold: “firms that near the threshold for qualification slow down their investments in plant and machinery, other capital expenditure” and experience slower growth in manufacturing activity and output. The authors also point out that when banks are put under pressure to lend to micro, small, and medium enterprises, they fear the fallout of not meeting those lending targets and consequently encourage their borrowers to stay small.

Nageswaran and Natarajan argue that most of India’s informal firms are “subsistence” firms, unlikely to grow. They cite evidence from Andrei Shleifer and Rafael La Porta that few informal firms ever make the transition to formal status. Instead, the goal needs to be to have more firms that are “born formal,” run by entrepreneurs who have a vision of how the firm can grow and hire. In India, this doesn’t seem to be happening. They write:

Chang-Tai Hsieh and Peter Klenow’s latest work, “The Life Cycle of Plants in India and Mexico,” is instructive in its exploration of the life-cycle dynamic of firm growth across countries. They find that, in a sample of eight countries including the United States and Mexico, India is the only country where the average number of employees of firms (in the manufacturing sector) ages 10–14 years is less than that of firms ages 1–5 years. It is generally expected that, as firms remain in business for longer periods, they would naturally employ more workers. In India, however, the inverse has proven true—employment in older firms is less than in younger firms. Hsieh and Klenow also find that the typical Indian firm stagnates or declines over time, with only the handful that reach around twenty years of age showing very slight signs of growth.

What’s to be done? As is common with emerging market economies, the list of potentially useful policies is a long one. Reforming government regulations, payroll taxes, and financial incentives with the idea of supporting small-but-formal businesses, and not hindering their growth, is one step. Nageswaran and Natarajan also point out that the time needed to fill out tax forms is especially onerous in India.

Ongoing increases in infrastructure for transportation, energy, and communications matter a lot. Along with overall support for rising education levels, it may be useful to take the idea of an agricultural extension service–which teaches farmers how to use new seeds or crop methods–and create a “business extension service” that helps teach small firms the basic managerial techniques that can raise their productivity. India’s government might take steps to help establish an information framework for a national logistics marketplace, which would help organize and smooth the movement of business inputs and consumer goods around the country: “Amounting to 13 percent of India’s GDP, the country’s logistics costs are some of the highest in the world.”

But in a broad sense, the job creation problem in India comes down to a more fundamental shift in point of view. Politicians tend to love situations where they can claim credit: a giant new factory opens, or a new power plant. Or at a smaller scale, politicians will settle for programs that sprinkle subsidies among smaller firms, so those firms that receive such benefits can be claimed as a success story. But if the goal for India’s future employment growth is to have tens of millions of firms started by well-educated entrepreneurs, this isn’t going to happen with firm-by-firm direction and subsidies allocated by India’s central or state governments. Instead, it requires India’s government to be active and aggressive in creating a general business environment where such firms can arise of their own volition, and it’s a hard task for any government to get the right mix of acting in some areas while being hands-off in others.

The Pedagogical Lessons and Tradeoffs of Online Higher Education

The Fall 2019 issue of Daedalus is on the subject “Improving Teaching: Strengthening the College Learning Experience,” edited by Sandy Baum and Michael S. McPherson. There’s a lot to digest in the issue. I found myself especially interested in the comments on online education in “The Human Factor: The Promise & Limits of Online Education,” by Baum and McPherson, as well as in “The Future of Undergraduate Education: Will Differences across Sectors Exacerbate Inequality?” by Daniel I. Greenstein.

It was seven years ago, back in 2012, that companies like Coursera, Udacity, and edX announced their plans to revolutionize higher education with “massive open online courses,” or MOOCs. While the use of online tools has clearly spread, it seems fair to say that the revolution has not yet arrived. Where does online higher education stand at this point?

On the spread of online classes to this point, Baum and McPherson write (footnotes omitted):

But MOOCs, as attention-getting as they have been, have never been the main source of online education. For-profit, career-oriented institutions and large public universities have been the major providers at the undergraduate level, although several private nonprofit institutions now enroll thousands of online students. Today, more than 40 percent of all undergraduate students take at least one course that is offered purely online; 11 percent–including 12 percent of those in bachelor’s degree programs–study entirely online.

What’s the evidence on how well online courses teach? A key difference here seems to be that hybrid courses with high online content can work well, but pure online courses have some problems. Baum and McPherson:

But studies that focus on course completion rates as opposed to test scores generally show weaker outcomes when courses are entirely online. Moreover, recent randomized controlled trials of semester-long college courses have found lower test scores for students in fully online courses than for similar students in traditional classroom settings–but no significant difference in outcomes between those in settings that mix technology with classroom experience and students in fully face-to-face courses. Economist David Figlio and colleagues compared a fully online course to a classroom course; economists William Bowen and Ted Joyce each had teams comparing traditional courses to those replacing some live instructor time with online learning; and labor economist William Alpert and colleagues studied all three models. The results of these studies are consistent. Classroom instruction time can be reduced without a negative impact on student learning. But eliminating the classroom and moving instruction entirely online appears to lead to lower course completion rates and worse outcomes, even when guidelines are followed for best practices for generating online discussion. The weaker results for students listening to lectures online instead of in a classroom with other students suggest that it may not be just personal attention, but being in a social environment that contributes to student learning. It is also possible that the more structured scheduling of classroom courses is important for some students.

The other big change in online higher education in the last decade or so has been a shift in who is most likely to be offering these courses. Back in 2009, it was mostly for-profits, but that has changed. Greenstein offers a comparison:

Unsurprisingly, by 2009, online instruction outside the for-profit sector was highly concentrated in a relatively small number of outlier institutions. In that year, Western Governor’s University (WGU), established in 1997 by the governors of nineteen states and with a significant grant from the Bill and Melinda Gates Foundation, offered fully online courses to over fifty thousand students, Penn State’s World Campus served twenty-five thousand (9,500 full-time equivalent) students, University of Maryland’s University College had twelve thousand online students, and there were one or two others operating outside the for-profit sector at something bigger than fledgling scale. There were also a number of headlining failures in the not-for-profit sector to point to, failures that reflected outright resistance to the genre, notably at the University of Illinois, where the Global Campus effort announced with great fanfare and with an investment of $10 million collapsed after only three years. By comparison, in the very same year–2009–the for-profit University of Phoenix was nearing its high watermark enrollment of nearly four hundred thousand online students.

Within a decade, the tables had turned. For-profits, under enormous pressure resulting from the Great Recession and a hostile regulatory environment, collapsed, losing as much as a half of all enrollments. Several of the biggest for-profits went out of business (Corinthian Colleges), were bought out by private equity firms (University of Phoenix), merged with not-for-profit institutions looking to accelerate their own online learning initiatives (Kaplan and Purdue Universities), or transitioned from for- to not-for-profit status. Large public universities and community colleges, meanwhile, moved in to pick up some of the slack. WGU grew to one hundred thousand enrollments and continues achieving 10 percent year-on-year growth. Arizona State University serves nearly the same number annually, and the University of Central Florida has grown to nearly sixty thousand students with almost one-third of all student credit hours taken online. Other evidence collected annually since 2002 has demonstrated how online learning has become part of the mainstream in higher education. Large public universities and colleges are particularly likely to offer a large share of student credit hours online. 

One of the hopes of online higher education was that it would be a low-cost way to make college classes widely available to underserved and at-risk student populations. This hope has gone largely unfulfilled. Baum and McPherson:

Two rigorous large-scale studies of community college students by the Community College Research Center (CCRC) found lower course persistence and program completion among students in online classes. These studies found that students who take online classes do worse in subsequent courses and are more likely than others not only to fail to complete these courses, but also to drop out of school. Males, students with lower prior GPAs, and Black students have particular difficulty adjusting to online learning. The performance gaps that exist for these subgroups in face-to-face courses become even more pronounced in online courses.

According to the CCRC, the differences are even greater for developmental courses than for college-level courses. In a study of online developmental English courses, failure and withdrawal rates were more than twice as high as in face-to-face classes. Students who took developmental courses online were also significantly less likely to enroll in college-level gatekeeper math and English courses. Of students who did enroll in gatekeeper courses, those who had taken a developmental education course online were far less likely to pass than students who had taken it face-to-face.

Thus, many of the current successes of online learning in higher education are for students who are pre-screened by high admissions standards, or highly motivated, or both. As one example, Baum and McPherson write:

Georgia Tech’s widely cited computer science master’s degree program is getting very positive reviews and appears to be opening opportunities to new students, rather than diverting them from face-to-face programs. Since this is a graduate program, all of the students have already earned bachelor’s degrees and, in the case of Georgia Tech, passed rigorous admission standards. Evidence about success in MOOCs confirms the reality that students from higher-income and more-educated backgrounds are most likely to participate and succeed in these courses.

Greenstein offers some other examples:

Two potentially very promising trajectories are beginning to take shape. The first is the use of hybrid modalities: modalities that mix face-to-face and online instruction. Where implemented well, they appear to lower costs and improve student outcomes. This at least is the experience at the University of Central Florida (UCF). With undergraduates taking nearly one-third of their credits online, UCF shows the best course outcomes for students in hybrid courses (with outcomes for face-to-face and fully online falling behind in that order). A second very promising development is seen in adaptive technology platforms and courseware that integrate data science to make machine-assisted learning directly responsive to individual students’ needs and their progress and pace in mastering explicitly specified course competencies. By the mid-2010s, results were more rather than less promising for the technology demonstrating improved student outcomes for students from all demographic groups.

I’ve heard enthusiasts for online education point out more than once that the possibilities for innovative technological progress in the form of a human delivering a live lecture are somewhat limited. In contrast, it’s easy to imagine all kinds of potential for improvement in online higher education. It remains true that most online higher ed involves lecture-based presentations followed by online quizzes and tests. One can easily imagine that over time the interaction of an online class with a student will become more adaptive, flexible, and responsive. The methods of group participation online with other students and faculty will become more sophisticated. But after some years of watching online classes not cause a revolution in higher education, some hard questions are emerging.

1) It’s easy to imagine online higher education getting better, but it’s not going to happen easily or on the cheap. It’s clear at this point that just recording some classroom lectures and linking students to a multiple-choice online test-bank will work for a highly motivated few, but not for the many. The investment needed for really good online courses may be large, and it may be ongoing. The old model of finding a professor who teaches a course well, and then having the professor record some lectures or write a textbook, isn’t going to suffice. Instead, there will be a need for experts in computer programming, psychology, artificial intelligence, and more. A highly evolved online education class is also not a one-time project, but will instead require ongoing cycles of learning, and respect for differences across topics. Teaching statistics online may look very different from teaching a foreign language or writing or chemistry or economics. Before we’re too quick to assume that online higher education will soon get a lot better, it’s important to remember that creating the highly evolved online education courses of the future isn’t just a matter of jumping a few hurdles, but of overcoming a multidimensional obstacle course. It’s not about a few incremental gains to existing courses, but about evolution into a different kind of online experience that barely exists, or may not yet exist.

2) Who is going to make these costly, risky investments? Maybe it will be a few very well-to-do schools. It would be an interesting irony if those who attend highly selective schools with huge endowments also ended up with access to much better online courses! Another possibility is that it will be schools with extremely large enrollments, probably larger than the enrollment of any specific campus. It would be interesting to see if some conferences, like the Big 10, SEC, Pac-12, or the ACC, could put together a team along these lines. It’s not at all clear how community colleges, smaller schools, or schools with lower levels of funding can afford to make large and ongoing investments in a dramatically better version of online education. Thus, it’s not at all clear that these online courses of the future will be focused on at-risk or nontraditional students.

3) There are times when the discussion of online education seems to be based on a vision of education as something that can be downloaded or viewed online by individuals in isolation, who then absorb the necessary information. But most education has traditionally happened in groups, and the social and emotional structures of the group may matter–at least for most learners most of the time. Thus, a challenge is to make online learning into a genuinely shared experience. I’ll give Baum and McPherson the last word:

Behind the successive would-be revolutions in the technology of delivering college education seems to lie a desire to minimize, if not eliminate, the need for messy, often inconvenient, and always costly human interaction in the college-going experience. This desire is particularly evident when the concern is for mass higher education. A purely automated delivery system for much of higher education would appear to be very cheap and efficient, and perhaps even higher quality than traditional higher education because everyone could be exposed to the best lecturers. Unfortunately for this dream, developments in psychology and learning theory over the last two decades have made ever more clear how central the social, emotional, and interactional dimensions of learning are.


Challenges Facing the "Arab Development Model"

Here’s a description of the Arab “social contract” and “development model” according to a recent report by Adel Abdellatif, Paola Pagliani, and Ellen Hsu, “Leaving No One Behind: Towards Inclusive Citizenship in Arab Countries” (July 2019). It is an Arab Human Development Report Research Paper, written for the Regional Bureau for Arab States in the UN Development Programme. They write:

The social contract that emerged from and continues to evolve as a result of contesting and bargaining stemmed from the state-building and formation after Arab states won their independence in the 1950s–1970s. The emergence of independent states was associated with a strong nationalistic sentiment and the idea that the state should be the provider and engine of social and economic development. Despite considerable variation across countries, which was affected by natural resources endowments, the dominant model of development from the 1950s onward was having limited political participation and civil and political liberties in exchange for material benefits such as services, subsidies and employment. The model was based on strong central states overseeing and driving economic and social priorities while implementing wide-scale policies for redistribution and equity. It rested on four main pillars:

  • Establishing a large bureaucracy to provide and deliver services.
  • Expanding security services and the army.
  • Setting up a large public sector of factories and companies.
  • Subsidizing basic foodstuffs and energy products.

In keeping with the development model, a large share of total employment–often more than 20%–is in the public sector.

This model can claim some successes. For example, life expectancy at birth in Arab countries was about 55 years in 1970, below the world average of 58 years. Now, life expectancy in Arab countries is about 76 years, above the world average of 73 years. However, human development gains for countries in the Arab world have generally slipped back since 2010. The big drop in global oil prices back around 2014 has meant a reduction in resources for oil-exporting countries in the Middle East, and lower spillover buying power for the non-oil exporters in the region.

The emphasis of the report is that many of the countries in the region do not have “inclusive citizenship.” For example, females in Arab countries lag males by more than the usual average for emerging-market economies in areas like education and political representation. The report notes:

The greatest measurable disparities are economic: globally women’s income is 57 percent of men’s, but Arab women’s income is only 21 percent of Arab men’s. Unequal gendered division of labour—both in unpaid care and domestic work and in the labour market—is a major characteristic of gender economic inequality across the Arab region. Women’s participation in the formal labour market remains among the lowest globally because of both cultural norms and weak incentives

There are big gaps between rural and urban areas, and within urban areas, “[i]n at least seven countries with data, more than half of the urban population lives in slums.” The concerns just keep coming:

Unaccountable and unresponsive public institutions as well as perceived widespread corruption often drive exclusion and disenfranchisement for large segments of the population. … A substantial number of citizens believe that the institutions meant to take care of their needs are leaving them behind … Trust in elected bodies, those that should be in charge of redesigning the social contract, is particularly low. Lack of trust is also reflected in low electoral turnouts—below 50 percent in most countries … Perceptions of ineffective institutions seem confirmed by stagnating or narrowly based economic structures, high unemployment, young people facing difficult prospects to secure their future and uneven provision of social services and social protection nets. Unemployment, averaging 10 percent, almost double the world average, disproportionately affects young people, at 25 percent. … 84% of the population is affected by or at risk of water scarcity. The decline of arable land and the dependency on food imports expose the population to risks of food insecurity …

Part of what makes the report interesting is that it comes from the Regional Bureau for Arab States in the UN Development Programme. And the unmistakable theme is that the Arab development model and the associated social contract aren’t working very well. Another part of what makes the report interesting is its hesitancy about suggesting alternative policy directions.

Yes, there’s some discussion about how subsidies for energy prices, which end up mainly benefiting the well-to-do (who, after all, use more energy), could be converted to support for the poor. This is a problem in a lot of countries (for an overview, see this IMF working paper). But the challenges facing the Arab development model aren’t about recalibrating some subsidies. The problem is that a “development model” based on high public employment, along with lots of social services and subsidies, needs substantial numbers of firms in a solid underlying economy to provide jobs and tax revenues and growth.


The Dispersion of High- and Low-Productivity Firms Within an Industry

If you think about an economy as fairly stable and static, you would expect that any two companies within an industry would be fairly close in terms of productivity. After all, if Company A and Company B are selling similar products, and A has much higher productivity than B, it should drive B out of business. Thus, one might expect that at the end of this process, the competitors we observe within an industry in the real world should be fairly close in productivity level.

However, this expectation is dramatically wrong. Within an industry, it is a standard pattern to find a wide dispersion of productivity across firms in the industry. Academic researchers have been familiar with this pattern for at least 15 years. But now (pulse rate accelerates) there is systematic time-series data across industries from 1997–2015! “The Dispersion Statistics on Productivity (DiSP) is a joint experimental data product from the U.S. Bureau of Labor Statistics and the U.S. Census Bureau. The DiSP provide statistics on within-industry dispersion in productivity.”

For example, here’s a figure from Cheryl Grim of the US Census Bureau. The bar graphs show that if you take a firm at the 75th percentile of the shoe or cement industry and compare it with a firm at the 25th percentile of the same industry, the firm at the 75th percentile will be about 1.5 times as productive. In the computer industry, a firm at the 75th percentile is about four times as productive as a firm at the 25th percentile.


The existence of such differences in productivity across firms within an industry has been known for some time. Cindy Cunningham, Lucia Foster, Cheryl Grim, John Haltiwanger, Sabrina Wulff Pabilonia, Jay Stewart, and Zoltan Wolf explain in “Dispersion in Dispersion: Measuring Establishment-Level Differences in Productivity” (Center for Economic Studies Working Paper CES 18-25R, September 2019).

They point out that research by Chad Syverson back in 2004, looking at data from manufacturing industries in 1977, found that firms in the 90th percentile of a given industry were about four times as productive as firms in the 10th percentile. In the more recent data: “Illustrating the properties of the new data product, we find large within-industry dispersion in labor productivity: establishments at the 75th percentile are about 2.4 times as productive as those at the 25th percentile on average.”
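To make these percentile comparisons concrete, here is a minimal sketch of how a 75/25 dispersion ratio is computed from establishment-level productivity data. The numbers are made up for illustration; they are not actual DiSP values.

```python
# Hypothetical labor productivity (output per hour) for establishments
# in one industry -- illustrative numbers, not actual DiSP data.
productivity = [12, 15, 18, 22, 25, 30, 41, 55, 63, 80]

def percentile(data, p):
    """Linear-interpolation percentile on the data (0 <= p <= 100)."""
    data = sorted(data)
    k = (len(data) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(data) - 1)
    return data[lo] + (data[hi] - data[lo]) * (k - lo)

p75 = percentile(productivity, 75)
p25 = percentile(productivity, 25)
print(f"75th percentile: {p75}, 25th percentile: {p25}")
print(f"75/25 dispersion ratio: {p75 / p25:.2f}")
```

The DiSP figures are computed on far larger establishment samples within each detailed industry, but the ratio reported in the text is this same kind of statistic.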

Why do such differences exist? The reasons are obvious enough, as Grim explains:

Producers within industries differ in many ways. They produce different products of varying quality and have different customers and markets. They use different technology and business practices to combine different amounts of materials and equipment to produce their products. Some businesses are also larger and/or older than other businesses. Their ability to adjust their scale and mix of operations may vary due to these differences. Experimenting with new products and processes can also contribute to productivity differences. Businesses that have successfully adopted new technologies are likely to be more “productive” (as measured by these differences in revenue per hour) compared to businesses that have not yet adopted such technologies. All of these factors can contribute to enormous variations in this measure of business performance.

The fact that firms in the same industry can be so different in productivity levels, and that these differences don’t seem to fade away, has a number of interesting implications.

First, the pattern suggests that productivity growth doesn’t always require cutting-edge gains; indeed, there is enormous potential for economic growth if the firms now lagging in productivity can be brought up to speed, perhaps by merging with higher-productivity firms. In addition, one way that productivity growth happens for the economy as a whole is when high-productivity firms put low-productivity firms out of business.

Second, the persistence of these gaps suggests that some firms are protected from competition. For example, cement is not very transportable, and so competition in the cement industry is often limited to local firms. It is worth considering why productivity differences persist in other industries as well.

Third, there seems to be some evidence that productivity dispersion is widening, as “superstar” firms in various industries pull further ahead. Indeed, this may be an important factor contributing to growth of inequality of wages, because workers and managers at high-productivity firms are typically much better-paid than those at low-productivity firms.

Are CLOs the New CDOs?

CDOs, or “collateralized debt obligations,” were at the heart of what broke down in the US financial system and helped put the “Great” in the “Great Recession.” Is there another financial instrument out there that raises similar concerns? CLOs, or “collateralized loan obligations,” have a similar structure and have now reached a similar size to the CDOs circa 2008.

How much should we be worried? As I’ve noted in past discussions of the subject, several Fed officials, including Lael Brainard of the Fed Board of Governors and Robert Kaplan of the Federal Reserve Bank of Dallas (who will rotate on to the membership of the Federal Open Market Committee in 2020), have raised concerns. Sirio Aramonte and Fernando Avalos offer a nice short discussion of this comparison in “Structured finance then and now: a comparison of CDOs and CLOs,” which appears in the BIS Quarterly Review (September 2019, pp. 11-14). They write: “The rapid growth of leveraged finance and CLOs has parallels with developments in the US subprime mortgage market and CDOs during the run-up to the GFC. We examine the CLO market in light of that earlier experience.”

Here’s some backstory. The collateralized debt obligations of concern back in 2007 were a set of financial securities based on pools of subprime mortgages. There’s nothing wrong with collecting mortgages into a pool, packaging them into a security, and then reselling them to investors like insurance companies, pension funds, hedge funds, and banks.

But the problem with creating a financial security based on subprime mortgages was that–by the definition of “subprime”–a relatively high percentage of these mortgages were going to default, so a financial security based on them would be fairly risky. For example, banks would not be allowed by regulators to hold such securities. However, some financial wizardry solved that problem. The CDOs were divided up into sections, called “tranches,” with some of the tranches being very risky and some being very safe. For example, if losses on the underlying subprime mortgages were in the range of 0-10%, then all of those losses would fall on one set of investors in the highest-risk tranche. If losses fell in the range of 10-20%, then those losses would fall entirely on another set of investors in the next highest-risk tranche. With several of these tiers in place, so that any losses would be concentrated on a subset of investors, the other tranches of the CDO appeared to be very safe: indeed, those tranches were rated AAA and banks were allowed to hold them.

The current wave of collateralized loan obligations are also financial securities based on pools of debt–but in this case, the debts are corporate loans rather than subprime mortgages. Again, there’s nothing wrong with collecting debt into a pool, packaging it into a security, and reselling it to investors. This kind of corporate debt is called a “leveraged loan.” As Aramonte and Avalos write:

CDOs and CLOs are asset-backed securities (ABS) that invest in pools of illiquid assets and convert them into marketable securities. They are structured in tranches, each with claims of different seniority over the cash flows from the underlying assets. The most junior or so-called equity tranche is often unrated and earns the highest yields, but is the first to absorb credit losses. The most senior tranche, which is often rated AAA, receives the lowest yields but is the last to absorb losses. In between are mezzanine tranches, usually rated from BB to AA, which start to absorb credit losses once the equity tranche is wiped out. The larger the share of junior tranches in the capital structure of the pool, the more protected the senior tranche (for a given level of portfolio credit risk).
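The loss waterfall described above can be sketched as a short calculation. The tranche sizes and loss amounts below are hypothetical, chosen only to show how losses hit the equity tranche first and reach the senior tranche last:

```python
def allocate_losses(tranches, loss):
    """Allocate a dollar loss to tranches from most junior to most senior.

    tranches: list of (name, size) pairs, ordered junior -> senior.
    Returns a dict mapping each tranche name to the loss it absorbs.
    """
    absorbed = {}
    for name, size in tranches:
        hit = min(size, loss)  # this tranche absorbs up to its full size
        absorbed[name] = hit
        loss -= hit            # remainder passes to the next-senior tranche
    return absorbed

# Hypothetical $100 pool: 10% equity, 20% mezzanine, 70% senior.
tranches = [("equity", 10.0), ("mezzanine", 20.0), ("senior", 70.0)]

print(allocate_losses(tranches, 8.0))   # equity alone absorbs an 8% loss
print(allocate_losses(tranches, 25.0))  # equity wiped out; mezzanine takes 15
```

In the second case the senior tranche is still untouched, which is why, for a given level of portfolio credit risk, a thicker layer of junior tranches makes the senior tranche look safer.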

The market for collateralized loan obligations has grown quickly. For comparison, the size of the total market for CDOs in 2007 was $1.2 trillion-$2.4 trillion, and the size of the total market for CLOs at present is $1.4 trillion to $2.0 trillion. In addition, investors (facing low interest rates elsewhere) are eager to buy CLOs–which means that the credit standards for such loans have deteriorated. Aramonte and Avalos write:

For both CDOs and CLOs, strong investor demand led to a deterioration in underwriting standards. For example, US subprime mortgages without full documentation of borrowers’ income increased from about 28% in 2001 to more than 50% in 2006. Likewise, leveraged loans without maintenance covenants increased from 20% in 2012 to 80% in 2018. In recent years, the share of low-rated (B–) leveraged loans in CLOs has nearly doubled to 18%, and the debt-to-earnings ratio of leveraged borrowers has risen steadily. Weak underwriting standards can reduce the likelihood of defaults in the short run but increase the potential credit losses when a default eventually occurs. 

Here are a couple of images: one showing the rise in the leveraged loan market, the other showing that borrowers with more debt have an increasing share of the market and that “covenant-lite” loans with fewer protections for investors have been on the rise.

Thus, the concern is over a scenario where the economy gets a negative shock. The risk of leveraged loans rises. Some investors start trying to sell off those loans, but in a situation where everyone is trying to sell, prices are going to be low–which encourages even more investors to try to sell. Banks see that the value of their holdings of CLOs is falling, which raises concerns for bank regulators. Some banks also find that, although they had not quite realized it, they are connected to other parts of the financial industry through legal and reputational ties, or because they have open lines of credit outstanding to these other companies. Ultimately, companies find it much harder to borrow, and banks become less willing to lend to consumers, too. Say it all in one long breath, and it’s a recipe for recession.
But while the parallels from CDOs to CLOs are suggestive, and reason for a moderate degree of concern, there are also meaningful differences.
The CDOs of 2007 were all based on housing, and thus were all vulnerable to a common shock. The CLOs of 2019 are more diversified because they are spread across industries, and not all industries are likely to become vulnerable in the same way at the same time. 
The CDOs of 2007 became entangled in other types of complexity. For example, the financial wizards started off with subprime mortgages and then created CDOs with tranches. But then they took tranches from separate CDOs and combined them into a new CDO–sometimes called a CDO-squared–with tranches of its own. CDOs also became entangled with a market for “credit default swaps,” a way of buying insurance against a decline in your CDO tranche. Selling that “credit default swap” insurance was a big part of what drove the insurance company AIG to the brink of bankruptcy and into a federal bailout. The financial structure of the recent wave of CLOs has not (so far!) been layered with these kinds of additional complications. If stress does occur in the CLO market, it will be a lot easier to identify the risks and who is facing them.
Yet another issue is that back in 2008, banks were often investing in CDOs through another bit of financial wizardry called a “special-purpose vehicle,” which was technically separate from the bank and thus off the bank’s balance sheet, but where the bank would suffer if losses occurred. But banks that own CLOs hold them directly and in the open, not through a veiled financial transaction. Again, if risks occur, those risks should be much more clear.
As Aramonte and Avalos note, it also seems that CLOs are less likely to be financed by short-term borrowing, and less likely to serve as collateral for short-term borrowing as well. Less of a connection to short-term financial markets means that the risk of a “run” on the asset is reduced.
Bottom line: CLOs aren’t the new CDOs, at least not yet. But perhaps cast a weather eye in their direction, now and then, just in case.

Trade: The Perils of Overstating Benefits and Costs

A vibrant and healthy economy will be continually in transition, as new technologies arise, leading to new production processes and new products, and consumer preferences shift. In addition, some companies will be managed better or have more motivated and skilled workers, while others will not. Some companies will build reputation and invest in organizational capabilities, and others will not.  International trade is of course one reason for the process of transition.

But international trade isn\’t the main driver of economic change–and especially not in a country like the United States with a huge internal market. In the world economy, exports and imports–which at the global level are equal to each other because exports from one country must be imports for another country–are both about 28% of GDP. For the US economy, imports are about 15% of GDP and exports are 12%, which is to say that they are roughly half the share of GDP that is average for other countries in the world.

However, supporters of international trade have some tendency to oversell its benefits, while opponents of international trade have some tendency to oversell its costs. This tacit agreement-to-overstate helps both sides avoid a discussion of the central role of domestic policies both in providing a basis for growth and for smoothing the ongoing process of adjustment.

Ernesto Zedillo Ponce de León makes this point in the course of a broader essay on “The Past Decade and the Future of Globalization,” which appears in a collection of essays called Towards a New Enlightenment? A Transcendent Decade (2018, pp. 247-265). It was published by OpenMind, a nonprofit run by the Spanish bank BBVA. He writes (boldface type is added by me):

The crisis and its economic and political sequels have exacerbated a problem for globalization that has existed throughout: to blame it for any number of things that have gone wrong in the world and to dismiss the benefits that it has helped to bring about. The backlash against contemporary globalization seems to be approaching an all-time high in many places including, the United States.

Part of the backlash may be attributable to the simple fact that world GDP growth and nominal wage growth—even accounting for the healthier rates of 2017 and 2018—are still below what they were in most advanced and emerging market countries in the five years prior to the 2008–09 crisis. It is also nurtured by the increase in income inequality and the so-called middle-class squeeze in the rich countries, along with the anxiety caused by automation, which is bound to affect the structure of their labor markets.
Since the Stolper-Samuelson formulation of the Heckscher-Ohlin theory, the alteration of factor prices and therefore income distribution as a consequence of international trade and of labor and capital mobility has been an indispensable qualification acknowledged even by the most recalcitrant proponents of open markets. Recommendations of trade liberalization must always be accompanied by other policy prescriptions if the distributional effects of open markets deemed undesirable are to be mitigated or even fully compensated. This is the usual posture in the economics profession. Curiously, however, those members of the profession who happen to be skeptics or even outright opponents of free trade, and in general of globalization, persistently “rediscover” Stolper-Samuelson and its variants as if this body of knowledge had never been part of the toolkit provided by economics.

It has not helped that sometimes, obviously unwarrantedly, trade is proposed as an all-powerful instrument for growth and development irrespective of other conditions in the economy and politics of countries. Indeed, global trade can promote, and actually has greatly fostered, global growth. But global trade cannot promote growth for all in the absence  of other policies. 

The simultaneous exaggeration of the consequences of free trade and the understatement—or even total absence of consideration—of the critical importance of other policies that need to be in place to prevent abominable economic and social outcomes, constitute a double-edged sword. It has been an expedient used by politicians to pursue the opening of markets when this has fit their convenience or even their convictions. But it reverts, sometimes dramatically, against the case for open markets when those abominable outcomes—caused or not by globalization—become intolerable for societies. When this happens, strong supporters of free trade, conducted in a rules-based system, are charged unduly with the burden of proof about the advantages of open trade in the face of economic and social outcomes that all of us profoundly dislike, such as worsening income distribution, wage stagnation, and the marginalization of significant sectors of the populations from the benefits of globalization, all of which has certainly happened in some parts of the world, although not necessarily as a consequence of trade liberalization.

Open markets, sold in good times as a silver bullet of prosperity, become the culprit of all ills when things go sour economically and politically. Politicians of all persuasions hurry to point fingers toward external forces, first and foremost to open trade, to explain the causes of adversity, rather than engaging in contrition about the domestic policy mistakes or omissions underlying those unwanted ills. Blaming the various dimensions of globalization—trade, finance, and migration—for phenomena such as insufficient GDP growth, stagnant wages, inequality, and unemployment always seems to be preferable for governments, rather than admitting their failure to deliver on their own responsibilities.
Unfortunately, even otherwise reasonable political leaders sometimes fall into the temptation of playing with the double-edged sword, a trick that may pay off politically short term but also risks having disastrous consequences. Overselling trade and understating other challenges that convey tough political choices is not only deceitful to citizens but also politically risky as it is a posture that can easily backfire against those using it.

The most extreme cases of such a deflection of responsibility are found among populist politicians. More than any other kind, the populist politician has a marked tendency to blame others for his or her country’s problems and failings. Foreigners, who invest in, export to, or migrate to their country, are the populist’s favorite targets to explain almost every domestic problem. That is why restrictions, including draconian ones, on trade, investment, and migration are an essential part of the populist’s policy arsenal. The populist praises isolationism and avoids international engagement. The “full package” of populism frequently includes anti-market economics, xenophobic and autarkic nationalism, contempt for multilateral rules and institutions, and authoritarian politics. … 

Crucially, for globalization to deliver to its full potential, all governments should take more seriously the essential insight provided by economics that open markets need to be accompanied by policies that make their impact less disruptive and more beneficially inclusive for the population at large.

Advocates of globalization should also be more effective in contending with the conundrum posed by the fact that it has become pervasive, even for serious academics, to postulate almost mechanically a causal relationship between open markets and many social and economic ills while addressing only lightly at best, or simply ignoring, the determinant influence of domestic policies in such outcomes.

Blaming is easy, and blaming foreigners is easiest of all. Proposing thoughtful domestic policy with a fair-minded accounting of benefits and costs is hard. 

Employment Patterns for Older Americans

Americans are living longer, and also are more likely to be working in their 60s and 70s. The Congressional Budget Office provides an overview of some patterns in “Employment of People Ages 55 to 79” (September 2019). CBO writes:

“Between 1970 and the mid-1990s, the share of people ages 55 to 79 who were employed—that is, their employment-to-population ratio—dropped, owing particularly to men’s experiences. In contrast, the increase that began in the mid-1990s and continued until the 2007–2009 recession resulted from increases in the employment of both men and women. During that recession, the employment-to-population ratio for the age group overall fell, and the participation rate stabilized—with the gap indicating increased difficulty in finding work. The ensuing gradual convergence of the two measures reflects the slow recovery from the recession. The fall in the employment of men before the mid-1990s, research suggests, resulted partly from an increase in the generosity of Social Security benefits and pension plans, the introduction of Medicare, a decline in the opportunities for less-skilled workers, and the growth of the disability insurance system. Although those factors probably also affected women, the influence was not enough to offset the large increase in the employment of women of the baby-boom generation relative to those of the previous generation, most of whom were not employed.”

Here are some underlying factors that may help in understanding this pattern. If one breaks down the work of the elderly by male/female and by age groups, it becomes clear that while men ages 55-61 are not more likely to be working, the other groups are. An underlying reason is that women who are now ages 55 and older were more likely to be in the (paid) workforce earlier in life than women who were 55 and older back in 1990. Thus, part of the rise in work among older women just reflects more work earlier in life, carried over to later in life.

One possible reason for people working later in life can be linked to rising levels of education: that is, people with more education are more likely to have jobs that are better paid and involve less physical stress, and thus are more likely to keep working. However, it’s interesting that the rise in employment share for males ages 62-79 is about the same in percentage-point terms across different levels of education; for females, the increase in employment share for this age group is substantially higher for those with more education.

There’s an interesting set of questions about whether working longer in life should be viewed as a good thing. If the increase is due to people who find their jobs interesting or rewarding and who want to continue working, then that seems positive. However, if people work longer primarily because they need the money and would otherwise be financially insecure, then working longer in life is potentially more troublesome.

From this perspective, one might argue that it would be more troubling if the rise in employment among the elderly were concentrated among those with lower education levels–who on average may have less desirable jobs. But if the rise in employment among the elderly is either distributed evenly across education groups (males) or happens more among the more-educated (females), then it’s harder to make the case that the bulk of this higher work among the elderly is happening because of low-skilled workers taking crappy jobs under financial pressure.

It’s also true that the share of older people reporting that their health is “very good/excellent” has been rising in the last two decades, and the share reporting only “good” has been rising too. Conversely, the share reporting that their health is “fair/poor” has been falling for both males and females. Again, this pattern suggests that some of the additional work of the elderly is happening because a greater share of the elderly feel more able to do it.

One other change worth mentioning is that Social Security rules have evolved in a way that allows people to keep working after 65 and still receive at least some benefits. The CBO explains:

“Changes in Social Security policy that relate to the retirement earnings test (RET) have made working in one’s 60s more attractive. The RET specifies an age, an earnings threshold, and a withholding rate: If a Social Security claimant is younger than that age and has earnings higher than the specified threshold, some or all of his or her retirement benefits are temporarily withheld. Those withheld benefits are at least partially credited back in later years. Over time, the government has gradually made the RET less stringent by raising earnings thresholds, lowering withholding rates, and exempting certain age groups. For instance, in the early 1980s, the oldest age at which earnings were subject to the RET was reduced from 71 to 69, and in 2000, that age was further lowered to the FRA. (In 2000, the FRA was 65, and it rose to 66 by 2018.) Lowering the oldest age at which earnings are subject to the RET allowed more people to claim their full Social Security benefits while they continued working.”
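The mechanics of an earnings test of this kind can be sketched as a small function. The test age, threshold, and withholding rate below are illustrative assumptions for the sketch, not current-law parameters:

```python
def benefits_withheld(age, earnings, annual_benefit,
                      test_age=66, threshold=17000.0, withhold_rate=0.5):
    """Benefits temporarily withheld under a stylized retirement earnings test.

    Below test_age, withhold withhold_rate dollars of benefits for each
    dollar of earnings above threshold, capped at the annual benefit.
    All parameters are illustrative, not actual Social Security values.
    """
    if age >= test_age or earnings <= threshold:
        return 0.0
    return min(annual_benefit, (earnings - threshold) * withhold_rate)

print(benefits_withheld(63, 25000.0, 12000.0))  # (25000 - 17000) * 0.5 = 4000.0
print(benefits_withheld(67, 25000.0, 12000.0))  # past the test age: 0.0
```

Raising the threshold, lowering the withholding rate, or lowering the test age each shrinks the withheld amount, which is the sense in which the RET became "less stringent" over time.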

The question of how long in life someone “should” work seems to me an intensely personal decision, but one that will be influenced by health, job options, pay, Social Security rules, rules about accessing retirement accounts and pensions, and more. But broadly speaking, it seems right to me that as Americans live longer and healthier lives, a larger share of them should be remaining in the workforce. The pattern of more elderly people working is also good news for the financial health of Social Security and the broader health of the US economy.

The Charitable Contributions Deduction and Its Historical Evolution

Each year, the Analytical Perspectives volume produced with the proposed US Budget includes a table of “tax expenditures,” which is an estimate of how much various tax deductions, exemptions, and credits reduce federal tax revenues. For example, in 2019 the tax deduction for charitable contributions to education reduced federal tax revenue by $4.1 billion, the parallel deduction for charitable contributions to health reduced federal tax revenue by $3.9 billion, and the deduction for all other charitable contributions reduced federal tax revenue by $36.6 billion.

But why was a deduction for charitable contributions first included in the tax code in 1917? And how has it evolved since then? Nicolas J. Duquette tells the story in “Founders’ Fortunes and Philanthropy: A History of the U.S. Charitable-Contribution Deduction” (Business History Review, Autumn 2019, 93: 553–584, not freely available online, but many readers will have access through library subscriptions).

As Duquette points out, the notion of very rich businesspeople–like Rockefeller and Carnegie–leaving their fortunes to charity was already in place when the federal income tax was enacted in 1913 and when the deduction for charitable contributions was added in 1917. However, there was concern that as the income tax ramped up during World War I, charitable contributions might plummet, and then the government would need to take on the tasks being shouldered by charitable institutions. Duquette writes (footnotes omitted):

In the first years of the income tax, less than 1 percent of households were subject to it, and it had rates no higher than 15 percent. Quickly, however, the tax became an important  revenue instrument; in 1917 the top rate was abruptly raised to 67 percent to pay for World War I. The Congress added a deduction for gifts to charitable organizations to the bill implementing these high rates, not to encourage the wealthy to give their fortunes away (which the most influential and richest men were already doing) but to not discourage their continued giving in light of a larger tax bill. Senator Henry F. Hollis of New Hampshire—who was also a regent of the nonprofit Smithsonian Institution—proposed that filers be permitted to exclude from taxable income gifts to “corporations or associations organized and operated exclusively for religious, charitable, scientific, or educational purposes, or to societies for the prevention of cruelty to children or animals.” The senator argued for the change not because he thought it was wise public policy to change the “price” of charitable contributions via a subsidy but because of worries that reduced after-tax income of the very rich would end their philanthropy, shifting burdens the philanthropists had been carrying onto the backs of a wartime government. … Hollis’s amendment to the War Revenue Act of 1917 was accepted unanimously and without controversy.

Notice the implication here that charitable contributions can reasonably be viewed as a one-for-one offset for government spending. The next inflection point for the charitable contributions deduction came after World War II, when top income tax rates had risen very high. As a result, it was literally cheaper to give money to charity than to pay taxes–at least for that select group of taxpayers with very high income levels in the top tax brackets, and especially business leaders who held much of their wealth in the form of corporate stock that would incur large capital gains taxes if sold. Duquette writes:

For the very rich, especially entrepreneurs like Carnegie and Rockefeller who grew their wealth through business expansion, charitable gifts of corporate stock avoided multiple taxes. Most obviously, their giving reduced their income tax, but under the deduction’s rules such gifts additionally avoided capital gains taxation. Furthermore, wealth given away was wealth not held at death, so giving during life also reduced the size of the donor’s taxable estate. When the U.S. Congress raised income tax rates to pay for the war and defense costs of the mid-twentieth century, it created a situation where many of the richest American families found that by giving their fortunes to a foundation they avoided more in taxation than they would have received in proceeds for selling shares of stock. Foundations flourished. … [F]or several years in the middle of the twentieth century, it was quite possible for stock donations to be strictly better than sales of shares for households with high incomes and high capital gains.

Here’s an illustrative figure from Duquette. He explains:

Figure 1 plots the tax price of donating stock for various high-income tax brackets and capital gains ratios over the period 1917–2017. During World War I and for several years following World War II, wealthy industrialists with large unrealized capital gains facing the very highest tax rates were better off donating shares than selling them, even if they had no interest in philanthropy. Taxpayers with lower θ [a measure of the degree of capital gains available to the potential donor] or with taxable incomes not quite in the highest tax bracket may not have been literally better off making a donation in each of these years, but they nevertheless surrendered very little after-tax income by making a donation relative to selling their stock. Note, too, that this figure presents only tax savings relative to federal income and capital gains taxation; many donors quite likely received additional savings in the form of charitable-contribution deductions from state income taxation and by reducing their taxable estates.
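Duquette's tax-price comparison can be made concrete with a small calculation. The sketch below uses one common way of defining the tax price of giving stock; the specific rates (a 91% top income tax rate, a 25% capital gains rate, mid-century-style values) and the gains ratio θ = 0.9 are illustrative assumptions:

```python
def tax_price_of_donation(t_income, t_capgains, theta):
    """Tax price of donating $1 of stock rather than selling it.

    Selling nets after-tax proceeds of (1 - theta * t_capgains) per dollar
    of value; donating forgoes those proceeds but saves t_income via the
    deduction. The net "price" is 1 - theta * t_capgains - t_income.
    A negative price means the donor is strictly better off giving the
    shares away than selling them.
    """
    return 1 - theta * t_capgains - t_income

# Illustrative mid-century top bracket: 91% income tax, 25% capital gains,
# with 90% of the stock's value being unrealized gain (theta = 0.9).
price = tax_price_of_donation(0.91, 0.25, 0.9)
print(f"tax price of giving: {price:.3f}")  # negative: donating beats selling
```

With a lower income tax rate, say 37%, the same calculation gives a clearly positive price, which is why the "better off donating than selling" situation was specific to the era of very high top brackets.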

The surge of charitable giving by the wealthy in the 1950s and into the 1960s, in response to these tax incentives, led to two counterreactions.

One was that those with high incomes began to use charitable foundations as a way of preserving family wealth and power.

Before 1969, there were few checks on the governance of family foundations or their handling of shareholder power. To entrepreneurs who had built large enterprises from scratch, the foundations presented an appealing way to have the benefit of selling shares without losing control of the business. Corporate shares sold to strangers could not be voted in line with the seller’s preferences; shares given to heirs and the heirs of heirs could lead to familial factionalism and, eventually, sales of shares by the least committed cousins; but a family foundation holding shares of stock and voting those shares as a bloc could maintain family control of a firm, however much the siblings and cousins may have squabbled at the foundation’s board meetings. Even better, family foundations could pay family members generous salaries to direct and manage the foundation, allowing them to continue to benefit from the profits redounding to the foundation’s stockholding. Although many industrialists gave directly to specific charities, the foundation vehicle had the additional benefit of being able to leave corporate control to one’s heirs through a single untaxed legal entity. Without the structure of a foundation, meeting the costs of the estate tax might force a family to sell shares below the 51 percent level of corporate control, or heirs might not coordinate their share voting as a bloc. …

A 1982 survey found that half of the largest foundations established from 1940 to 1969 were begun with a gift of stock large enough to control a firm and that founders rated tax motivations as an important factor. This was true for few foundations established before 1939, when the wealthy would not have been better off giving than selling their shareholdings. … Some corporate foundations were demonstrated to have made loans at below-market rates or to have made other suspicious business deals with their sponsoring firms.44 Private foundations further extended the insider control of corporations through maneuvering to conceal financial information or consolidate votes during shareholder elections.45 Of the thirteen largest foundations that accounted for a large share of all foundation assets, twelve were controlled by a tight-knit and highly interlocked “power elite,” undermining the case that tax benefits to foundations served the public.

These uses of charitable foundations became something of a scandal, and they were highly restricted or outlawed by the Tax Reform Act of 1969.

The other counterreaction, related to the first, was a growing awareness that the deduction for charitable contributions was really a tax break for the rich. Taxpayers have a choice when filling out their taxes: they can take the “standard deduction,” or they can itemize their deductions. The usual pattern in recent decades has been that only about one-third of tax returns itemize deductions, and those tend to be filed by people with higher incomes (who also have other deductions large enough to make itemizing worthwhile). In addition, a person in the highest tax brackets saves more money from an additional $1 of tax deductions than a person in lower tax brackets.
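That last point is simple arithmetic: an extra dollar of itemized deductions saves the taxpayer their marginal rate. A quick illustration, using a few stand-in marginal rates (not tied to any particular year's bracket schedule):

```python
# For an itemizer, $1 of additional deductions reduces taxable income
# by $1, saving tax equal to the marginal rate. Rates below are
# illustrative placeholders.
for marginal_rate in (0.12, 0.24, 0.37):
    saving = 1.00 * marginal_rate
    print(f"marginal rate {marginal_rate:.0%}: $1 of deductions saves ${saving:.2f}")
```

So the same dollar of charitable giving is subsidized far more heavily for a top-bracket taxpayer than for a modest-income one, and not at all for a non-itemizer.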

Another important factor is that by the 1970s, the role of government in providing education, health, and support for the poor and elderly had increased quite a lot since the original introduction of the deduction for charitable contributions in 1917. Taking these and other factors together, Duquette explains:

The result was a shift from the long-standing perspective of policymakers that the deduction protected philanthropic contributions to social goods and saved the Treasury money to a more skeptical and economistic perspective that the deduction was an implicit cost that must be justified by its benefits. …

In particular, Martin Feldstein’s groundbreaking econometric studies of the deduction’s effectiveness, supported by Rockefeller III, reframed the deduction as a “tax expenditure.” Instead of asking how much less the government needed to spend thanks to philanthropy, Feldstein asked how much the deduction cost the Treasury relative to the additional giving it induced. This tax price (described above) could be quantified relative to “treasury neutrality”—that is, whether it induced more dollars in giving than the federal government lost in tax revenue for having it. Feldstein’s answer was reassuring. He found that the deduction encouraged more giving than it cost in uncollected taxes. But his work elided the long-standing distinction between the philanthropy of the very rich and the mere giving of ordinary people.
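The treasury-neutrality test can be reduced to a rule of thumb: under a constant-elasticity view of giving, the deduction induces more giving than it costs in revenue when the price elasticity of giving exceeds one in absolute value. The elasticity values below are placeholders for illustration, not Feldstein's estimates:

```python
# Sketch of the "treasury neutrality" rule of thumb: with a
# constant-elasticity response, giving rises by roughly |elasticity|
# percent for each 1 percent fall in the tax price, so the deduction
# induces more giving than it costs when |elasticity| > 1.
def passes_treasury_neutrality(price_elasticity):
    return abs(price_elasticity) > 1.0

print(passes_treasury_neutrality(-1.3))  # elastic giving: True
print(passes_treasury_neutrality(-0.5))  # inelastic giving: False
```

On this framing, the whole policy debate collapses into an empirical question about one parameter, which is exactly why the econometric estimates became so contested.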

In the last few decades, the role of the deduction for charitable contributions has been much diminished. Top marginal tax rates were cut in the 1980s, making the deduction less attractive. “Nevertheless, with reduced tax incentives, giving by the rich fell sharply. Households in the top 0.1 percent of the income distribution reduced the share of income they donated by half from 1980 to 1990, concurrent with the reduced value of the deduction over that period. In the aggregate, charitable giving overall fell from just over 2 percent of GDP in 1971 to its lowest postwar level, 1.66 percent of GDP, in 1996.”

In addition, the 2017 Tax Cuts and Jobs Act increased the standard deduction, and the forecasts are that the share of taxpayers who itemize deductions will fall from about one-third to one-tenth.

In short, the deduction for charitable contributions is going to be used by a smaller share of mainly high-income taxpayers, and with reduced incentives for using it. A large share of charitable giving–say, what the average person donates to community projects, charities, or their church–doesn’t receive any benefit from the charitable contributions deduction. Many of the large charitable gifts no longer provide direct services, as government has taken over those tasks.

It seems to me that there is still a sense that the deduction for charitable contributions provides an incentive for big donations from those with high incomes and wealth–an incentive that goes beyond good publicity and naming rights. There may also be some advantage in having nonprofits and charities rally support among big donors, rather than relying on the political process and government grants. But it also seems to me that the public policy case for a deduction for charitable contributions is as weak as it has ever been in the century since it was first put into place.

Save the Whales, Reduce Atmospheric Carbon

When it comes to holding down the concentrations of atmospheric carbon, I’m willing to consider all sorts of possibilities, but I confess I had never considered whales. Ralph Chami, Thomas Cosimano, Connel Fullenkamp, and Sena Oztosun have written “Nature’s Solution to Climate Change: A strategy to protect whales can limit greenhouse gases and global warming” (Finance & Development, September 2019, related podcast is here).

Here’s how they describe the “whale pump” and the “whale conveyor belt”:

Wherever whales, the largest living things on earth, are found, so are populations of some of the smallest, phytoplankton. These microscopic creatures not only contribute at least 50 percent of all oxygen to our atmosphere, they do so by capturing about 37 billion metric tons of CO2, an estimated 40 percent of all CO2 produced. To put things in perspective, we calculate that this is equivalent to the amount of CO2 captured by 1.70 trillion trees—four Amazon forests’ worth … More phytoplankton means more carbon capture.

In recent years, scientists have discovered that whales have a multiplier effect of increasing phytoplankton production wherever they go. How? It turns out that whales’ waste products contain exactly the substances—notably iron and nitrogen—phytoplankton need to grow. Whales bring minerals up to the ocean surface through their vertical movement, called the “whale pump,” and through their migration across oceans, called the “whale conveyor belt.” Preliminary modeling and estimates indicate that this fertilizing activity adds significantly to phytoplankton growth in the areas whales frequent. …

What’s the potential effect if whales and their environment were protected, so that the total number of whales increased?

If whales were allowed to return to their pre-whaling number of 4 to 5 million—from slightly more than 1.3 million today—it could add significantly to the amount of phytoplankton in the oceans and to the carbon they capture each year. At a minimum, even a 1 percent increase in phytoplankton productivity thanks to whale activity would capture hundreds of millions of tons of additional CO2 a year, equivalent to the sudden appearance of 2 billion mature trees. …
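The “hundreds of millions of tons” claim follows directly from the article's own figures, which a few lines of arithmetic confirm:

```python
# Back-of-the-envelope check using the article's inputs: phytoplankton
# capture about 37 billion metric tons of CO2 per year, so a 1 percent
# increase in productivity captures 1 percent of that amount.
phytoplankton_capture = 37e9            # metric tons of CO2 per year
extra_from_whales = 0.01 * phytoplankton_capture
print(f"{extra_from_whales:,.0f} tons")  # 370,000,000 tons
```

That is, on the order of 370 million additional tons of CO2 a year, squarely in the “hundreds of millions” range the authors describe.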

We estimate the value of an average great whale by determining today’s value of the carbon sequestered by a whale over its lifetime, using scientific estimates of the amount whales contribute to carbon sequestration, the market price of carbon dioxide, and the financial technique of discounting. To this, we also add today’s value of the whale’s other economic contributions, such as fishery enhancement and ecotourism, over its lifetime. Our conservative estimates put the value of the average great whale, based on its various activities, at more than $2 million, and easily over $1 trillion for the current stock of great whales. …
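Their valuation method is, at heart, a present-value calculation: discount a whale's lifetime stream of carbon and other services back to today. The sketch below shows the mechanics only; the per-year dollar values, lifespan, and discount rate are my assumed placeholders, not the authors' inputs:

```python
# Minimal discounting sketch of the valuation approach.
# All numeric inputs below are hypothetical, chosen only to
# illustrate how a multimillion-dollar figure can arise.
def present_value(annual_benefit, years, rate):
    """Discounted value of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

carbon_value_per_year = 30_000   # $/year: tons sequestered x carbon price (assumed)
other_services = 50_000          # $/year: fisheries, ecotourism (assumed)
lifespan_years = 60              # assumed working lifespan
discount_rate = 0.02             # assumed discount rate

pv = present_value(carbon_value_per_year + other_services,
                   lifespan_years, discount_rate)
print(f"present value per whale: ${pv:,.0f}")
```

Under these made-up inputs the present value lands in the low millions of dollars per whale, which shows how sensitive the headline number is to the assumed carbon price and discount rate.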

I’ll leave for another day the question of what international rules or cross-country payments might be needed to help whale populations rebuild. I’ll also leave for another day the nagging thought, from that cold, rational section in the back of my brain, that if a substantial increase in phytoplankton is a useful way to hold down atmospheric carbon, whales are surely not the only way to accomplish this goal. But it’s a useful reminder that limiting the rise of carbon concentrations in the atmosphere is an issue that can be addressed from many directions.

A Funny Thing Happened on the Way to the Interest Rate Cut

Last week, the Federal Open Market Committee announced that it would “lower the target range for the federal funds rate to 1-3/4 to 2 percent.” The previous target range had been 2 to 2-1/4 percent.

As usual, the change raises further questions. Less than a year ago, a common belief was that the Fed viewed “normalized” interest rates as being in the target range of 3 to 3-1/4%. Starting in 2015, the Fed had been steadily raising the target zone for the federal funds interest rate, reaching a range of 2-1/4 to 2-1/2% by December 2018. But then in July 2019 there was a cut of 1/4%; now there has been another cut of 1/4%, and a number of commenters are suggesting that further cuts are likely.

So should this succession of interest rate cuts be viewed as a detour on the road to the Fed’s desired target range for the federal funds interest rate of 3 to 3-1/4%? Back in the mid-1990s, for example, Fed Chairman Alan Greenspan famously held off on raising the federal funds interest rate for some years because he believed (as it turned out, correctly) that the economic expansion of that time was not yet running into any danger of higher inflation or other macroeconomic limits.

Or, on the other hand, should the fall in interest rates be considered a prelude to larger cuts in the next year or two? For example, President Trump has advocated via Twitter that the Fed should be pushing interest rates down to zero percent or less.

Here, I’ll duck making predictions about what will happen next, and focus instead on a potentially not-so-funny thing that happened on the way to the interest rate cuts. What happened was that when the Fed wanted to reduce interest rates, one of the two main tools that the Fed now uses had its interest rate soar upward instead–and required a large infusion of funds from the Fed. Some background will be helpful here.

When the Fed decided to start raising the federal funds interest rate in 2015, it also needed to use new policy tools to do so. The old policy tools from before the Great Recession relied on the fact that the reserves that banks held with the Federal Reserve system were close to the minimum required level. For example, in mid-2008, banks were required to hold about $40 billion in reserves with the Fed, and they held roughly an extra $2 billion above that amount. But today, after years of quantitative easing, banks are required to hold about $140 billion of reserves with the Fed, but instead are holding about $1.5 trillion in total reserves.

With these very high levels of bank reserves, the old-style monetary policies you may remember from a long-ago intro econ class–open market operations, changing the reserve requirement, or changing the discount rate–won’t work any more. So the Fed invented two new ways of conducting monetary policy. For an overview of the change, Jane E. Ihrig, Ellen E. Meade, and Gretchen C. Weinbach discuss “Rewriting Monetary Policy 101: What’s the Fed’s Preferred Post-Crisis Approach to Raising Interest Rates?” in the Fall 2015 issue of the Journal of Economic Perspectives.

One is to change the interest rate that the Federal Reserve pays on excess reserves held by banks. To imagine how this works, say that a bank can get a 2% return from the Fed on its excess reserves. Then the Fed cuts this interest rate to 1.8%. The lower return on its reserve holdings should encourage the bank to do some additional lending.
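To put rough numbers on that channel (the bank's reserve balance and both rates below are hypothetical):

```python
# Toy illustration of the interest-on-excess-reserves channel.
# A rate cut lowers the return on parking cash at the Fed,
# nudging the bank toward alternative lending.
excess_reserves = 500e6          # hypothetical bank's excess reserves, $
ioer_before, ioer_after = 0.020, 0.018

return_before = excess_reserves * ioer_before
return_after = excess_reserves * ioer_after
print(f"annual return before cut: ${return_before:,.0f}")
print(f"annual return after cut:  ${return_after:,.0f}")
```

The $1 million a year this hypothetical bank loses by sitting on its reserves is exactly the margin that should push some of that cash toward other safe, short-term lending.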

However, the Fed in 2015 couldn’t be sure that moving the interest rate on excess reserves would give it enough control over the federal funds interest rate that it wished to target. Thus, the Fed stated that “it intended to use an overnight reverse repurchase agreement (ON RRP) facility as needed as a supplementary policy tool to help control the federal funds rate … The Committee stated that it would use an ON RRP facility only to the extent necessary and will phase it out when it is no longer needed to help control the funds rate.”

So what is a repurchase agreement, or a reverse repurchase agreement, and how is the Fed using them? A repo agreement is a way for parties holding cash to lend it out, overnight, to parties that would like to borrow that cash overnight. Contractually, however, it works as a pair of sales: the cash borrower sells an asset, like a US Treasury bond, to the cash lender, and agrees to repurchase that asset the next day at a slightly higher price. Here’s a readable overview of the repo market from Bloomberg.
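The interest rate in a repo is implicit in the gap between the two prices. A stylized overnight example, with made-up prices and the money-market convention of a 360-day year:

```python
# Stylized repo arithmetic (all numbers are made up): the implied
# interest rate is the price gap, as a fraction of the cash lent,
# annualized over the term of the loan.
sale_price = 1_000_000.00        # cash lent overnight against a Treasury
repurchase_price = 1_000_055.56  # cash repaid the next day
days = 1

implied_rate = (repurchase_price - sale_price) / sale_price * 360 / days
print(f"implied repo rate: {implied_rate:.2%}")  # about 2%
```

So a roughly $56 overnight price gap on $1 million of cash corresponds to an annualized rate of about 2%, which is where the repo rate had been sitting before the events described below.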

The repo market should work in tandem with the interest rate on excess reserves. Both involve something banks can do with their cash reserves: they can either leave the reserves with the Fed or lend that cash out in the repo market. In both cases, the interest rate on excess reserves and the repo interest rate are rates for safe, short-term lending, which is the market the Fed is using to control the federal funds interest rate.

The story of what went wrong last week can be told in two figures. When the Fed was announcing that it was going to reduce interest rates, the interest rate in the market for repurchase agreements suddenly soared instead. This interest rate had been hovering a little above 2%, just about where the Fed wanted it. But when the Fed announced that it wanted a lower federal funds interest rate, the repo rate spiked.

Meanwhile, the shaded areas in this second figure show the target zone for the federal funds interest rate. You can see from the blue line that the actual or “effective” federal funds rate was in the desired zone in late 2018, then rises in December 2018 when the Fed used the interest rate on excess reserves as a tool to raise interest rates. When the Fed again adjusted the interest rate on excess reserves to cut interest rates in June 2019, the effective federal funds rate drops. At the extreme right of the figure, you can see the tiny slice of the new, lower target zone for the federal funds interest rate that the Fed adopted last week. But notice that before the effective federal funds interest rate (blue line) falls, it first spikes upward, in the wrong direction.

In short, the interest rate in the overnight repo market spiked, and for a day or two, the Fed was unable to keep the federal funds interest rate in the desired zone.

In one way, this is no big deal. The Fed did get the interest rate back under control. It responded to the spike in the overnight repo rate by offering to provide that market with up to $75 billion in additional lending, per day, for the next few weeks. With this spigot of cash available for borrowing, there’s no reason for this interest rate to spike again.

But at a deeper level, there’s some reason for concern. The Fed has been hoping to use the interest rate on excess reserves as its main monetary policy tool, but last week, that tool wasn’t enough. In hindsight, financial analysts can point to this or that reason why the overnight rate suddenly spiked to 10%. A common story seems to be that there was a rise in demand for short-term cash from companies making tax payments, but a surge in Treasury borrowing had temporarily soaked up a lot of the available cash, and bank reserves at the Fed have been trending down for a time, which also means less cash potentially available for short-run lending. But at least to me, those kinds of reasons are both plausible and smell faintly of after-the-fact rationalization. Last week was the first time in a decade that the Fed had needed to offer additional cash in the repo market.

In short, something went wrong last week, unexpectedly and without advance warning, with the Fed’s preferred back-up tool for conducting monetary policy. If or when the Fed tries to reduce interest rates again, the functioning of its monetary policy tools will be the subject of hyperintense observation and speculation in financial markets.