Digital Dividends and Development

"Digital technologies have spread rapidly in much of the world. Digital dividends—the broader development benefits from using these technologies—have lagged behind. In many instances digital technologies have boosted growth, expanded opportunities, and improved service delivery. Yet their aggregate impact has fallen short and is unevenly distributed." Those are the opening words of the 2016 World Development Report from the World Bank, which focuses on the theme of "Digital Dividends." The report does a nice job of wrapping its arms around this big unruly topic, with lots of concrete facts and examples. Here's a quick overview of some points that caught my eye.

The evidence on the spread of digital technologies around the world in the last decade or so is quite remarkable. The dark solid line rising sharply in this figure shows the spread of mobile phone technology to more than 80% of the population. Internet and mobile broadband are growing too, as the lines at the bottom of the figure show, but access to mobile phones has actually outstripped access to improved water, electricity, improved sanitation, and secondary schools.

But one can view digital access as half-full or half-empty. As the report notes: "First, nearly 60 percent of the world’s people are still offline and can’t participate in the digital economy in any meaningful way. … The internet, in a broad sense, has grown quickly, but it is by no means universal. For every person connected to high-speed broadband, five are not. Worldwide, some 4 billion people do not have any internet access, nearly 2 billion do not use a mobile phone, and almost half a billion live outside areas with a mobile signal."

In what ways can the spread of digital technologies benefit the process of economic development, or economic growth more broadly? Digital technologies are what economists sometimes call "general purpose" technologies; they can be applied in a very wide variety of contexts.

"Perhaps the greatest contribution to growth comes from the internet’s lowering of costs and thus from raising efficiency and labor productivity in practically all economic sectors. Better information helps companies make better use of existing capacity, optimizes inventory and supply chain management, cuts downtime of capital equipment, and reduces risk. In the airline industry, sophisticated reservation and pricing algorithms increased load factors by about one-third for U.S. domestic flights between 1993 and 2007. The parcel delivery company UPS famously uses intelligent routing algorithms to avoid left turns, saving time and about 4.5 million liters of petrol per year. Many retailers now integrate their suppliers in real-time supply chain management to keep inventory costs low. Vietnamese firms using e-commerce had on average 3.6 percentage point higher TFP [total factor productivity] growth than firms that did not use it. Chinese car companies that are more sophisticated users of the internet turn over their inventory stocks five times faster than their less savvy competitors. And Botswana and Uruguay maintain unique ID and trace-back systems for livestock that fulfill requirements for beef exports to the EU, while making the production process more efficient."

What about specifically helping the poor in developing countries?

"The biggest gains from digital technologies for the poor are likely to come from lower information and search costs. Technology can inform workers about prices, inputs, or new technologies more quickly and cheaply, reducing friction and uncertainty. That can eliminate costly journeys, allowing more time for work and reducing risks of crime or traffic accidents. Using technology for information on prices, soil quality, weather, new technologies, and coordination with traders has been extensively documented in agriculture … In Honduras, farmers who got market price information via short message service (SMS) reported an increase of 12.5 percent in prices received. In Pakistan, mobile phones allow farmers to shift to more perishable but higher return cash crops, reducing postharvest losses from the most perishable crops by 21–35 percent. The impacts of reduced information asymmetries tend to be larger when learning about information in distant markets or among disadvantaged farmers who face more information constraints. …"

"In 12 countries surveyed in Africa, 65 percent of people believe that their family is better off because they have mobile phones, whereas only 20 percent disagree (14.5 percent not sure). And 73 percent say mobile phones help save on travel time and costs, with only 10 percent saying otherwise. Two-thirds believe that having a mobile phone makes them feel more safe and secure."

To me, one intriguing application of digital technologies is to offer people a proof of identification. One of the most remarkable efforts along these lines is India’s Aadhaar system, in which about 900 million people have a 12-digit number which is linked to biometric information.

"Identity should be a public good. Its importance is now recognized in the post-2015 development agenda, specifically as a Sustainable Development Goal (SDG) target to “promote peaceful and inclusive societies for sustainable development, provide access to justice for all, and build effective, accountable, and inclusive institutions at all levels.” One of the indicators is to “provide legal identity for all, including birth registration, by 2030.” The best way to achieve this goal is through digital identity (digital ID) systems, central registries storing personal data in digital form and credentials that rely on digital, rather than physical, mechanisms to authenticate the identity of their holder. …

"India’s Aadhaar program dispenses with the card altogether, providing remote authentication based on the holder’s fingerprints or iris scan. Online and mobile environments require enhanced authentication features—such as electronic trust services, which include e-signatures, e-seals, and time stamps—to add confidence in electronic transactions. Mobile devices offer a compelling proposition for governments seeking to provide identity credentials and widespread access to digital services. In Sub-Saharan Africa, for example, more than half of the population in some countries is without official ID, but more than two-thirds of the residents in the region have a mobile phone subscription. The developing world is home to more than 6 billion of the world’s 7 billion mobile subscriptions, making this a technology with considerable potential for registration, storage, and management of digital identity. … Nigeria’s e-ID revealed 62,000 public sector “ghost workers,” saving US$1 billion annually. But the most important benefit may be in better integrating marginalized or disadvantaged groups into society. Digital technologies also enable the poor to vote by providing them with robust identification and by curtailing fraud and intimidation through better monitoring."

The report also discusses potential dangers of the spread of digital technology, including risks of greater concentration of large firms, a possible rise in economic inequality, and potential for government control of information. For example, there is some evidence of a "hollowing out" of jobs in a number of developing economies. The countries in the figure below are ranked from left to right by the annual change in the share of medium-skill jobs, shown by the medium-green bar. The darkest bars show the change in high-skilled jobs, while the lightest bars show the change in low-skilled jobs. A number of economies (although not China, shown on the far right) are seeing a drop in the share of jobs that involve medium skills.

A couple of final thoughts:

First, the upside of digital technologies is that they are general purpose, with broad application. The corresponding downside is that such technologies need to be applied, and wisely applied, in a broad variety of contexts to have their most powerful effect. As the World Bank report notes:

Access to the internet is critical, but not sufficient. The digital economy also requires a strong analog foundation, consisting of regulations that create a vibrant business climate and let firms leverage digital technologies to compete and innovate; skills that allow workers, entrepreneurs, and public servants to seize opportunities in the digital world; and accountable institutions that use the internet to empower citizens.

I confess that part of this explanation just made me laugh. Only international bureaucrats at a place like the World Bank could unselfconsciously write that what's needed first is "regulations," because apparently we all know that regulations are what "create a vibrant business climate." Well, at least we can agree that a favorable business climate is what's important! Along with human capital and good governance, of course.

The other point is that although the report is understandably focused on how digital technologies affect productivity and output, it also raises in a number of places the insight that many of the benefits of digital technology may not be captured very well by the economic values alone. For example, the report notes:

The digital revolution has brought immediate private benefits—easier communication and information, greater convenience, free digital products, and new forms of leisure. It has also created a profound sense of social connectedness and global community.

The connectedness and information flows of digital technology provide a very wide range of benefits. In economic terms, we measure those benefits by what users pay for the service. But like many innovations, what is provided was literally not possible to receive–or only possible at an extremely high price–before the innovation occurred. On a personal level, I receive very large benefits from access to the internet, by which I include use of computer, phone, and television. Thanks to the magic of somewhat competitive markets and their ongoing drive for innovation, what I actually pay for those services seems considerably less to me than the value of the benefits I receive.

Some Economics for Martin Luther King Jr. Day

On November 2, 1983, President Ronald Reagan signed the legislation establishing a federal holiday for the birthday of Martin Luther King Jr., to be celebrated each year on the third Monday in January. As the legislation that passed Congress said: "such holiday should serve as a time for Americans to reflect on the principles of racial equality and nonviolent social change espoused by Martin Luther King, Jr." Of course, the case for racial equality stands fundamentally upon principles of justice, not economics. But here are four economics-related thoughts for the day drawn from past posts. (This is a revised and altered version of a post that first ran on this holiday in 2015.)

1) Inequalities of race and gender impose large economic costs on society as a whole, because one consequence of discrimination is that it hinders people in developing and using their talents. In "Equal Opportunity and Economic Growth" (August 20, 2012), I wrote:

——–

A half-century ago, white men dominated the high-skilled occupations in the U.S. economy, while women and minority groups were often barely seen. Unless one holds the antediluvian belief that, say, 95% of all the people who are well-suited to become doctors or lawyers are white men, this situation was an obvious misallocation of social talents. Thus, one might predict that as other groups had more equal opportunities to participate, it would provide a boost to economic growth. Pete Klenow reports the results of some calculations about these connections in "The Allocation of Talent and U.S. Economic Growth," a Policy Brief for the Stanford Institute for Economic Policy Research.

Here's a table that illustrates some of the movement to greater equality of opportunity in the U.S. economy. White men are no longer 85% and more of the managers, doctors, and lawyers, as they were back in 1960. High skill occupation is defined in the table as "lawyers, doctors, engineers, scientists, architects, mathematicians and executives/managers." The share of white men working in these fields is up by about one-fourth. But the share of white women working in these occupations has more than tripled; of black men, more than quadrupled; of black women, more than octupled.


Moreover, wage gaps for those working in the same occupations have diminished as well. "Over the same time frame, wage gaps within occupations narrowed. Whereas working white women earned 58% less on average than white men in the same occupations in 1960, by 2008 they earned 26% less. Black men earned 38% less than white men in the typical occupation in 1960, but had closed the gap to 15% by 2008. For black women the gap fell from 88% in 1960 to 31% in 2008."

Much can be said about the causes behind these changes, but here, I want to focus on the effect on economic growth. For the purposes of developing a back-of-the-envelope estimate, Klenow builds up a model with some of these assumptions: "Each person possesses general ability (common to all occupations) and ability specific to each occupation (and independent across occupations). All groups (men, women, blacks, whites) have the same distribution of abilities. Each young person knows how much discrimination they would face in any occupation, and the resulting wage they would get in each occupation. When young, people choose an occupation and decide how much to augment their natural ability by investing in human capital specific to their chosen occupation."

With this framework, Klenow can then estimate how much of U.S. growth over the last 50 years or so can be traced to greater equality of opportunity, which encouraged many women and members of minority groups with the underlying ability to see it as worthwhile to invest more in human capital.

"How much of overall growth in income per worker between 1960 and 2008 in the U.S. can be explained by women and African Americans investing more in human capital and working more in high-skill occupations? Our answer is 15% to 20% … White men arguably lost around 5% of their earnings, as a result, because they moved into lower skilled occupations than they otherwise would have. But their losses were swamped by the income gains reaped by women and blacks."

At least to me, it is remarkable to consider that 1/6 or 1/5 of total U.S. growth in income per worker may be due to greater economic opportunity. In short, reducing discriminatory barriers isn't just about justice and fairness to individuals; it's also about a stronger U.S. economy that makes better use of the underlying talents of all its members.

_____

2) Roland Fryer delivered the Henry and Bryna David Lecture at the National Academy of Sciences on the subject of "21st Century Inequality: The Declining Significance of Discrimination." I discussed this lecture in "The Journey to Becoming a School Reformer" (February 13, 2015). As Fryer tells the story, he was "asked in 2003 to explore the reasons for the social inequality in the United States." Fryer said:

"In two weeks I reported back that achievement gaps that were evident at an early age correlated with many of the social disparities that appeared later in life. I thought I was done. But the logical follow-up question was how to explain the achievement gap that was apparent in 8th grade. I’ve been working on that question for the past 10 years. I am certainly not going to tell you that discrimination has been purged from U.S. culture, but I do believe that these data suggest that differences in student achievement are a critical factor in explaining many of the black-white disparities in our society. It is no longer news that the United States is a lackluster performer on international comparisons of student achievement, ranking about 20th in the world. But the position of U.S. black students is truly alarming. If they were to be considered a country, they would rank just below Mexico in last place among all Organization of Economic Cooperation and Development countries. …

"When do U.S. black students start falling behind? It turns out that development psychologists can begin assessing cognitive capacity of children when they are only nine months old with the Bayley Scale of Infant Development. We examined data that had been collected on a representative sample of 11,000 children and could find no difference in performance of racial groups. But by age two, one can detect a gap opening, which becomes larger with each passing year. By age five, black children trail their white peers by 8 months in cognitive performance, and by eighth grade the gap has widened to twelve months."

Fryer goes on to describe his remarkable work, which seeks to learn from the experience of high-performing charter schools that do very well in bringing many African-American children from low-income families up to expected grade-level academic performance–and better–and then to apply those lessons in the context of actual big-city public schools. As I wrote in that blog post:

It is remarkable to me that most of the cognitive performance gap for eighth-graders is already apparent for five-year-olds. As I've commented on before in "The Parenting Gap for Pre-Preschool" (September 17, 2013), one possible reaction here is to think more seriously about home visitation programs for at-risk children in the first few years of life.

3) For those who would like to know more about the economics of thinking about cause-and-effect in discrimination issues, a good starting point is this interview with Glenn Loury (July 2, 2014). Here's a slice of the discussion from that post:

_____

A standard approach to studying discrimination in labor markets is to collect data on what people earn and their race/ethnicity or gender, along with a number of other variables like years of education, family structure, region where they live, occupation, years of job experience, and so on. This data lets you answer the question: can we account for differences in income across groups by looking at these kinds of observable traits other than race/ethnicity and gender? If so, a common implication is that the problem in our society may be that certain groups aren't getting enough education, or that children from single-parent families need more support–but that a pay gap which can be explained by observable factors other than race/ethnicity and gender isn't properly described as "discrimination." Loury challenges this approach, arguing that many of the observable factors are themselves the outcome of a history of discriminatory practices. He says:

"By that I mean, suppose I have a regression equation with wages on the left-hand side and a number of explanatory variables—like schooling, work experience, mental ability, family structure, region, occupation and so forth—on the right-hand side. These variables might account for variation among individuals in wages, and thus one should control for them if the earnings of different racial or ethnic groups are to be compared. One could put many different variables on the right-hand side of such a wage regression.

Well, many of those right-hand-side variables are determined within the very system of social interactions that one wants to understand if one is to effectively explain large and persistent earnings differences between groups. That is, on the average, schooling, work experience, family structure or ability (as measured by paper and pencil tests) may differ between racial groups, and those differences may help to explain a group disparity in earnings. But those differences may to some extent be a consequence of the same structure of social relations that led to employers having the discriminatory attitudes they may have in the work place toward the members of different groups.

So, the question arises: Should an analyst who is trying to measure the extent of “economic discrimination” hold the group accountable for the fact that they have bad family structure? Is a failure to complete high school, or a history of involvement in a drug-selling gang that led to a criminal record, part of what the analyst should control for when explaining the racial wage gap—so that the uncontrolled gap is no longer taken as an indication of the extent of unfair treatment of the group?

Well, one answer for this question is, “Yes, that was their decision.” They could have invested in human capital and they didn’t. Employer tastes don’t explain that individual decision. So as far as that analyst is concerned, the observed racial disparity would not be a reflection of social exclusion and mistreatment based on race. … But another way to look at it is that the racially segregated social networks in which they were located reflected a history of deprivation of opportunity and access for people belonging to their racial group. And that history fostered a pattern of behavior, attitudes, values and practices, extending across generations, which are now being reflected in what we see on the supply side of the present day labor market, but which should still be thought of as a legacy of historical racial discrimination, if properly understood.

Or at least in terms of policy, it should be a part of what society understands to be the consequences of unfair treatment, not what society understands to be the result of the fact that these people don’t know how to get themselves ready for the labor market.

4) Extensions in the period of copyright over time have meant that the speeches and writings of Martin Luther King Jr. and others in the U.S. civil rights movement are not easily available to, say, students in schools or the general public. This was one example I discussed in a post on "Absurdities of Copyright Protection" (May 13, 2014). The post discusses a paper by Derek Khanna called "Guarding Against Abuse: Restoring Constitutional Copyright," published as R Street Policy Study No. 20 (April 2014). Here, I'll just quote a couple of paragraphs from Khanna.

Excessively long copyright terms help explain why Martin Luther King’s “I Have a Dream” speech is rarely shown on television, and specifically why it is almost never shown in its entirety in any other form. In 1999, CBS was sued for using portions of the speech in a documentary. It lost on appeal before the 11th Circuit. If copyright terms were shorter than 50 years, then those clips would be available for anyone to show on television, in a documentary or to students. When historical clips are in the public domain, learning flourishes. Martin Luther King did not need the promise of copyright protection for “life+70” to motivate him to write the “I Have a Dream” speech. (Among other reasons, because the term length was much shorter at the time.) …

Eyes on the Prize is one of the most important documentaries on the civil rights movement. But many potential younger viewers have never seen it, in part because license requirements for photographs and archival music make it incredibly difficult to rebroadcast. The director, Jon Else, has said that “it’s not clear that anyone could even make ‘Eyes on the Prize’ today because of rights clearances.” The problems facing Eyes on the Prize are a result of muddied and unclear case law on fair use, but also copyright terms that have been greatly expanded. If copyright terms were 14 years, or even 50 years, then the rights to short video clips for many of these historical events would be in the public domain.

Franchise the National Parks?

The idea of franchising the national parks raises images of Mickey Mouse ears on top of Half Dome at Yosemite, or the McDonald's "golden arches" as a scenic backdrop to the Old Faithful geyser in Yellowstone. But that's not what Holly Fretwell has in mind in her essay, "The NPS Franchise: A Better Way to Protect Our Heritage," which appears in the George Wright Forum (2015, vol. 32, number 2, pp. 114-122). Instead, she is suggesting that a number of national parks might be better run as independent nongovernment conservation-minded operations with greater control over their own revenues and spending. In such an arrangement, the role of the National Park Service would be to evaluate the financial and environmental plans of possible franchisees, provide brand-name recognition and a degree of logistical support, and then make sure that each franchisee's announced plans were followed through.

To understand the impetus behind Fretwell's proposal, you need to first face the hard truth that the national parks have severe financial problems, which are manifesting themselves both in decaying infrastructure for human visitors and also in a diminished ability to protect the parks themselves (for example, sewer systems in parks affect both human visitors and environmental protection). Politicians are often happy to set aside more parkland, but spending the money to manage the land is a harder sell. If you accept as a basic constraint that federal spending on park maintenance isn't going to rise, or at least not rise sufficiently, then you are driven to consider other possibilities. Here's Fretwell on the current problems of the National Park Service (footnotes omitted):

As it enters its second century, NPS faces a host of challenges. In 2014, the budget of the National Park Service was $2.6 billion. The maintenance backlog is four times that, at $11.5 billion and growing. According to the National Parks Conservation Association (NPCA), about one-third of the shortfall is for “critical systems” that are essential for park function. Without upgrades, many park water and sewer systems are at risk. A water pipe failure in Grand Canyon National Park during the spring of 2014 cost $25,000 for a quick fix to keep water flowing, but is estimated to cost about $200 million to replace. Yellowstone also has antiquated water and wastewater facilities where past failures have caused environmental degradation. Sewer system upgrades in Yosemite and Grand Teton are necessary to prevent raw sewage from spilling into nearby rivers. Deteriorating electrical cables have caused failures in Gateway National Recreation Area and in Glacier’s historic hotels. Roads are crumbling in many parks. They are patched rather than restored for longevity. Only 10% of park roads are considered to be in better than “fair” condition. At least 28 bridges in the system are “structurally deficient,” and more than one-third of park trails are in “poor” or “seriously deficient” condition.
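The ratio in the passage above is easy to check; a quick back-of-the-envelope calculation, using only the two figures quoted from Fretwell, confirms the "four times" claim:

```python
# Rough check of the NPS "backlog is four times the budget" claim,
# using the 2014 figures quoted in the passage above.
budget_2014 = 2.6   # annual NPS budget, $ billions
backlog = 11.5      # deferred maintenance backlog, $ billions
ratio = backlog / budget_2014
print(f"Backlog is {ratio:.1f}x the annual budget")  # roughly 4.4x
```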

Cultural heritage resources that the parks are set aside to protect are also at risk. Only 40% of park historic structures are considered to be in “good” or better condition and they need continual maintenance to remain that way. Exterior walls are weakening on historic structures such as Perry’s Victory and International Peace Memorial in Ohio, the Vanderbilt Mansion in New York, and the cellhouse in Golden Gate National Recreation Area in California. Weather, unmonitored visitation, and leaky roofs are degrading cultural artifacts. Many of the artifacts and museum collections have never been catalogued. … 

Even though the NPS maintenance backlog is four times the annual discretionary budget, rather than focus funding on maintaining what NPS already has, the system continues to grow. … The continual expansion of park units and acreage without corresponding funding is what former NPS Director James Ridenour called “thinning the blood.” …  The national park system has grown from 25.7 million acres and about 200 units in 1960 to 84.5 million acres and 407 units in 2015. Seven new parks were added under the 2014 National Defense Authorization Act and nine parks were expanded. The growth came with no additional funding for operations or maintenance—more “thinning the blood.”

I've had great family vacations in a number of national parks since I was a child. They were inexpensive to visit then, and they remain cheap. Indeed, there's sometimes an odd moment, when visiting a national park, when you realize that what you just spent at the gift shop, or for a family meal, considerably exceeds what you spent to enter the park. Fretwell writes:

Numerous parks have increased user and entrance fees for the 2015 summer season after seeking public input and Washington approval. Even with the higher fees, a visit to destination parks like Grand Canyon and Yellowstone costs $30 for a seven-day vehicle permit, or just over $1 per person per day for a family of four. … The current low fees to enter units of the national park system typically make up a small portion of the total park visit expense. It has been estimated that the entry fee is less than 2% of park visit costs for visitors to Yellowstone and Yosemite. The bulk of the expenditures when visiting destination parks go to lodging, travel, and food. Higher fees have little effect on visitation to most parks. … Even modest fees (though sometimes large fee increases) could cover the operating costs of some destination parks. About $5 per person per day could cover operations in Grand Canyon National Park, as would just over $10 in Yellowstone.
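Fretwell's per-person figure is simple arithmetic; a minimal sketch, assuming a family of four sharing a single seven-day vehicle permit:

```python
# Cost per person per day of a $30 seven-day vehicle permit,
# assuming a family of four traveling in one vehicle.
permit_cost = 30.00
people, days = 4, 7
per_person_per_day = permit_cost / (people * days)
print(f"${per_person_per_day:.2f} per person per day")  # prints $1.07 -- "just over $1"
```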

An obvious question here is why the parks can't just raise fees on their own, but of course, that choice runs into political constraints as well. It is at least arguable that franchisees could spell out the facilities that need renovating and building, along with other services that could be offered, and then also be able to charge the fees that would cover the costs.

Fretwell recognizes that not all national parks will have enough visitors to work well with a franchise model (for example, some of the huge national parks in Alaska), and a need for direct government spending on such parks will remain. But it's worth remembering that national park visitors tend to have above-average income levels. A franchise proposal can be understood as a way of circumventing the political constraints that first prevent national parks from collecting money, and then fail to allocate sufficient financial resources from other government revenues. A group of franchise proposals would also give national parks a way to move away from "thinning the blood"–that is, from focusing heavily on how to persevere under tight and inflexible financial constraints–and instead offer an infusion of new ideas, along with ways to finance them.

War on Cancer: Redux

In his 1971 State of the Union Address, President Richard Nixon launched what came to be known as the War on Cancer:

“I will also ask for an appropriation of an extra $100 million to launch an intensive campaign to find a cure for cancer, and I will ask later for whatever additional funds can effectively be used. The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease. Let us make a total national commitment to achieve this goal.”

And now, 45 years later in the 2016 State of the Union address, President Barack Obama is relaunching the War on Cancer:

"Last year, Vice President Biden said that with a new moonshot, America can cure cancer. Last month, he worked with this Congress to give scientists at the National Institutes of Health the strongest resources that they’ve had in over a decade. So tonight, I’m announcing a new national effort to get it done. And because he’s gone to the mat for all of us on so many issues over the past 40 years, I’m putting Joe in charge of Mission Control. For the loved ones we’ve all lost, for the families that we can still save, let’s make America the country that cures cancer once and for all."

So how did that first War on Cancer turn out? At the tail end of 2008, just before President Obama took office, David Cutler took a stab at answering that question in "Are We Finally Winning the War on Cancer?" which appeared in the Fall 2008 issue of the Journal of Economic Perspectives (22:4, pp. 3-26). Here's a figure showing the mortality rate from cancer over time.

As Cutler reports, spending on cancer research and treatment rose steadily after Nixon\’s speech, at about 4-5% per year. But as the figure shows, cancer death rates kept rising as well through the 1970s and 1980s. By 1997, the New England Journal of Medicine ran an article noting these trends called \”Cancer Undefeated.\” Perhaps inevitably, that article was soon followed by a sharp decline in cancer mortality. Apparently, Obama is re-enlisting in a war on cancer that has been going pretty well for a couple of decades.

But the War on Cancer has been fought with several different tools–and biomedical research on a \”cure for cancer\” isn\’t the biggest one. Cutler focuses on four main types of cancer: lung, colorectal, female breast, and prostate. After reviewing the evidence on each, he wrote:  

\”[B]ehaviors, screening, and treatment advances for the four cancers I consider were each important in improved cancer survival. Together, they explain 78 percent of the reduction in cancer mortality between 1990 and 2004. Thirty-five percent of reduced cancer mortality is attributable to greater screening—partly through earlier detection of disease, and partly through removal of precancerous adenomas in the colon and rectum. Behavioral factors are next in importance, at 23 percent; the impact of smoking reductions on lung cancer is the single most important factor in this category. Finally, treatment innovation is third in importance, accounting for 20 percent of reduced mortality.

\”The relative importance of these different strategies seems surprising, but it is easily understandable. Despite the vast array of medical technologies, metastatic cancer remains incurable and fatal. The armamentarium of medicine can delay death, but cannot prevent it. Thus, technologies in metastatic settings have only limited effectiveness. Far more important is making sure that people do not get cancer in the first place (prevention) and that cancer is caught early (screening), when it can be successfully treated.\”

From this perspective, emphatic calls for a \”cure for cancer\” highlight a bias in US medicine in favor of later-stage interventions, which are often high-cost, over the early-stage interventions of prevention and screening, which often happen largely outside the health care system but have the potential to save many more lives at much lower cost.

David H. Howard, Peter B. Bach, Ernst R. Berndt, and Rena M. Conti looked at \”Pricing in the Market for Anticancer Drugs\” in the Winter 2015 issue of the Journal of Economic Perspectives (29:1, pp. 139-62). As I\’ve discussed on this blog, before a new anti-cancer drug is approved, various clinical trials and studies are done, and these studies provide an estimate of the median expected extension of life as a result of using the drugs. Then based on the market price of the drug when it is announced, it\’s straightforward to calculate the price of the drug per year of life gained. Their calculations show that back in 1995, new anti-cancer drugs reaching the market were costing about $54,000 to save a year of life. By 2014, the new drugs were costing about $170,000 to save a year of life. This is an increase in cost per year of life saved of roughly 10% per year.
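
The calculation described above is straightforward division. Here is a minimal sketch, using hypothetical numbers (the $50,000 price and 0.3-year median survival gain below are purely illustrative, not figures from the paper):

```python
def cost_per_life_year(price_per_course, median_life_years_gained):
    """Price of a full course of treatment divided by the median
    survival gain observed in the drug's clinical trials."""
    return price_per_course / median_life_years_gained

# Hypothetical drug: $50,000 per treatment course, median survival
# gain of 0.3 years (about 3.6 months).
print(cost_per_life_year(50_000, 0.3))  # about $167,000 per life-year gained
```

The point of the exercise is that even a seemingly moderate price can translate into a very high cost per life-year when the median survival gain is measured in months.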

As both Nixon and Obama can attest, calling for a \”cure for cancer\” with an analogy to putting a person on the moon has been political magic for 45 years now. But if the \”cure for cancer\” rhetoric creates the expectation of a magic pill that lets everyone go back to smoking cigarettes again, I fear that it both misses the point and raises false hopes for cancer patients and their families. I\’m pretty much always a supporter of additional research and development, and like everyone else, I hear anecdotes (which I cannot evaluate) about great new anti-cancer drugs already in the research pipeline. But developing more extremely expensive anti-cancer drugs that often provide only very limited gains in average life expectancy (in a number of cases, only a few months) shouldn\’t be the primary approach here. At least for the near-term, and probably the medium-term too, the primary tools that can keep cancer mortality on a downward trend are more likely to be prevention and early detection, along with ongoing improvements in the health-effectiveness and cost-effectiveness of treatment, not a \”moonshot\” for a cure.

Full disclosure: I\’ve been Managing Editor of the Journal of Economic Perspectives since 1987, and so part of my paycheck came from working to publish the two articles mentioned here. 

Why Do People Say They Aren\’t In the Labor Force?

Although the unemployment rate has dropped to 5%, the labor force participation rate has continued its long-term decline. For those not clear on the difference between these terms: the government counts people as unemployed only if they are both without a job and looking for one. A person who is out of a job but not looking for one is not counted as \”unemployed,\” but instead is counted as out of the labor force.

This definition makes some sense: after all, it would seem peculiar to count those who are happily retired, or spouses who stay home by choice, as \”unemployed.\” But the definition also raises a legitimate concern that the drop in unemployment is only in part the healthy sign of a recovering economy, and could also be the unhealthy sign of potential workers who have given up on looking for a decent job with decent pay. How can we distinguish between these possibilities? One piece of evidence is the reasons that those who are out of the labor force give for why they are not looking for a job. Steven F. Hipple of the US Bureau of Labor Statistics pulls together evidence from a US Census Bureau survey called the Annual Social and Economic Supplement, which is part of the Current Population Survey. \”People who are not in the labor force: why aren\’t they working?\” appears in the BLS newsletter Beyond the Numbers (December 2015, vol. 4, #15).
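
These definitions can be made concrete with a toy calculation. The population counts below are invented purely for illustration:

```python
def labor_stats(employed, unemployed, not_in_labor_force):
    """Unemployment rate and labor force participation rate, using the
    standard definitions: the labor force is the employed plus those
    actively looking for work."""
    labor_force = employed + unemployed
    population = labor_force + not_in_labor_force
    return unemployed / labor_force, labor_force / population

# Hypothetical economy: 90 employed, 10 unemployed, 50 out of the labor force.
u1, lfpr1 = labor_stats(90, 10, 50)   # u = 10.0%, LFPR = 66.7%

# Five job-seekers give up looking: they are no longer counted as
# "unemployed," just out of the labor force. Measured unemployment
# falls even though no one found work.
u2, lfpr2 = labor_stats(90, 5, 55)    # u = 5.3%, LFPR = 63.3%
```

This is exactly the ambiguity in the paragraph above: a falling unemployment rate alongside a falling participation rate can reflect discouragement rather than recovery.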

As a starting point, here\’s the overall civilian labor force participation rate. The long rise in the share of US adults in the labor force from about 1970 up through the mid-1990s is usually associated with a much larger share of women entering the (paid) labor force. There\’s a peak in the late 1990s and a decline since then; because the decline began well before 2007, its main causes appear to be long-run, not rooted in the Great Recession of 2007-2009.

Hipple breaks down this data in various ways. Here, I\’ll focus on some breakdowns by age and gender. First, the age groups from 16-19 and from 20-24 have seen large declines in labor force participation, as shown in the figures below.

Hipple points out that, based on the answers to Census questions, basically all of this decline from 2004-2014 can be accounted for by the larger number who say that they aren\’t looking for work because it would conflict with school. Here are Hipple\’s charts for these two age groups.

Of course, economists are always suspicious of the answers people give in surveys. When people in the 16-24 age bracket say that they aren\’t looking for a job because they are going to school, are they saying that they just aren\’t attracted by the jobs on offer? If jobs that paid (say) twice the minimum wage were readily available to them, maybe more of them would squeeze some time into their schedules for part-time work. It also seems to me that there\’s been a change in cultural expectations about whether high school and college students are expected by their peers and parents to hold at least a part-time job. But overall, the decline in labor force participation by these young adults because of going to school doesn\’t strike me as a major social problem.

At the other end of the age distribution, here\’s the labor force participation rate for those 55 and older. It steadily falls until the early 1990s, largely as a result of people retiring earlier, then steadily rises up to about 2010, and since then has leveled out.

Hipple\’s survey evidence compares 2004 to 2014, a period when labor force participation among older workers was rising. The survey evidence shows that older workers became less likely to say that they are out of the labor force because of retirement or home responsibilities, but more likely to say that they are out of the workforce because they are ill or disabled. Here are Hipple\’s figures for the 55-64 age group and then the 65-and-over age group.

From a medium-run perspective, surely one of the groups hardest-hit by the Great Recession was the near-elderly who lost jobs and ended up forced into retirement several years earlier than expected. Many in this group also found that, in an economic environment of low interest rates, their savings brought in less interest income than they had reason to expect. But from a long-run perspective, the pattern of labor force participation for those over 55 is consistent with two counterbalancing factors: on one side, average retirement ages are rising and people are working until later ages, which tends to raise labor force participation for this group; on the other side, the share of the over-55 population that is well past the common retirement age is rising, which tends to lower labor force participation rates for this group.

Finally, what about those in the 25-54 age bracket, sometimes called \”prime age\” workers? In some ways, when the issue is whether the fall in unemployment is just masking people leaving the labor force altogether, this age group is of greatest concern. This group shows a steadily rising labor force participation rate from 1950 up to about 1990, then a leveling out of that growth in the 1990s, and a decline since about 2000.

In this case, Hipple\’s discussion emphasizes the different reasons given by men and women in this age bracket as to why they are out of the labor force. Men mostly say that they are out of the labor force because they are ill or disabled. Women mostly say that they are out of the labor force because of home responsibilities.

Again, one should be alert to the likelihood that the reasons people give for being out of the labor force are strongly shaped both by social custom and by the labor market opportunities available to them. For example, one suspects that in 2016 it remains both more socially acceptable, and a more accurate reflection of the division of labor in many households, for a woman to report \”home responsibilities\” as a reason for not working than for a man. Hipple points out that men with lower levels of education (high school or less) are more likely to be out of the labor force as a result of being ill or disabled, and women with lower levels of education are more likely to be out of the labor force as a result of home responsibilities. If more low-skilled men had jobs available that didn\’t involve a lot of physical labor, it\’s likely that fewer of them would report being ill or disabled. If more low-skilled women had decently-paid jobs available, they would have more reason to rearrange home responsibilities to suit the job.

The reasons people give for economic actions aren\’t the end of the story, but they do matter, because they tell us something about how people perceive their situations.

I\’ve commented on the falling labor force participation rate and its relationship to the unemployment rate a number of times in the past. Here are a few of those posts with various differing angles: For international context, \”Putting U.S. Labor Force Participation in Context\” (February 27, 2015). For discussions of how to interpret the fall in unemployment rates and the overall health of the labor market in the context of the dropping labor force participation rates, see \”How Tight is the US Labor Market?\” (October 26, 2015),  \”Underutilized Labor in the US Economy\” (November 24, 2014), and \”Unemployment and Labor Force Participation: Revisiting the Puzzle\” (July 23, 2014).

Marriage: Homogamy or Heterogamy

Do people have a tendency to marry those with educational and other backgrounds similar to their own? Social scientists, with their gift for turning simple ideas into jargon, call this \”homogamy.\” Conversely, if people have a tendency to marry those with different educational or other backgrounds, social scientists would say that marriage is \”heterogamous.\” At a given point in time, the question of whether marriage is homogamous or heterogamous reveals something about the degree of mixing across socioeconomic classes in a society. Over time, a society with greater homogamy is likely to also have more inequality.

Robert D. Mare compiles and presents evidence on \”Homogamy in Two Gilded Ages: Evidence from Intergenerational Social Mobility Data,\” in the Annals of the American Academy of Political and Social Science (January 2016, pp. 117-139). (The journal is not freely available online, but many readers will have access through library subscriptions.) The earlier research on this subject looked at data going back to about 1940, and it produced measures of what proportion of marriages were between people with a similar education level, which looked like this:

An obvious concern with this graph is what to make of the single data point for 1940, based on Census data. Is there a reason why marriage homogamy might have dropped so much from 1940 to 1960? Or is there just something in the way the 1940 data was collected or tabulated that makes it not directly comparable to the later data? In this paper, Mare makes use of previously unutilized data on what adults report about the marriages of their parents to extend this series back in time. He finds that there is indeed a downward pattern of marriage homogamy in the first half of the 20th century.
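
A standard way to quantify homogamy in data like this is to compare the observed share of same-education marriages with the share expected if spouses paired off at random. The tiny two-category table below is invented purely for illustration, not taken from Mare\’s paper:

```python
# Rows: wife's education (low, high); columns: husband's education (low, high).
# Cell counts are a hypothetical sample of 100 marriages.
table = [[40, 10],
         [10, 40]]

total = sum(sum(row) for row in table)
observed_same = sum(table[i][i] for i in range(2)) / total

# Expected share of same-education pairs under random matching,
# given the marginal education distribution of each spouse.
row_shares = [sum(row) / total for row in table]
col_shares = [sum(table[i][j] for i in range(2)) / total for j in range(2)]
expected_same = sum(row_shares[k] * col_shares[k] for k in range(2))

print(observed_same, expected_same)  # 0.8 observed vs 0.5 expected
```

The gap between the observed and expected shares is one simple index of homogamy; comparing it across marriage cohorts is the kind of exercise behind time series like the figure above.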

It\’s tricky and perhaps impossible to provide a single rigorous explanation of this U-shaped pattern of marriage homogamy that works equally well when homogamy goes from high to low and when it goes from low to high. But as Mare writes (citations omitted):

\”Two broad sociodemographic trends that provide a context for assortative mating patterns are nonetheless worth noting. First, the comparatively low level of educational homogamy for young couples in the early 1950s coincides with the century’s lowest median age at first marriage. The median age at first marriage was approximately 26 for men and 22 for women in 1900, declined steadily to approximately 23 for men and 20 for women in 1950 and increased thereafter to approximately 27 for men and 25 for women in 2000. When couples marry early, one or both partners may have not yet completed their schooling or have only just left school. Although schools may structure marriage markets, couples who marry early may not, at the time of marriage, be able to take full account of the characteristics of their partners that are associated with their educational attainment. Conversely, when couples marry later, their preferences and opportunities for marriage may be more strongly based on the “realized” characteristics of their potential partners, which may, to a significant degree, be a result of their partners’ educational attainments. …

\”A second important trend is in the differential life chances associated with educational attainment, perhaps the most important of which are the economic returns to schooling. When individuals expect that earnings and income gaps between educational groups will be large during their adult years, they not only have a greater incentive to stay in school themselves, but also may place more weight on the educational attainments of prospective marriage partners. Conversely, if the economic gaps between educational attainment levels are small, factors other than schooling are more likely to govern educational choice. During the latter half of the twentieth century and especially since the 1960s, the differences in earnings across individuals with varying amounts of education grew markedly, a trend that has a strong positive association with various indicators of educational assortative mating for couples who married during this era.\”

In short, social and economic inequality clearly interact with marriage homogamy. On one side, in a society with higher levels of inequality, people are less likely to interact with others from different socioeconomic groups in a way that would lead to heterogamy. On the other side, a society with more marriage homogamy will be one in which those with higher wage and employment prospects are marrying each other; as a result, differences in household income will be larger. In addition, those households will have greater resources to invest in their children, which could lead to a greater persistence of inequality across generations. Many different factors affect marriage and economic inequality, but still, it seems unlikely to be a coincidence that homogamy was associated with greater inequality of the wage distribution and higher returns to schooling both early and late in the 20th century, while heterogamy was associated with a more equal distribution of income and lower returns to schooling in the middle of the 20th century.

Multipolarity: The Next Step After Globalization?

The world economy during the last few decades has experienced \”globalization,\” a broad and admittedly vague term which refers, among other things, to a rise in the ratio of world exports to world GDP, as well as to the pattern that an ever-rising share of global economic output is happening in the \”emerging markets\” rather than in the traditional high-income countries. But since the Great Recession, the rise of global trade has slowed. If globalization falters, what might evolve next? The Credit Suisse Research Institute asks in a September 2015 report: \”The End of Globalization or a More Multipolar World?\” The report makes this argument:

Our sense is that the world is currently in a benign transition from full globalization to multipolar state, though this is not complete. … We find evidence of region-specific trends in economic, social and technological factors that are distinct from aggregate world trends. … In this context, we are increasingly mindful of George Orwell’s 1984, where he divided the world into three regions – Oceania, East Asia and Eurasia on the basis of economic power and form of government. Although it requires some conceptual shoehorning we could well fit the major countries of the world into the following categories: Oceania (USA, Canada and Latin America), Eurasia (Europe, the Middle East and Russia), East Asia (Africa, Asia and the Pacific economies). Some countries like the UK, Japan and Australia could just as easily fit in two categories. In today’s world, Orwell’s classification is not a ‘clean’ one but the three broad regions he has set out give a sense as to how a multipolar world might evolve at a high level.

What is some of the evidence for the movement to a multipolar world economy? One chunk of evidence is a de-emphasis of the role of international organizations like the World Trade Organization in favor of regional and preferential trade agreements, along with bilateral investment treaties. Here are some trends in each of these areas.

When it comes to international finance, the world economy still operates on more of a globalized US dollar standard than on a multipolar standard that would give greater weight to, say, the euro and the Chinese renminbi. This doesn\’t seem likely to change in the short run, given the economic turmoil in China and across the euro zone. But in the medium and long run, it seems very likely to change. The figure below offers a longer-run perspective. When it comes to currency reserves, largely held at central banks, the US dollar still rules–but its share is lower than back in the 1970s, and it is not rising.

The Credit Suisse report looks at a number of other dimensions of a multipolar world, like whether companies are continuing to increase their investments across national borders at the same rate, and the levels at which issues of governance and conflict are happening. Michael Sullivan summarizes these findings in this way:

\”Our analysis of corporate investment and revenue growth shows that globalization remains intact in terms of consumption and marketing patterns, there appears to have been a retrenchment in cross-border investment by corporates. … We read these results as pointing towards a more multipolar world where companies continue to sell across borders but are more cautious in investing across them.

\”In terms of governance, the impetus provided to the spread of democracy by globalization looks to have reached a limit, with less democratic forms of government being perceived to produce economic success and new regional institutions replacing the activities of world ones. … Geopolitically, conflict now takes place more within countries and regions, than between countries.

\”The world is increasingly undercut by faultlines in terms of religion, climate change, language, military development and indebtedness to name a few.\” 

If a multipolar world is coming, it behooves the United States to consider what its core alliances would look like. I\’ve posted here before on \”The North American Vision\” (November 5, 2014) of stronger US ties with Canada and Mexico, but I suspect that broad vision should be expanded to include Latin America as well.

Finally, here\’s an angle on the very long-run evolution of the global economy: a breakdown of the share of the global economy represented by different countries or regions over the last millennium. Maybe the easiest way to read this figure is to start from the bottom dark green area, representing China, and then work your way up through the other countries in the order they appear in the key.

About 1,000 years ago, the world economy was dominated by China and India.  You can see their shares of global output dwindling over time, and the gradual rise of the United States (the second shaded area from the bottom), along with the growing importance of Japan, other countries in Asia, and Latin America during much of the 20th century. In just the last few decades, China\’s importance grows substantially and India\’s importance grows noticeably, too. Indeed, a few decades down the road we could be headed back to a global economy in which the two largest players are, again, China and India.

Bernanke on the Fed, the US Dollar, and the Global Economy

  • Has the Federal Reserve been manipulating the value of the US dollar downward to give US exports a boost? 
  • Are Federal Reserve policies causing swings in capital flows to and from emerging-market economies, in a way that creates financial and economic instability elsewhere in the world? 
  • Does the dominant role of the US dollar in international economic transactions provide large economic benefits to the US economy? 

Ben Bernanke took up these three questions in the Mundell-Fleming lecture, which he delivered at the IMF\’s 16th Jacques Polak Annual Research Conference on November 5, 2015, under the title \”Federal Reserve Policy in an International Context.\” Video of the lecture is available, too.

On the first question, concerning whether the Fed is pushing for a lower US dollar exchange rate, the answer at first glance is \”no,\” and upon further reflection is still \”no.\” At first glance, the value of the US dollar did fall a bit just after the Great Recession, but it has risen since then–so it\’s pretty much impossible to make a case that the Fed has been pursuing a cheap-dollar export-spurring policy. Here\’s a figure from Bernanke\’s paper showing the exchange rate of the dollar in the last decade. Whether you compare it to just major currencies, or other trading partners, or the broad index of all trading partners, the same rough pattern emerges.

Moreover, Bernanke points out that when monetary easing in the US causes the exchange rate value of the US dollar to fall, there are two effects on other countries. One is that US exports are cheaper in world markets, which tends to hurt other economies, but the other effect is that the US economy is stimulated to expand more rapidly, which tends to help other countries selling to the US market. Bernanke writes:

\”Notably, although monetary easing usually leads to a weaker currency and thus greater trade competitiveness, it also tends to increase domestic incomes, which in turn raises home demand for foreign goods and services. … In the case of the United States, … the available evidence suggests that these two effects of monetary policy largely offset, limiting the overall effect on US trading partners.\”

On the second issue, about whether Federal Reserve policy may cause financial swings in other countries, Bernanke offers a more cautious answer. The issue here is that, with very low US interest rates in recent years, a certain amount of international investment capital has headed into the financial markets of emerging economies, pushing up stock markets and asset prices in those countries. With the Fed now starting to raise US interest rates, some of that money will exit the emerging markets. The problem arises because financial markets in emerging economies can be quite small by global standards, so the start-stop-reverse movement of what would be a fairly modest amount of capital by the standards of the US or EU economy can severely shake up a smaller emerging market economy.

Bernanke acknowledges the possibility of such disruptions. He also points out that economic policy-making in other countries has a lot to do with whether they are susceptible to the danger of volatile international capital movements. Recalling an earlier episode, Bernanke notes that \”commentators referred to the “fragile five” emerging markets—Turkey, Brazil, India, South Africa, Indonesia—whose initial conditions, structural weaknesses, and macroeconomic policies made them more vulnerable to global financial developments.\” Bernanke adds:

\”Importantly, `improvement\’ in the financial sphere does not necessarily require continuous liberalization. … [I]n some cases, macroprudential policies and even capital controls may be needed to manage credit and capital flows during the process of reform. … Financial regulation and supervision are also the obvious tools to use against other plausible sources of spillovers, including currency mismatches in the banking system, excessive cyclicality in lending standards, and opaque and illiquid markets.\”

Bernanke is clearly correct that Federal Reserve actions will affect other countries in a variety of ways, and that it\’s obviously impossible to set Fed policy in a way that would be equally satisfactory to all countries in the world. But that said, his response on this point isn\’t 100% persuasive. Sure, if economic policy-makers in other countries are smart, alert, and responsive, they can address these dangers of start-and-stop international capital flows. But economic policy-makers in other countries will at times be obsessed with their own domestic economy and politics, and as a result, Fed actions will sometimes bring considerable disruption.

On the third question, the extent to which the US economy benefits from the use of the dollar in international transactions, Bernanke points out that the use of the US dollar in international transactions has in a number of ways been quite beneficial to the global economy. In comparison, the benefits of international use of the US dollar to the US economy have diminished with time and are relatively small.  On the value of the US dollar in international transactions, Bernanke writes:

\”[I]n practice it has benefited the global economy in several ways. First, … over the past three decades or so the Federal Reserve has been successful at keeping inflation low and stable. Consequently, the dollar has served its principal function as global numeraire, namely, to maintain a stable value in terms of goods and services. 

\”Second, there is a strong and growing global demand for safe, liquid assets, which the United States—with its political stability and deep, liquid financial markets—has been generally successful in providing. The US also maintains open trade and capital accounts, preserving international access to US assets. 

\”Third, dollar assets have proved to be a valuable hedge for foreign holders against downside geopolitical and financial risks (Gourinchas et al., 2010; Obstfeld, 2010). Broadly speaking, US international liabilities are in the form of relatively liquid, fixed-income assets, notably government bonds and government-backed mortgage securities, whereas US international assets tend to be riskier, e.g., equities. For this reason, and because the dollar is a “safe haven” currency that tends to appreciate when global risks increase, the US net asset position improves during tranquil times but worsens during periods of stress. Gourinchas et al. (2010) calculate that about $2 trillion was transferred from the United States to other countries via valuation changes during the financial crisis. Obviously, the US role as provider of hedge assets is not the result of conscious policy. Instead, it reflects US comparative advantages in providing safe liquid liabilities and investing in riskier foreign assets, as well as the dollar’s role as a safe haven.

\”Fourth, the Federal Reserve has shown its willingness to serve as a lender of last resort to dollar-based lenders.\”

Concerning the question of how the widespread international use of the US dollar specifically benefits the US economy, as Bernanke points out, it\’s not 1970 any more:

\”The dollar’s monopoly power has also been eroded over recent decades, in that assets denominated in euros, British pounds, and yen have become increasingly viable not only as reserve currencies but for other purposes, such as posting collateral. … [W]e shouldn’t be overly exercised over controversies about whether the dollar will retain its pre-eminence, the future of the renminbi as a reserve currency, and so on. These debates are more about symbolism than substance. In purely economic terms, the universal usage of English, say, is far more valuable to the United States than the broad use of the dollar.\”

Back to Basics: What Drives US Economic Growth?

I like to say that the formula for economic growth is simple: it\’s a mixture of more workers, improved human capital, increases in physical capital, and better technology–all operating in an economic environment that provides incentives for efficiency and innovation. Rebecca M. Blank fleshes out this framework in \”What Drives American Competitiveness?\” which was delivered as the 2015 Daniel Patrick Moynihan Lecture on Social Science and Public Policy and published in the Annals of the American Academy of Political and Social Science (January 2016, 663, pp. 8-30).

Blank carries out a \”growth decomposition\”–that is, looking at the actual rise in real GDP during the 45 years from 1970 to 2014, and attributing it to the following causes: \”GDP growth = growth in hours worked (25%) + growth in labor quality (10%) + capital deepening (39%) + TFP (26%).\” The phrase \”capital deepening\” refers to a higher amount of average physical capital per worker. \”TFP\” stands for \”total factor productivity,\”  a measure of the growth of productivity over time.
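
Blank\’s decomposition can be turned into a quick arithmetic check. In the sketch below, only the four shares come from Blank; the 2.8 percent average annual growth rate is an assumed placeholder for illustration, not a figure from the lecture:

```python
# Shares of real GDP growth attributed to each source in Blank's decomposition.
shares = {"hours worked": 0.25, "labor quality": 0.10,
          "capital deepening": 0.39, "TFP": 0.26}
assert abs(sum(shares.values()) - 1.0) < 1e-9  # the shares sum to 100%

avg_growth = 2.8  # assumed average annual real GDP growth, in percent
contributions = {source: share * avg_growth for source, share in shares.items()}
# e.g., capital deepening contributes 0.39 * 2.8, about 1.09 percentage
# points of growth per year under this assumption.
```

A useful feature of this accounting identity is that the contributions must sum back to total growth, so slower growth in any one building block has to show up either in slower overall growth or faster growth elsewhere.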

How are these building blocks of economic growth likely to evolve over the next 10-20 years? The answer suggests how US growth itself will evolve.

For example, the total hours worked in the US economy was actually a little lower in 2014 than it was back in 2000. Blank writes:

\”Sheer growth in the number of workers has explained 25 percent of economic growth over the past 45 years. But the three big elements that drove this growth—immigration, the baby boom population bulge, and increases in women’s labor force involvement—are now either growing more slowly or moving in the opposite direction. The result is recent small declines in the work hours of the population. It is hard to see how work hours will grow substantially in the years ahead.\” 

What about future trends in human capital? A standard method for estimating the amount of human capital is to look at education levels and job experience. Blank focuses on education levels. She points out that at the lower end of educational achievement: “[H]igh school graduation has largely stalled out at around 88 percent of the population for both men and women. This means that a substantial share of the population is still entering the workforce without even a high school degree. Furthermore, a growing share of high school graduates hold GED degrees, which may not provide even the same skill level as a high school degree. From everything we know about the labor market, these young adults will face low wages and higher unemployment throughout their working lives, as job opportunities for the least skilled continue to deteriorate.”

At the higher levels of education, US college completion rates are on the rise, but not as quickly as in many other countries. Blank writes: 

While the U.S. population has shown relatively slow growth in the share of the population with a college degree, other countries have made very rapid progress on this front in recent decades. As a result, while this country had one of the most educated populations in 1970, other countries are rapidly surpassing the United States in educational attainment. In 2011, the United States ranked fourteenth among the thirty-six OECD nations in the percentage of 25- to 34-year-olds with associate’s degrees or higher. Even more concerning, this percentage is virtually the same among 25- to 35-year-olds as it is among 55- to 64-year-olds in the United States, while virtually all other countries have seen substantial gains in higher education for the younger age group … 

Blank doesn’t discuss the work experience of the average US worker, but with the retirement of the baby boom generation subtracting large numbers of high-experience workers from the US economy, this isn’t likely to be a growth area for human capital, either.

What about expanded physical capital and innovation? Blank discusses these together, on the grounds that new technologies are one of the main reasons why businesses would expand their physical capital per worker. But for some years now, business investment has been sluggish, in a way that has led some to predict a future of “secular stagnation” for the US economy. Overall US government support for research and development, one of the drivers of innovation, has been flat for several decades (as discussed, for example, here and here).

In short, looking at the basic determinants of economic growth does not paint a pretty picture for long-run economic growth in the US. Thus, the question becomes to what extent at least some of these determinants of growth might be affected by public policy. It’s easy to list what the targets of such policies might be, although it’s of course harder to be confident about which specific policies would work in meeting these targets.

For example, if job opportunities for low-skilled workers expanded in a way that pulled large numbers of them into the labor force, or if a very large number of Americans postponed retirement and continued to work later in life, the number of hours worked in the US economy would not keep falling. US human capital would improve with changes in the K-12 school system that increased both the proportion and the quality of high school graduates, followed by methods of financing more higher education for those for whom acquiring more skills in college makes sense. More government spending on R&D makes sense, but much more important is a business environment that has the incentives and ability to use the results of government R&D–in combination with the firm’s own innovative efforts–to grow and expand.
Blank notes: “Innovation, when it leads to new products that consumers and businesses demand, creates new companies and new jobs. Most job growth comes from rapidly growing new companies that are expanding in high-demand markets. We need that innovation to continue to occur at a high rate in this country if we are to reap these economic benefits.” She also writes of the need “to ensure that the United States is an excellent place to start and grow businesses, with modern infrastructure, strong intellectual property protections, a reasonable tax regime, reasonable regulatory structures, and so forth.”
She adds: “I optimistically note that support for many of these actions should be bipartisan, although there will be partisan disagreement on how to achieve them.” I would add that one way to judge candidates for political office in 2016 is whether they have a detailed and at least somewhat plausible plan–not just a slogan or an expression of good intentions–for improving the main determinants of economic growth.

What is Getting Too Little Attention from Financial Regulators?

“The mission of the US Commodity Futures Trading Commission,” as its website notes, “is to foster open, transparent, competitive, and financially sound markets, to avoid systemic risk, and to protect the market users and their funds, consumers, and the public from fraud, manipulation, and abusive practices related to derivatives …” The CFTC is supposed to operate with five commissioners, but it is currently making do with three. One of them is J. Christopher “Chris” Giancarlo, who was nominated by President Obama in 2013 and started his role in 2014. Giancarlo recently participated in the Fidelity Guest Lecture Series on International Finance at Harvard Law School, and in his December 1, 2015, lecture, he expressed his frustration that the financial regulatory apparatus is still so focused on working through the implementation of the Dodd-Frank law passed five years ago–a law designed to address the problems of the 2007-2009 financial crisis–that too little attention is being given to what are right now the more important challenges of regulatory policy. Giancarlo sets the stage this way:

“The Dodd-Frank Act was passed over five years ago, but U.S. market participants and Washington financial regulators must still spend much of their professional time arguing over and addressing its myriad mandates and peculiar prescriptions – regulatory edicts ostensibly designed to prevent a recurrence of the last crisis. The same is true for much of the European and Asian discussion around the G-20 regulatory reform efforts initiated in Pittsburgh in 2009 and coordinated by the Financial Stability Board (FSB). The hue and cry of the ongoing financial market reforms under Dodd-Frank and the FSB leaves market regulators and participants with very little available bandwidth to assess and prepare for the next financial crisis – a crisis that will certainly be unlike the last one.

Just as “peacetime generals are always fighting the last war” and “economists fight the last depression,” so too do financial regulators outlaw past market abuses that are not a looming threat to our financial markets and economies. The Dodd-Frank Act and its unceasing implementation are uniquely positioned to ensure U.S. market regulators stay focused on the past.

Allow me to use a simple analogy. U.S. market regulators are riding together in an automobile on a high-speed interstate highway. The Dodd-Frank Act is an oversized rear-view mirror covering almost the entire windshield. That rear-view mirror directs our attention to the enormous amount of rules and requirements generated over the past five years that need to be completed or reworked to meet Dodd-Frank’s never-ending demands. Meanwhile, financial markets continue to evolve and pass by at remarkable speed. New dangers are coming right at us. As we regulators barrel down the road of 21st century financial markets, we must shed this backwards-looking approach to regulating or we will not be able to see the oncoming traffic and looming dangers ahead.”

What dangers does Giancarlo believe, looking out from his perch at the CFTC, should be the main focus of financial regulators right now? Here are his six priorities–and the Dodd-Frank legislation has very little to say about any of them:

1) Cybersecurity. Both Giancarlo and CFTC chair Timothy Massad are on record as saying that “cybersecurity is the most important single issue facing our markets today in terms of market integrity and financial stability.”

2) Disruptive Technology. Regulators need to figure out how to deal with issues like automated electronic trading. Giancarlo writes: “It is hard to deny that finance is increasingly becoming an industry where machines and humans are swapping their dominant roles – transforming modern finance into what scholar Tom Lin has called ‘cyborg finance.’” Other new technologies include “distributed open ledger” systems, in which records of financial transactions are held openly by many parties, as in the case of Bitcoin and a number of efforts by private-sector banks, and the development of “financial cartography,” by which he means maps of how financial networks interact.

3) Government intervention. Here, Giancarlo is referring to the very large role that the Federal Reserve and other central banks have come to play in financial markets. He writes (footnotes omitted):

“Since the 2008 financial crisis, the Federal Reserve (Fed) has made itself an increasingly outsized player in the U.S. government debt markets … Through its “quantitative easing” (QE) program, the Fed has purchased an unprecedented 61 percent of all Treasuries issued, peaking at close to 80 percent in 2014. Today, the Fed has become the multi-trillion dollar “Washington Whale.” Its intervention in the Treasury and mortgage-backed security markets misprices the true cost of credit below its natural level and distorts the integrity of prices and exchange rates. The Fed is having an increasingly direct and immediate impact on all other markets, from corporate bonds to equities and foreign exchange rates to developing nations’ sovereign debt. It has reduced the heterogeneity of the investor base, herding it into one-way bets on anticipated changes in Fed policy rather than traditional fundamental credit or value analysis. … Central banks have replaced major dealers and money center banks as marketplace Leviathans plunging into increasingly shallower pools of trading liquidity. With one flip of their policy tails, these central bank behemoths can whack a whole lot of smaller market participants out of once-liquid markets and leave them stranded.”

4) Market illiquidity. Here, the main concern is that in writing rules to limit what banks and financial institutions are allowed to do, we may be contributing inadvertently to markets that are less liquid and thus more prone to episodes of high volatility or even manipulation. Giancarlo said (footnotes omitted):

Market participants know that liquidity is the lifeblood of healthy trading markets. In essence, liquidity is the degree to which a financial instrument may be easily bought or sold with minimal price disturbance by ready and willing buyers and sellers. …  

We saw evidence of such pronounced liquidity contraction this past August in enormously volatile equity markets, when major global banks focused on executing trades for their clients rather than for their own account. We saw it in June with sudden spikes in the German Bond market. We saw it a year ago when the market for U.S. Treasury securities, futures and other closely related financial markets experienced an unusually high level of volatility and a very rapid and pronounced round-trip. A few weeks ago, Chairman Massad cited new CFTC research showing that “flash” volatility spikes have become increasingly common, with 35 spike events so far this year in core futures products such as corn, gold, WTI crude oil, E-Mini S&P and Euro FX.

Traditionally, large global money center banks served to reduce such market volatility by buying and selling reserves of securities and other financial instruments to take advantage of short-term anomalies in market prices. Their balance sheets served as market “shock absorbers” in times of market turbulence. … According to one senior banker, “Wall Street’s role as an intermediary and risk taker has shrunk.” This evolution appears to have been underway for some time.

A major catalyst of the reduced bank trading liquidity in financial markets is the new regulatory policies of U.S. and overseas bank prudential regulators imposed in the wake of the financial crisis. … Most of the new regulations have the effect of reducing the ability of medium and large financial institutions to deploy capital in trading markets. Combined, these disparate regulations are already sapping global markets of enormous amounts of trading liquidity. … In trying to stamp out risk, global regulators are instead harming trading liquidity. … We need to understand the full implications of constrained bank capital on market health and resiliency and the ability of financial markets to underpin sorely needed global economic growth.

5) Market concentration. Giancarlo writes: 

“A wave of consolidation is taking place across the financial landscape, concentrating the provision of essential market services within fewer and fewer institutions. It is now widely recognized that Dodd-Frank regulations have wiped out small community banks across America’s agriculture landscape. It is less well-acknowledged that large banks are broadly reducing market services, jettisoning less-profitable clients and increasing some fees on others in such critical areas as prime brokerage and administrative services. A similar narrowing of market services is taking place in the swaps market, where rising regulatory costs are driving consolidation of transaction service providers into a few remaining major SEFs [Swap Execution Facilities]. This wave of consolidation is perhaps most glaringly apparent in the case of America’s futures commission merchants (FCMs).”

6) De-globalization. Here, the concern is that because of regulatory differences across countries, global pools of capital are being splintered and rearranged to sidestep regulations–a game that often does not end well. Giancarlo writes: 

“Traditionally, users of swaps products chose to do business with global financial institutions based on factors such as quality of service, product expertise, financial resources and professional relationship. Now, those criteria are secondary to the question of the institution’s regulatory profile. Overseas market participants are avoiding financial firms bearing the scarlet letters of “U.S. person” in certain swaps products to steer clear of the CFTC’s problematic regulations. As a result, non-U.S. market participants’ efforts to escape the CFTC’s flawed swaps trading rules are fragmenting global swaps trading and driving global capital away from U.S. markets. … According to a survey conducted by the International Swaps and Derivatives Association (ISDA), the market for euro interest-rate swaps (IRS) has effectively split. Volumes between European and U.S. dealers have declined 55 percent since the introduction of the U.S. SEF [Swap Execution Facility] regime. The average cross-border volume of euro IRS transacted between European and U.S. dealers as a percentage of total euro IRS volume was twenty-five percent before the CFTC put its SEF regime in place and has fallen to just ten percent since.

Fragmentation has exacerbated the already inherent challenge in swaps trading – adequate liquidity – and is increasing market fragility as a result. Fragmentation has led to smaller, disconnected liquidity pools and less efficient and more volatile pricing. Divided markets are more brittle, with shallower liquidity, posing a risk of failure in times of economic stress or crisis. Fragmentation has increased firms’ operational risks as they structure themselves to avoid U.S. rules and manage multiple liquidity pools in different jurisdictions …”

I’m not sure that Giancarlo’s six priorities are the right ones. Some seem to me more important than others, and in particular, I don’t know much detail about the regulation of the swaps market. But I find it easy to believe that while politicians and regulators are refighting the battles of 2008–and in particular, how to reduce the risk of future bailouts by the US Treasury or the Federal Reserve–we are giving insufficient thought to other issues of financial regulation that should at least be on the radar screen in 2016.