Associational Life in America

The concept of “social capital” is slippery to measure or analyze, but the OECD, for example, defines it as “networks together with shared norms, values and understandings that facilitate co-operation within or among groups.” The great economist Kenneth Arrow wrote in a 1972 essay, “Gifts and Exchanges” (Philosophy & Public Affairs, 1:4, Summer 1972, pp. 343-362):

“Many of us consider it possible that the process of exchange requires or at least is greatly facilitated by the presence of several of these virtues (not only truth, but also trust, loyalty, and justice in future dealings). Now virtue may not always be its own reward, but in any case it is not usually bought and paid for at market rates. In short, the supply of a commodity in many respects complementary to those usually thought of as economic goods is not itself accomplished in the marketplace … Virtually every commercial transaction has within itself an element of trust, certainly any transaction conducted over a period of time. It can be plausibly argued that much of the economic backwardness in the world can be explained by the lack of mutual confidence …”

Any discussion of American networks soon starts quoting the French writer Alexis de Tocqueville, who noted an oddity about Americans in his classic Democracy in America (published in two volumes in 1835 and 1840): Americans had a predilection for what he called associations. He wrote:

“Americans of all ages, all conditions, and all dispositions, constantly form associations. They have not only commercial and manufacturing companies, in which all take part, but associations of a thousand other kinds—religious, moral, serious, futile, extensive, or restricted, enormous or diminutive. The Americans make associations to give entertainments, to found establishments for education, to build inns, to construct churches, to diffuse books, to send missionaries to the antipodes; and in this manner they found hospitals, prisons, and schools. … Nothing, in my opinion, is more deserving of our attention than the intellectual and moral associations of America. The political and industrial associations of that country strike us forcibly; but the others elude our observation, or if we discover them, we understand them imperfectly, because we have hardly ever seen anything of the kind. It must, however, be acknowledged that they are as necessary to the American people as the former, and perhaps more so. In democratic countries the science of association is the mother of science; the progress of all the rest depends upon the progress it has made.”

But while the idea that social capital is important is far from new, it’s nonetheless interesting that the staff of the Joint Economic Committee of Congress has just published a report, “What We Do Together: The State of Associational Life in America” (Social Capital Project Report 01-17, May 2017). It’s studded with quotations from authors like Tocqueville and Robert “Bowling Alone” Putnam.

Before listing some facts about changes in what the report calls “associational life” in America, it’s perhaps useful for me to confess that while I can easily believe that social capital is generally important, the specific processes by which it is created and reinforced are not clear to me. It would be peculiar and anachronistic to yearn after the good old days of 1840. If the people of 1840 had had radio, television, and the internet, not to mention the ability to hop in a car or a plane and travel, then the “associations” observed by Tocqueville would have looked rather different. The question of what social institutions are most useful for cultivating citizens with a predisposition to cooperation and trust is a vast ocean, and I am just a guy in a dinghy, paddling around with one small oar.

In other words, I think it’s important and useful to think about how associational life is shifting: that is, who we rely on, who relies on us, and how we interact with other citizens in a variety of cross-cutting forums. It does feel as if we are, along a number of dimensions, living in a lower-trust time. But whether or how to address these changes is beyond my remit. Here are some of the patterns mentioned in the Congressional report, which include associations related to family, religion, community, politics, and work. For those with a taste for Tocqueville, I’ll include some additional context from Democracy in America below:

Some changes in family associations

  • Between 1973 and 2016, the percentage of Americans age 18-64 who lived with a relative declined from 92 percent to 79 percent. The decline was driven by a dramatic 21-point drop in the percentage who were living with a spouse, from 71 percent to 50 percent.
  • In 1970, there were 76.5 marriages per 1,000 unmarried women aged 15 and older. As of 2015, that rate had declined by more than half to 32 per thousand. 
  • In 1970, 56 percent of American families included at least one child, but by 2016 just 42 percent did. The average family with children had 2.3 children in 1970 but just 1.9 in 2016. Among all families—with or without children—the average number of children per family has dropped from 1.3 to 0.8. 
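As a quick sanity check, the last two figures in that bullet are internally consistent: multiplying the share of families with at least one child by the average number of children in those families reproduces the report's per-family averages across all families. A minimal sketch, using only the numbers quoted above:

```python
# Consistency check on the family statistics: the average number of children
# across ALL families should equal (share of families with children) times
# (average children among families that have children).
share_with_children = {1970: 0.56, 2016: 0.42}
avg_children_if_any = {1970: 2.3, 2016: 1.9}

for year in sorted(share_with_children):
    overall = share_with_children[year] * avg_children_if_any[year]
    print(f"{year}: {overall:.2f} children per family, all families included")
```

Rounding to one decimal place gives the 1.3 and 0.8 figures quoted in the report.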

Some changes in religious associations

  • In the early 1970s, nearly seven in ten adults in America were still members of a church or synagogue. While fewer Americans attended religious services regularly, 50 to 57 percent did so at least once per month. Today, just 55 percent of adults are members of a church or synagogue, while just 42 to 44 percent attend religious services at least monthly.
  • In the early 1970s, 98 percent of adults had been raised in a religion, and just 5 percent reported no religious preference. Today, however, the share of adults who report having been raised in a religion is down to 91 percent, and 18 to 22 percent of adults report no religious preference. 

Some changes in community associations

  • Between 1974 and 2016, the percent of adults who said they spend a social evening with a neighbor at least several times a week fell from 30 percent to 19 percent.
  • Between 1970 and the early 2010s, the share of families in large metropolitan areas who lived in middle-income neighborhoods declined from 65 percent to 40 percent. Over that same time period the share of families living in poor neighborhoods rose from 19 percent to 30 percent, and those living in affluent neighborhoods rose from 17 percent to 30 percent.
  • Between 1972 and 2016, the share of adults who thought most people could be trusted declined from 46 percent to 31 percent. Between 1974 and 2016, the number of Americans expressing a great deal or fair amount of trust in the judgment of the American people “under our democratic system about the issues facing our country” fell from 83 percent to 56 percent.
  • Between 1974 and 2015, the share of adults that did any volunteering who reported volunteering for at least 100 hours increased from 28 percent to 34 percent.

Some changes in political associations

  • Between 1972 and 2012, the share of the voting-age population that was registered to vote fell from 72 percent to 65 percent, and the trend was similar for the nonpresidential election years of 1974 and 2014. Correspondingly, between 1972 and 2012, voting rates fell from 63 percent to 57 percent (and fell from 1974 to 2014).
  • Between 1972 and 2008, the share of people saying they follow “what’s going on in government and public affairs” declined from 36 percent to 26 percent.
  • Between 1972 and 2012, the share of Americans who tried to persuade someone else to vote a particular way increased from 32 percent to 40 percent.

Some changes in work associations

  • Between the mid-1970s and 2012, the average amount of time Americans between the ages of 25 and 54 spent with their coworkers outside the workplace fell from about two-and-a-half hours to just under one hour.
  • Work has become rarer, particularly among men with less education. From the mid-1970s to 2012, hours at work fell by just 2 percent among men with a college degree or an advanced degree, compared with 14 percent among those with no more than a high school education.
  • Between 1995 and 2015, workers in “alternative work arrangements” (e.g., temp jobs, independent contracting, etc.) grew from 9 percent to 16 percent of the workforce.
  • Since 2004, median job tenure has been higher than its 1973 level, indicating that workers are staying in their jobs longer than in the past.
  • Between 1970 and 2015, union membership declined from about 27 percent to 11 percent of all wage and salary workers.

For those who need a larger dose of Tocqueville, here’s a more extended quotation from his discussion of Americans and “associations” in Book II, Chapters 5-7 of Democracy in America (the Project Gutenberg edition). Toward the end of the excerpt, in particular, Tocqueville argues that freedom of association in political, civic, and economic interactions is all intertwined: when people learn how to form associations, the habit spreads across these varied contexts.

“The political associations which exist in the United States are only a single feature in the midst of the immense assemblage of associations in that country. Americans of all ages, all conditions, and all dispositions, constantly form associations. They have not only commercial and manufacturing companies, in which all take part, but associations of a thousand other kinds—religious, moral, serious, futile, extensive, or restricted, enormous or diminutive. The Americans make associations to give entertainments, to found establishments for education, to build inns, to construct churches, to diffuse books, to send missionaries to the antipodes; and in this manner they found hospitals, prisons, and schools. If it be proposed to advance some truth, or to foster some feeling by the encouragement of a great example, they form a society. Wherever, at the head of some new undertaking, you see the government in France, or a man of rank in England, in the United States you will be sure to find an association. … The English often perform great things singly; whereas the Americans form associations for the smallest undertakings. It is evident that the former people consider association as a powerful means of action, but the latter seem to regard it as the only means they have of acting. …

Aristocratic communities always contain, amongst a multitude of persons who by themselves are powerless, a small number of powerful and wealthy citizens, each of whom can achieve great undertakings single-handed. In aristocratic societies men do not need to combine in order to act, because they are strongly held together. Every wealthy and powerful citizen constitutes the head of a permanent and compulsory association, composed of all those who are dependent upon him, or whom he makes subservient to the execution of his designs. Amongst democratic nations, on the contrary, all the citizens are independent and feeble; they can do hardly anything by themselves, and none of them can oblige his fellow-men to lend him their assistance. They all, therefore, fall into a state of incapacity, if they do not learn voluntarily to help each other. If men living in democratic countries had no right and no inclination to associate for political purposes, their independence would be in great jeopardy; but they might long preserve their wealth and their cultivation: whereas if they never acquired the habit of forming associations in ordinary life, civilization itself would be endangered. … 

As soon as several of the inhabitants of the United States have taken up an opinion or a feeling which they wish to promote in the world, they look out for mutual assistance; and as soon as they have found each other out, they combine. From that moment they are no longer isolated men, but a power seen from afar, whose actions serve for an example, and whose language is listened to. The first time I heard in the United States that 100,000 men had bound themselves publicly to abstain from spirituous liquors, it appeared to me more like a joke than a serious engagement; and I did not at once perceive why these temperate citizens could not content themselves with drinking water by their own firesides. I at last understood that 300,000 Americans, alarmed by the progress of drunkenness around them, had made up their minds to patronize temperance. …

Nothing, in my opinion, is more deserving of our attention than the intellectual and moral associations of America. The political and industrial associations of that country strike us forcibly; but the others elude our observation, or if we discover them, we understand them imperfectly, because we have hardly ever seen anything of the kind. It must, however, be acknowledged that they are as necessary to the American people as the former, and perhaps more so. In democratic countries the science of association is the mother of science; the progress of all the rest depends upon the progress it has made. …

In order that an association amongst a democratic people should have any power, it must be a numerous body. The persons of whom it is composed are therefore scattered over a wide extent, and each of them is detained in the place of his domicile by the narrowness of his income, or by the small unremitting exertions by which he earns it. Means then must be found to converse every day without seeing each other, and to take steps in common without having met. Thus hardly any democratic association can do without newspapers. There is consequently a necessary connection between public associations and newspapers: newspapers make associations, and associations make newspapers; and if it has been correctly advanced that associations will increase in number as the conditions of men become more equal, it is not less certain that the number of newspapers increases in proportion to that of associations. Thus it is in America that we find at the same time the greatest number of associations and of newspapers. …

There is only one country on the face of the earth where the citizens enjoy unlimited freedom of association for political purposes. This same country is the only one in the world where the continual exercise of the right of association has been introduced into civil life, and where all the advantages which civilization can confer are procured by means of it. …  Civil associations, therefore, facilitate political association: but, on the other hand, political association singularly strengthens and improves associations for civil purposes. In civil life every man may, strictly speaking, fancy that he can provide for his own wants; in politics, he can fancy no such thing. When a people, then, have any knowledge of public life, the notion of association, and the wish to coalesce, present themselves every day to the minds of the whole community: whatever natural repugnance may restrain men from acting in concert, they will always be ready to combine for the sake of a party. Thus political life makes the love and practice of association more general; it imparts a desire of union, and teaches the means of combination to numbers of men who would have always lived apart. … 

Men can embark in few civil partnerships without risking a portion of their possessions; this is the case with all manufacturing and trading companies. When men are as yet but little versed in the art of association, and are unacquainted with its principal rules, they are afraid, when first they combine in this manner, of buying their experience dear. They therefore prefer depriving themselves of a powerful instrument of success to running the risks which attend the use of it. They are, however, less reluctant to join political associations, which appear to them to be without danger, because they adventure no money in them. But they cannot belong to these associations for any length of time without finding out how order is maintained amongst a large number of men, and by what contrivance they are made to advance, harmoniously and methodically, to the same object. Thus they learn to surrender their own will to that of all the rest, and to make their own exertions subordinate to the common impulse—things which it is not less necessary to know in civil than in political associations. Political associations may therefore be considered as large free schools, where all the members of the community go to learn the general theory of association. …

It is therefore chimerical to suppose that the spirit of association, when it is repressed on some one point, will nevertheless display the same vigor on all others; and that if men be allowed to prosecute certain undertakings in common, that is quite enough for them eagerly to set about them. When the members of a community are allowed and accustomed to combine for all purposes, they will combine as readily for the lesser as for the more important ones; but if they are only allowed to combine for small affairs, they will be neither inclined nor able to effect it. It is in vain that you will leave them entirely free to prosecute their business on joint-stock account: they will hardly care to avail themselves of the rights you have granted to them; and, after having exhausted your strength in vain efforts to put down prohibited associations, you will be surprised that you cannot persuade men to form the associations you encourage. … 

When you see the Americans freely and constantly forming associations for the purpose of promoting some political principle, of raising one man to the head of affairs, or of wresting power from another, you have some difficulty in understanding that men so independent do not constantly fall into the abuse of freedom. If, on the other hand, you survey the infinite number of trading companies which are in operation in the United States, and perceive that the Americans are on every side unceasingly engaged in the execution of important and difficult plans, which the slightest revolution would throw into confusion, you will readily comprehend why people so well employed are by no means tempted to perturb the State, nor to destroy that public tranquillity by which they all profit. … 

Is it enough to observe these things separately, or should we not discover the hidden tie which connects them? In their political associations, the Americans of all conditions, minds, and ages, daily acquire a general taste for association, and grow accustomed to the use of it. There they meet together in large numbers, they converse, they listen to each other, and they are mutually stimulated to all sorts of undertakings. They afterwards transfer to civil life the notions they have thus acquired, and make them subservient to a thousand purposes. Thus it is by the enjoyment of a dangerous freedom that the Americans learn the art of rendering the dangers of freedom less formidable.”

The Market for US Prescription Drugs

In 2015, the US spent $328 billion on retail drugs, and another $129 billion on “non-retail” drugs, which are the drugs purchased by hospitals, nursing homes, and other health care providers and added to your bill. The operation of the market for prescription drugs is a tangle, in ways that suggest competition is often being hindered, or even throttled.

Matan C. Dabora, Namrata Turaga, and Kevin A. Schulman provide a useful diagram summarizing the US prescription drug market in their article, “Financing and Distribution of Pharmaceuticals in the United States,” which appears in the May 15, 2017 issue of the Journal of the American Medical Association (pp. E1-E2).

The manufacturers of prescription drugs are at the center top of the figure. The drugs themselves work down the left-hand side of the figure, through distributors and retailers, before reaching the patients. The various arrows in the center and right of the diagram show flows of payments, including AMP (Average Manufacturer Price), WAC (Wholesale Acquisition Cost), and then a maze of chargebacks, negotiated rebates, and payments from patients and private and public health insurance, often mediated through “pharmacy benefit managers.”

In their short comment, Dabora, Turaga, and Schulman point out that there is a fairly high amount of concentration at a number of places in this market schematic (footnotes omitted):

“The US distributor market is highly consolidated, with 3 companies accounting for more than 85% of market share: AmerisourceBergen, Cardinal Health, and McKesson. The estimated combined revenues from drug distribution for these 3 firms in 2015 was $378 billion. …

“In 2015, an estimated 4.4 billion drug prescriptions were dispensed in the United States … There are approximately 60,000 pharmacies in the United States, of which 38,000 are part of retail chains and 22,000 are independent pharmacies. The retail pharmacy market can be divided into 3 major categories: chain pharmacies and mass merchants with pharmacies, independent pharmacies, and mail-order pharmacies. The 15 largest firms, including CVS, Walgreens, Express Scripts, and Walmart, generated more than $270 billion in revenue in 2015 through retail and mail-order pharmacy, representing approximately 74% of retail prescription revenues.

“PBMs [pharmacy benefit managers] developed in the 1980s as employers added outpatient prescription drug coverage to their health insurance plans. By 2015, industry consolidation had resulted in 3 PBMs—CVS Caremark, Express Scripts, and UnitedHealth’s Optum—controlling a 73% share of the PBM market.

“Health insurance generally includes prescription drug insurance in both public and private health insurance plans. In 2015, 42% of prescription drug spending was from private health insurance, 30% from Medicare, 10% from Medicaid, and 14% from private out-of-pocket payments.

“In addition to the usual product discounts and allowances for product returns, manufacturers provide a series of cash payments to health plans, PBMs, and distributors in the form of rebates and chargebacks as a result of complex pricing arrangements across the industry. The end result of these complex transactions is that in 2015, $115 billion, or 27% of total pharmaceutical sales, was paid by manufacturers to various entities throughout the drug distribution and financing systems.”

Aaron S. Kesselheim, Jerry Avorn, and Ameet Sarpatwari provided an overview of the research literature in “The High Cost of Prescription Drugs in the United States: Origins and Prospects for Reform,” which appeared in the Journal of the American Medical Association late last summer, in the August 23/30, 2016, issue (pp. 859-871). They start with the basic facts that Americans spend more on prescription drugs than people in other countries, and that a number of popular brand-name drugs cost a lot more in the US than in other countries. Here’s a figure on per capita spending on prescription drugs:
On the topic of drug prices across countries, they write: “List prices for the top 20 highest-revenue-grossing drugs were on average 3 times greater in the United States than the United Kingdom. These disparities are reduced but remain substantial even after accounting for undisclosed discounts (“rebates”) that manufacturers offer to US payers. In 2010, estimated average postrebate prices for medications were 10% to 15% higher in the United States than in Canada, France, and Germany (Table 1).”
Kesselheim, Avorn, and Sarpatwari sort through the research literature, looking for potential reasons for these high levels of drug prices and drug spending. My own list of some of the reasons from their article would look like this:
1) Prices are rising for brand-name drugs, and competition between brand-name drugs doesn’t seem to bring down prices.

“Although brand-name drugs comprise only 10% of all dispensed prescriptions in the United States, they account for 72% of drug spending. Between 2008 and 2015, prices for the most commonly used brand-name drugs increased 164%, far in excess of the consumer price index (12%). The annual cost of a growing number of “specialty drugs”—high-cost, often injectable biologic medications such as eculizumab (Soliris), pralatrexate (Folotyn), and elosulfase alfa (Vimizim)—exceeds $250,000 per patient. …

“In practice, however, competition between 2 or more brand-name manufacturers selling drugs in the same class does not usually result in substantial price reductions. For example, of the 8 cholesterol-lowering statins that the FDA has approved, 2 have until recently remained patented: rosuvastatin (Crestor) and pitavastatin (Livalo). Despite the similar performance of these drugs in decreasing low-density lipoprotein cholesterol to other off-patent statins, the price of rosuvastatin increased 91% between 2007 and 2012, from $112 to $214 per prescription. During the same time, the price of the comparably effective atorvastatin decreased from $127 to $26 per prescription owing to the expiration of its patent protection in 2011. Similar effects have been observed for other drug classes.”
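The percent changes quoted in that passage follow directly from the per-prescription prices given. A minimal sketch, using only the numbers in the quotation:

```python
def pct_change(old, new):
    """Percent change from an old price to a new price (negative = decline)."""
    return (new / old - 1) * 100

# Rosuvastatin (Crestor): $112 -> $214 per prescription, 2007 to 2012
print(round(pct_change(112, 214)))  # 91, matching the quoted 91% increase

# Atorvastatin: $127 -> $26 per prescription after its 2011 patent expiration
print(round(pct_change(127, 26)))   # -80, i.e., roughly an 80% decline
```

The same arithmetic makes the contrast vivid: the still-patented statin nearly doubled in price while its off-patent competitor fell to about a fifth of its former price.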

2) While competition from generic drugs often does help to bring down prices, that competition faces a number of limits. Brand-name manufacturers often find ways to push back competition from generics, and when a generic for a relatively rare condition has a monopoly, the price for the generic skyrockets, too. 

\”The only form of competition that consistently and substantially decreases prescription drug prices occurs with the availability of generic drugs,which emerge after the monopoly period ends.With FDA approval, these products can be substituted for bioequivalent brand-name drugs by the pharmacist under state drug product selection laws.In states with less restrictive drug product selection laws, generic products comprise up to 90% of a drug’s sales within a year after full generic entry. Drug prices decline to approximately 55% of brand-name drug prices with 2 generic manufacturers making the product, 33% with 5 manufacturers, and 13% with 15
manufacturers. In 2012, the US Government Accountability Office estimated that generic drugs accounted for approximately 86% of all filled prescriptions and saved the US health care system $1 trillion during the previous decade. …

\”Entry of generic drugs into the market, however, is often delayed. For pharmaceutical manufacturers, “product life-cycle management” involves preventing generic competition and maintaining high prices by extending a drug’s market exclusivity. This can be achieved by obtaining additional patents on other aspects of a drug, including its coating, salt moiety, formulation, and method of administration. … For their part, generic manufacturers have engaged in litigation with brand-name manufacturers that could lead to the patents being invalidated, but these suits are frequently settled. Historically, brand-name manufacturers have offered substantial financial inducements as part of these settlements to generic manufacturers to delay or even abort generic introduction. Settlements involving large cash transfers are called “pay for delay”; for example, in a patent challenge case related to the antibiotic ciprofloxacin (Cipro), the potential generic manufacturer received upfront and quarterly payments totaling $398 million as part of the settlement and agreed to wait until patent expiration to market its product.

“Although brand-name drugs account for the greatest increase in prescription drug expenditures, another area that has captured the attention of the public and of policy makers has been the sharp increase in the costs of some older generic drugs. In 2015, Turing Pharmaceuticals raised the price of pyrimethamine (Daraprim), a 63-year-old treatment for toxoplasmosis, by 5500%, from $13.50 to $750 a pill. The company was able to set the high price despite the absence of any patent protection because no other competing manufacturer was licensed to market the drug in the United States. Significant increases in the prices of other older drugs include isoproterenol (2500%), nitroprusside (1700%), and digoxin (637%). Even though the prices of most generic drug products have remained stable between 2008 and 2015, those of almost 400 (approximately 2% of the sample investigated) increased by more than 1000%. …
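The headline Daraprim figure checks out against the per-pill prices quoted: a jump from $13.50 to $750 is roughly a 5,456 percent increase, which the article rounds to 5500%. A quick sketch:

```python
# The "5500%" figure for pyrimethamine (Daraprim) follows from the per-pill
# prices quoted above: $13.50 before the increase, $750 after.
old_price, new_price = 13.50, 750.00
increase_pct = (new_price / old_price - 1) * 100
print(f"{increase_pct:.0f}%")  # about 5456%, reported as "5500%"
```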

3) The big government purchasers of drugs, Medicare and Medicaid, face legislative limits on encouraging or requiring the purchase of cheaper drugs or generic drugs.

“Medicare, for example, accounts for 29% of the nation’s prescription drug expenditure, but federal law prevents it from leveraging its considerable purchasing power to secure lower drug prices while requiring it to provide broad coverage, including all products in some therapeutic categories, such as oncology. Based in part on considerable lobbying and arguments that government negotiating power could decrease revenues for the pharmaceutical industry, Congress included a provision in the law that created the Medicare drug benefit program, prohibiting the Centers for Medicare & Medicaid Services from negotiating drug prices or from interfering with negotiations between individual Part D vendors and drug companies. …

“Similarly, state Medicaid programs are generally required by law to cover all FDA-approved drugs, even if a particular medication has alternatives that are safer, are more effective, or offer greater economic value. However, Medicaid is also entitled to receive a rebate of at least 23.1% of the average manufacturer price for most branded medications and is protected from price increases exceeding inflation.”

4) Prescription benefit managers are typically paid according to the total revenues of the drugs they manage, and thus lack a strong incentive to negotiate for lower prices. 

“In the 1990s, prescription benefit management companies became prominent intermediaries whose role would be to help employers or insurers promote appropriate prescription drug use and decrease its cost. There have been some recent isolated examples in which pharmacy benefit managers have done so for specific drugs (most prominently for drugs treating hepatitis C or the proprotein convertase subtilisin/kexin type 9 inhibitors to reduce cholesterol levels). However, aggressive price negotiation is not the norm. This is not surprising because part of pharmacy benefit managers’ annual fees are based on a given payer’s spending on drugs. Although the details of such payments are rarely disclosed, when one of the largest pharmacy benefit managers became a publicly traded entity, it was obliged to disclose its business model, much of which depended on payments from drug makers for shifting market share to their products from others in its class.”

5) State-level laws also tend to protect brand-name drugs by hindering competition from generics.

\”Notwithstanding high generic drug use rates, problems at the state level can diminish the capacity of generic drugs to help contain costs. Thirty states have drug product selection laws that allow but do not require pharmacists to perform generic substitution; in 26 states, pharmacists must secure patient consent before substituting a generic version of the same molecule. The latter obligation was estimated to have cost Medicaid $19.8 million in 2006 for simvastatin (Zocor) alone. In addition, all states allow physicians to issue dispense-as-written prescriptions that pharmacists cannot substitute with a generic product, further contributing to hundreds of millions of dollars in spending on branded drugs for which generic versions are available.\”

6) Large self-insured employers have traditionally felt that the potential cost savings from negotiating hard over drug prices, or pushing for alternative and cheaper drugs, wasn’t worth the risk of a bad public relations episode.

“Even large, self-insured employers have avoided aggressive attempts to negotiate prices directly with drug suppliers or to curtail their formularies to avoid paying for prescriptions that are less cost effective. A common reason for this reluctance is that because pharmacy benefits have traditionally comprised less than 15% of health care budgets, the organizational concern that could be caused by denying payment to an employee or retiree for a particular drug was seen as overwhelming the modest savings that could be realized. This may change as drug prices increase, particularly for widely used products, and as drug spending consumes a greater share of health budgets.”

Overall, this lack of competition contributes to high and rising prescription drug prices. One tradeoff is less money in household and government budgets to spend on other priorities. Another is that people facing high drug costs become less likely to take their medications on time and in full, which leads to preventable health problems.

There is also a potential tradeoff between cheaper drugs today and incentives for innovation leading to new and improved drugs in the future. There are a variety of ways to provide additional incentives for innovation, including more government support or tax incentives for R&D, and reform of Food and Drug Administration protocols so that testing and bringing a new drug to market is not so difficult and costly. In comparison, having drug companies seek out generic drugs where they can be the sole producer and then jack up the price doesn’t seem an especially useful incentive.
There’s a solid economic case for patents and intellectual property, which offer some protection from competition, but whether it’s drugs or some other product, the case for patents doesn’t imply that the remaining competitive forces should be stripped out of broad areas of the market.

State and Local Government Business Incentives: Data Tells a Story

When a state or local government offers a tax incentive to a business for locating or expanding in its jurisdiction, cross-cutting motives are at work. For the business, it’s a chance to get a tax break–maybe for a business decision that would have happened anyway. For the government, it’s a chance to show that it’s “doing something” to help the economy and to claim credit for the location or growth of certain businesses–even if those are business decisions that would have happened anyway. Whether tax incentives actually alter business decisions or help a local economy overall is difficult to sort out, but for any social scientist, the starting point is to have some actual data.

Timothy J. Bartik has compiled “A New Panel Database on Business Incentives for Economic Development Offered by State and Local Governments in the United States” (Upjohn Institute, February 2017). For a short overview of this work, see Bartik’s article “Better Incentives Data Can Inform both Research and Policy” in the Upjohn Institute Employment Research newsletter for April 2017. As Bartik writes in the newsletter:

“Using data from 1990 to 2015, the “Panel Database on Incentives and Taxes” estimates marginal business taxes and business incentives for 45 industries in 33 states; the industries compose 91 percent of U.S. labor compensation, and the states produce over 92 percent of U.S. economic output. The database has data for a new facility starting up in each of 26 “start years.” Compared to prior studies, the new database provides more incentive details, such as how incentives are broken down by different incentive types (e.g., job creation tax credits vs. property tax abatements), and the time pattern by which incentives are paid out over a facility’s life cycle.”

For some questions, it can be hard to extract an answer from data, and doing so involves a lot of assumptions and calculations. But for other questions, the data comes close to telling the story.

For example, have state and local tax incentives for business been rising or falling over the last 25 years? Here’s a figure showing the time trend in business incentives, expressed as a percentage of the gross state and local taxes paid by business. Clearly, the level is dramatically higher than 25 years ago. Bartik notes that a big reason for the jump around the year 2000 is the expansion of the “Empire Zone” incentives in New York.
Of course, this overall increase conceals differences across states.

“From 2001 to 2007, big increases in incentives occurred in New Mexico, Missouri, Indiana, North Carolina, Nebraska, and Texas. This includes some Southern and Midwest states plus the Great Plains state of Nebraska and the southwestern state of New Mexico. Over this same time period, New York significantly reduced incentives, from 5.79 percent to 5.20 percent, although New York incentives remained high. From 2007 to 2015, big incentive increases occurred in Tennessee, New Jersey, Wisconsin, Minnesota, Colorado, Oregon, and Arizona. What is noteworthy here is that some states that previously had very low incentives, such as Tennessee, Wisconsin, Minnesota, Colorado, and Oregon, began to use incentives at a much higher level. But over this same time period, big decreases in incentives occurred in New York, Michigan, and Missouri. The big decreases in New York were due to the demise of the Empire Zone program. In Michigan, Governor Rick Snyder jettisoned the expensive MEGA incentive program as part of a policy package that rolled back general business taxes. The Missouri decrease is due to the Quality Jobs program (a JCTC) being replaced with a less costly job creation tax credit, Missouri Works.”

Looking at the data also can raise some basic questions about the structure of these incentives, both in terms of the form in which the incentives are being provided and in terms of the extent to which such incentives are front-loaded.

Here’s the issue with front-loading incentives. A business thinking about a location or expansion decision is focused on the relatively short term. Bartik cites an estimate that businesses often discount revenues earned in the future by about 12% per year. As a result, a tax break that arrives 10 or 20 years in the future has relatively little effect on the business’s decision. For the government, however, that tax break is very likely to end up being cashed in–at least if the business continues over that time.
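The discounting arithmetic behind this point is easy to check. Here is a minimal sketch (the 12% discount rate is the estimate Bartik cites; the specific years shown are just illustrative):

```python
# Present value to a firm of $1 of tax break received t years from now,
# discounted at the ~12% annual rate Bartik cites for businesses.
def present_value(years, rate=0.12):
    return 1 / (1 + rate) ** years

for t in [1, 5, 10, 20]:
    print(f"$1 of tax break {t:2d} years out is worth ${present_value(t):.2f} to the firm today")
```

By this arithmetic, a dollar of tax relief 10 years out is worth roughly 32 cents to the firm, and 20 years out only about a dime, even though the government eventually forgoes close to the full dollar.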

Thus, if a state or local tax incentive is spread over a long period of time, the government is effectively offering something of real cost (the tax break), which doesn’t affect the business decision all that much. If business incentives are to be offered at all, it makes more sense to front-load them. The figure shows the pattern of tax incentives over time in Oregon (black line), which heavily front-loads incentives, and in Tennessee (green line), which doesn’t front-load much. The blue line shows the overall pattern in the data in 2000, and the red line shows the data pattern for 2015–showing that front-loading has become just a little more common in the last 15 years.

Bartik writes: “As of 2015, in the average U.S. state, incentives are substantially front-loaded. This front-loading has increased over time, but front-loading is arguably not as great as it should be. And some states tend to provide very long-term incentives that probably do not have large payoffs in swaying business location decisions.”

What about the form in which incentives are given? The data from the survey reveals: 

“[O]f the total cost of incentives in the average state in 2015, the largest incentive type was JCTCs [job creation tax credits], which were almost half of total incentive costs (44.9 percent). Another quarter of incentive costs (27.4 percent) were due to property tax abatements. The remaining three incentive types (ITCs [investment tax credits], R&D tax credits, customized job training) together constituted about one-quarter of incentive costs. …

“Over time, how has the use of different incentive types changed? … [A]veraged across all states by far the biggest change is that JCTCs have gone from virtually nothing in 1990 to 45 percent of all incentives today, or from 0.01 percent of value-added to 0.64 percent of value-added. Of the 0.96 percentage point increase in overall incentives from 1990 to 2015 (from 0.46 percent to 1.42 percent), about 0.63 percent is due to increased JCTC usage, or about two-thirds of the total incentive increase. …

“Although nationally JCTCs were most important, some states whose overall incentives were above the national average had little or no JCTCs: Alabama, DC, Iowa, Kentucky, and Pennsylvania. Property tax abatements were particularly important (greater than 1.30 percent of value-added versus the national average of 0.39 percent) in DC, Michigan, New Mexico, Pennsylvania, and Tennessee. Investment tax credits were particularly large (greater than 0.9 percent of value-added versus the national average of 0.20 percent) in Alabama, Kentucky, Nebraska, and South Carolina. R&D tax credits are usually not very large, but in some states with overall low incentives (less than 0.55 percent in overall incentives as percent of value-added), R&D tax credits were a large share of what incentives are provided: California, Maryland, Massachusetts. Customized job training is also usually not a very large incentive, but it was well above the national average (greater than 0.30 percent versus national average of 0.07 percent) in New Mexico, Iowa, and Missouri.”

Which approach is most likely to be effective? Bartik is suitably cautious here, and careful to label various results as preliminary. But he does write in the newsletter (citations omitted): 

“Incentives designed as customized services may be more effective than tax incentives. For example, customized job training is a very effective incentive. Research suggests that, per dollar, customized job training might be 10 times more effective than tax incentives in encouraging local business growth. Other effective customized services include manufacturing extension programs, which have been shown to improve productivity. Why are such customized services more cost-effective? They tend to be more targeted than tax incentives at small and medium-sized businesses, whose location and expansion behavior is easier to affect than large businesses’. Because obtaining quality job training services or business advice may be difficult for smaller businesses to do on their own, the value of such services may exceed their costs. Finally, customized services provide up-front assistance, helping the business be more productive immediately. However, despite the greater cost-effectiveness of customized services, state and local incentives are more focused on tax incentives. For example, the typical state only spends $1 on customized job training for every $20 devoted to tax incentives.”

What about the hardest question of all–that is, in how many cases does the incentive actually alter the choice a business makes, rather than just adding a little gravy to the choice that would have been made anyway? Appropriately hedged, Bartik writes:

The database suggests incentive effects toward the low end of prior estimates: the average incentive package, 1.4 percent of value-added, might tip the location decision of 6 percent of incented businesses—the other 94 percent of the time, the state would have experienced similar growth without the incentive. This typical 6 percent tip rate of incentives is a low batting average. To have benefits greater than costs, incentives must do something special—they either need unusually high benefits per job created, or incentive designs must exceed the typical batting average, lowering costs per job created. …

Although the rate of growth of incentives has slowed down, it is still more likely that states will significantly increase rather than decrease incentives. New states have entered more vigorously into the incentive competition. Incentives are still far too broadly provided to many firms that do not pay high wages, do not provide many jobs, and are unlikely to have research spinoffs. Too many incentives excessively sacrifice the long-term tax base of state and local economies. Too many incentives are refundable and without real budget limits. States devote relatively few resources to incentives that are services, such as customized job training. Based on past research, such services may be more cost-effective than cash in encouraging local job growth.
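A back-of-the-envelope calculation (mine, not Bartik's) shows why a 6 percent tip rate is such a low batting average: every incented firm collects the package, but only the tipped ones represent growth the state would not otherwise have had.

```python
# Illustrative arithmetic on the 6% "tip rate": all incented firms get the
# package, but only 6% of location decisions are actually changed by it.
package_cost = 1.0   # normalize the average incentive package to 1 per firm
tip_rate = 0.06      # share of incented firms whose decision was swayed

# Public cost per location actually induced by the incentive
cost_per_induced_location = package_cost / tip_rate
print(f"Cost per induced location: {cost_per_induced_location:.1f}x the average package")
```

So the effective public cost per induced location is roughly 17 times the sticker price of a single package, which is why the benefits per job created must be unusually high for the typical incentive to pass a cost-benefit test.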

So if your state or local government is considering an incentive, the first step is to think a second, and then a third, and perhaps even a fourth time about whether it actually makes sense. If the decision is to proceed, then the discussion should move to issues of design: for example, a front-loaded payment that supports customized training for a high-tech firm with a good chance of generating local spinoffs has a better chance of being a good deal for the taxpayers and the local economy than a 20-year tax break.

The Economics of the ’Stans

The semi-official name for the region is “Central Asia,” but I confess that I usually think of it as “the ’Stans”–that is, Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. These countries have now been independent of the former Soviet Union for 25 years. Taken together, they have nearly 70 million people, comparable to the population of the United Kingdom or France. As someone who is driven by curiosity and guilt, it bothers me that I do not know more about this region. Thus, I was delighted to find that Uuriintuya Batsaikhan and Marek Dabrowski have written a primer on the economics of this area, “Central Asia at 25” (Bruegel Policy Contribution #13, May 2017).

From a broad-brush perspective, some of the economic experience of this region looks similar to other “transition economies” that separated from the Soviet Union: that is, they first experienced several years of severe recession and hyperinflation, but then adjusted and have performed somewhat better. For illustration, here’s a graph showing growth rates for these five economies over the last 25 years.

And here’s a figure showing inflation rates over that time. Notice that the left-hand axis measures inflation on a logarithmic scale, so the upper numbers on the left-hand side represent inflation in the range of several hundred percent per year.

The economic and political legacy of these countries as they gained autonomy from the Soviet Union also has some similarities. Batsaikhan and Dabrowski write:

Similarly to other parts of the Soviet Union, agriculture was forcibly collectivised in the early 1930s. The human costs of the Soviet modernisation of Central Asia were enormous. They included several rounds of famine in the 1920s and 1930s, repression and terror in the 1930s, the building of a large network of labour camps (the so-called Gulag system) where political opponents from the entire Soviet Union were imprisoned and where they perished in large numbers, and mass-scale resettlements (ssylka in Russian) of populations from the European part of the Soviet Union. The latter affected social groups such as the kulaks (better-off farmers) and included the deportation of entire ethnic groups in the 1940s, including Volga Germans, Chechens, Ingush, Crimean Tatars, Crimean and Caucasian Greeks, Meskhetian Turks, Koreans, Karachays and Poles.
After the death of Stalin in 1953 and the partial dismantling of the Gulag system, the Soviet-type forcible modernisation and industrialisation continued but with the use of less coercive methods. These included the conversion of pastures (‘virgin land’ or tselina in Russian) in northern Kazakhstan into large-scale wheat farms, the building of the Main Turkmen and Karakum canals, and the operation of the Semipalatinsk Nuclear Test Site and the Baikonur Cosmodrome (both in Kazakhstan). Many of those projects caused severe environmental damage (such as the disappearance of the Aral Sea and radioactive pollution over large areas of Kazakhstan), which have not been overcome yet. …
The rapid and forcible industrialisation of the Soviet era (with a strong focus on military needs) was associated with huge structural distortions and microeconomic ineffectiveness. After the dissolution of the Soviet Union, many industrial enterprises in Central Asia lost their previous markets and were unable to compete under the new market conditions. Partial de-industrialisation in the post-Soviet period was thus no surprise. After a painful transition period, growth picked up in 2000s, largely driven by growing exports of commodities such as oil and natural gas (Kazakhstan, Turkmenistan and Uzbekistan), aluminium (Tajikistan), gold (Kyrgyzstan), cotton (Tajikistan and Uzbekistan) and other metals (Kazakhstan).

Although the nations of Central Asia have some similarity in their economic challenges, there are notable differences, too. At the simplest level, they vary considerably in population: “Uzbekistan is the most populous country with 31.3 million people, followed by Kazakhstan, 17.5 million, Kyrgyzstan, 6 million, Tajikistan, 8.5 million and the least populated Turkmenistan, 5.4 million.”

Moreover, the standard of living in these countries varies considerably. The figure shows GDP per capita since independence. The solid lines are the five countries of Central Asia; the dashed lines are some nearby comparison economies. The figure shows that Kazakhstan and Turkmenistan have seen substantial economic growth in the last couple of decades, while Kyrgyzstan, Tajikistan, and Uzbekistan have not.

Geopolitically, these countries sit between Russia and China. (To help remain oriented, the three countries to the west of the Caspian Sea, unlabelled on this map, are Georgia, Armenia, and Azerbaijan.)

As the map illustrates, “the region is distant from the major centres of world economic activity: North America, Western Europe, East and South East Asia. … [A]ll countries are landlocked (Kazakhstan is the largest landlocked country in the world and Uzbekistan is double landlocked, ie it borders only landlocked countries) with limited transportation connections inside and outside the region.”

(Trivia question: Other than Uzbekistan, only one other country is double-landlocked. It’s Liechtenstein, sitting between Switzerland and Austria.)

The map also shows the main energy and transportation infrastructure that is being built across the region, much of which connects the region with China. Here’s a figure showing trade with China and Russia: both are up a lot in the last 15 years or so, but trade with China seems to be taking the lead. As one example: “Turkmenistan’s exports to China constituted 1 percent of its total exports in 2009, increasing to almost 80 percent by 2015, almost all of which is natural gas; Turkmenistan’s second largest trading partner, Turkey, takes only 5 percent of its total exports.”

But while trade with China has been on the rise, and the infrastructure projects will likely keep it on the rise, the three lagging economies of the region are especially dependent on remittances from workers who have taken jobs abroad. Most of these jobs are in Russia, presumably because of the long-standing historical ties, but other substantial destinations for migrant workers from this region include Turkey and relatively well-off Kazakhstan.

Overall, what does the reform agenda look like in Central Asia?

All Central Asian countries are doing poorly in the areas of governance and enterprise restructuring and competition policy, pointing to their limited progress in more complex institutional and legal reforms.  … Corruption remains a major problem in the region, particularly in Turkmenistan and Uzbekistan …. The largely authoritarian character of the political systems in Central Asia is the main cause of their poor governance and business climate, and their insecure property rights and rule-of-law deficit. According to the Freedom House Freedom in the World 2017 report, only Kyrgyzstan is rated as ‘partly free’, while the others are ‘not free’. Uzbekistan and Turkmenistan belong to the group of the 10 most politically oppressive countries in the world, alongside North Korea and Eritrea. …

The list of required reform measures differs between countries but also contains a common agenda for the entire region. Turkmenistan and Uzbekistan must complete basic market reforms: domestic price liberalisation, reducing explicit subsidies for food, energy and water, and cross-subsidisation (in public utilities), unification of exchange rate and current account convertibility, trade liberalisation, WTO accession, greater privatisation and elimination of barriers to private entrepreneurship, both domestic and foreign, and building financial market infrastructures.

On the other hand, all Central Asian countries, even Kazakhstan and Kyrgyzstan where reforms are more advanced, face the same challenges of oppressive and predatory post-Soviet states. These are deeply rooted corruption, rent seeking, state capture, administrative harassment of business and more broadly, a high degree of business uncertainty and insecurity over property rights. The situation looks particularly bad in all areas where economic management interacts with authoritarian political systems and legal institutions, especially those related to the judiciary, law enforcement agencies and public administration. … Closer intra-regional cooperation would also improve the business and investment climate. Given the region’s remote geographical location, its complicated borders, infrastructure inherited from Soviet times and cultural proximity, unrestricted movement of goods, services, people and capital between Central Asian countries would greatly contribute to their economic development. Closer cooperation would also help Central Asian countries to jointly promote their interests vis à vis those of their major neighbours.

More Fish Through Less Fishing

The way to have a higher sustainable catch of fish, long-term, is to have a lower catch of fish, short-term. This insight isn’t all that paradoxical or surprising. Fisheries are a standard example for economists of what can go wrong when property rights are not clear. If the level of fishing is low, then the resource can replenish itself naturally, and specific rules or property rights are not needed. But when the level of fishing becomes high enough that the resource cannot replenish itself, each individual fishing boat retains an incentive to keep going out there and getting what it can–after all, if one boat doesn’t, another boat will. The predictable result is that the fishery becomes depleted.

Ragnar Arnason, Mimako Kobayashi, and Charlotte de Fontaubert discuss the situation in The Sunken Billions Revisited: Progress and Challenges in Global Marine Fisheries, published by the World Bank (February 2017).

As they point out, the last quarter-century or so has seen dramatically more effort to catch fish, with the actual catch stagnant or declining. They write of a “long-term decline in fish stocks, stagnant or even slightly declining catches since the early 1990s, and an increase in the level of fishing by a factor of as much as four. The productivity of global fisheries decreased tremendously, as evidenced by the fact that catches did not increase nearly as rapidly as the global level of effort (apparent in a doubling of the size of the global fleet and a tripling of the number of fishers).”

Here\’s a figure summarizing the extent to which fisheries around the world are underfished, fully fished, or overfished.

The report is largely a discussion of a model of global fisheries and what it would take to rebuild them. The authors write:

Severely overexploited fish stocks have to be rebuilt over time if the optimal equilibrium is to be reached and the sunken billions recovered. To allow biological processes to reverse the decline in fish stocks, fishing mortality needs to be reduced, which can only happen through an absolute reduction in the global fishing effort (as captured by the size and efficiency of the global fleet, usually measured in terms of the number of vessels, vessel tonnage, engine power, vessel length, gear, fishing methods, and technical efficiency). Reducing the fishing effort in the short term would represent an investment in increased fishing harvests in the longer term. Allowing natural biological processes to reverse the decline in fish stocks would likely lead to the following economic benefits:

• The biomass of fish in the ocean would increase by a factor of 2.7.
• Annual harvests would increase by 13 percent.
• Unit fish prices would rise by up to 24 percent, thanks to the recovery of higher-value species, the depletion of which is particularly severe.
• The annual net benefits accruing to the fisheries sector would increase by a factor of almost 30, from $3 billion to $86 billion.

This study looks at two hypothetical pathways that would allow fish stocks to recover. At one extreme, if the fishing effort were reduced to zero for the first several years and then held at an optimal level, global stocks could quickly recover to over 600 million tons in 5 years and then taper off toward an ideal level. Reducing the global fishing effort by 5 percent a year for 10 years would allow global stocks to reach this ideal level in about 30 years …

This report makes a very clear case for the need for reform. It does not analyze policies, financing, or the socioeconomic impacts of embarking on such reform. Many case studies have shown that different strategies are called for in different circumstances. Whichever strategies are chosen, fishing capacity will have to be reduced, jeopardizing the livelihoods of millions of fishers. Financing will be needed to fund the development of alternatives for them, to provide technical assistance at all levels, and to conduct additional research on ecosystem changes and related ecological processes.
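The cumulative size of the gradual pathway quoted above is simple compounding. A sketch of that arithmetic only (the 30-year recovery trajectory itself comes from the report's bioeconomic model, not this calculation):

```python
# Cumulative effect of cutting global fishing effort 5% per year for 10 years.
effort = 1.0              # normalize today's fishing effort to 1
for year in range(10):
    effort *= 0.95        # each year's effort is 95% of the previous year's
print(f"Effort after 10 years: {effort:.2f} of today's level")
```

After a decade, effort ends up at about 60 percent of today's level, so the "gradual" pathway still amounts to roughly a 40 percent cut in global fishing effort overall.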

Later in the paper, they do offer a quick discussion of some policy issues. One minimal step would be to cut back on the policies that subsidize fishing.

However, global subsidies are significant: some recent studies estimate global fisheries subsidies at US$35 billion a year, around US$20 billion of which are provided in forms that tend to further increase fishing capacity. This subsidy level is equivalent to as much as one-third of the value of global fisheries production.

Most fisheries are in national waters, and thus the rules are ultimately set and enforced by nations. I’ve discussed in the past “Saving Global Fisheries Through Property Rights” (April 12, 2016). Such rights can be created through “catch shares” for individuals or through some form of community-based management. Either way, the challenge is to have someone managing fisheries for the long run–which in many places, means reducing the current catch.

Phelps on Dynamism vs. Corporatism

Edmund Phelps won the Nobel Prize back in 2006 for work that “[d]eepened our understanding of the relation between short-run and long-run effects of macroeconomic policy.” But for the last 10-15 years, he has focused on articulating a broader philosophy and economics of capitalism. His thinking in this area is expressed in his 2014 book, Mass Flourishing: How Grassroots Innovation Created Jobs, Challenge, and Change.

But for those looking for a summary of some of the book’s main themes along with additional thoughts, Phelps has also just written an article, “The Dynamism of Nations: Toward a Theory of Indigenous Innovation,” in Capitalism and Society (2017, 2:1, Article #3).

My own gloss on Phelps’s argument would go something like this: When standard economics (going back to Schumpeter) considers economic growth, it often follows a cookbook approach in which science creates ideas, engineers apply ideas, and then a business supplied with appropriately skilled workers makes and sells them. In this vision, most workers are cogs in this machine, motivated solely by their wages and their prospects for consumption of goods and services.

Phelps pushes back against what I’m calling the standard perspective in a number of ways. He places less emphasis on the role of science and new discovery for innovation, but correspondingly more emphasis on “business knowledge” or “commercial knowledge” in creating innovations about what to produce and how to produce it. He argues for the importance of having large swaths of the workforce actively involving themselves in this process of business discovery and innovation. He argues on economic grounds that this broader involvement is important for innovation, and on philosophical grounds that many people find a deeper life satisfaction from involvement in creativity and a career.
Finally, he argues that societies provide a more or less hospitable climate to this broader involvement in innovation, in the balance that is struck in any society between the value placed on creativity, innovation, and disruption, and the value placed on material well-being, predictability, and security. He argues that American society has become in a broad sense less hospitable to innovation, at least in most of the industries outside information technology.

Obviously, it’s hard to summarize an argument of this breadth in a compact way. But here are a few comments from the article to give a sense of his argument.

On the importance of innovations that are not a direct result of scientific breakthroughs, but rather of business knowledge:

“An encyclopedia of major innovations must be full of new products and methods not triggered by – or even linked to – any particular scientific advance: Fire, wheel, writing, paper, the Egyptians’ steam power, Gutenberg’s printing press, Whitney’s cotton gin, Waltham’s interchangeable parts, Deere’s moldboard plow, Lille’s chlorination of drinking water, Singer’s sewing machine, Pasteur’s persuading surgeons to wash their hands, Edison’s lightbulb and phonograph, Nightingale’s hospital reorganization to contain disease, the Lumière brothers’ commercial films, Marconi’s radio, Sarnoff’s radio network, Birdseye’s frozen foods, Farnsworth’s television, IBM’s computer (aimed at businesses), Malcolm McLean’s 1956 containerization, Nat Taylor’s 1957 multiplex cinema, Ted Turner’s 24-hour news program, Howard Schultz’s Starbucks, Marc Andreessen’s web browser, etc. Fracking, the latest of the big innovations, depended on the expertise that engineers gained from experience, not some exogenous advance of science. Of course, the total impact of the millions of unrecognized innovations might well exceed that of the many thousands of well-recognized ones such as those just mentioned.

“We can see how Schumpeter went wrong. Science is not the sole source of all knowledge. While advancing science may be expanding potential knowledge of production possibilities, science does not tell us whether there will be a market for any of the new possibilities; business knowledge is indispensable here. And while the level of general scientific knowledge in the world may well have contributed to innovations achieved in modern economies, the outpouring of new products and methods from the 1820s to the 1960s in some countries may not have resulted from scientific discoveries more than from myriad business discoveries – discoveries made in the tests and try-outs of new business ideas. It is not established that scientific knowledge, S, grew faster than business, or commercial, knowledge, C, or that growth of the former is more effective in bringing economic advance than growth of the latter. (Furthermore, the cumulative level of business knowledge – industry expertise and related know-how – is surely more voluminous.)”

Here’s Phelps on the importance of a large number of people with creativity and ingenuity for the process of innovation, and the importance of social support for what Phelps calls “dynamism” rather than for “corporatism” or “solidarity.”

“The modern capitalist system offers the latitude to innovate, the capacity to do so and, above all, the desire. For high dynamism, a nation – its families, communities and public offices – must give individuals and their companies the latitude and support they need if they are to attempt and to achieve innovation. There is little leeway for innovation if society is unwilling to put up with the “creative destruction” – even the mild disruption or inconvenience – that may accompany it. And there is wide latitude for the innovator where mayors and other public officials are eager to facilitate start-ups and help them with their development. Patent trolls and a climate of litigation pose daunting hazards for start-ups aiming to innovate. Corporatism comes in here: A dogma of a corporatist society is “solidarity.” It calls for providing “social protection” of the myriad interest groups in the economy. So the government might defend the workers or investors in industries by regulating entry with the purpose of barring outsiders with new ideas. In some industries, companies may operate a cartel that removes incentives to innovate in order to gain market share. Solidarity also requires the gains of enterprises, whether from a change of market conditions or a successful innovation, to be shared with so-called stakeholders. So any enterprise contemplating an attempt at innovation would expect that a substantial profit would be largely turned over to the community or to the state. …

At the heart of a nation’s system for high dynamism are people with the desire or occasional urge to innovate. Some may have motivations found among entrepreneurs, such as a need to succeed or to strike it rich. Some others, however, may want to make a difference or show they can go their own way. Some are driven by a curiosity to see whether their insights prove right. Still others are motivated by a desire to give something to their community or society. (Obviously the latter attitudes or traits are not the work-and-save mentality of mercantile capitalism.) I would add that, although a person’s desire to innovate may be inborn to a degree, it can be boosted by supportive attitudes of parents and teachers; and it can be repressed by unwelcoming or hostile attitudes toward creativity or novelty. Furthermore, businesspeople will have more desire to innovate in a nation that admires such ventures and provides workforces that will be engaged in the project and want to contribute. These same desires can also be inhibited by repressive attitudes in families and communities. The empirical effects of attitudes and traits on the dynamism of nations have been the subject of much recent research. The paper I presented at the 2006 Conference of the Center on Capitalism and Society in Venice tested the statistical significance of several attitudes reported in the World Values Surveys, and it presented estimates of the efficacy of these attitudes. Economies exhibit better performance in nations in which more people regard work as important to them, want to have some initiative at work, seek jobs that are interesting, express acceptance of competition, and prefer “new ideas” to old ones. These results, which do not address innovation in particular, at least leave open the possibility that innovation is affected by these attitudes. 
A subsequent study by Gylfi Zoega also using WVS data found that the possession of a good “work ethic”, initiative, and trust of others raises job satisfaction; and these attitudes also affect a nation’s unemployment and labor force participation. Finally, in a study with Raicho Bojilov in 2012, I found that job satisfaction is higher – very likely because innovation is stronger or more widespread – in nations where more people think it is fair to pay more to the more productive, agree that the direction of firms is best left to the owners and feel that new ideas may be worth developing and testing. …”

As one might anticipate from the tone and tenor of this argument, Phelps criticizes an excess of regulations often passed in the name of social protection for hindering creativity. But he saves some of his strongest language for what he calls “private sector decadence” and “corporatism.”

Some of the most serious faults of the once-dynamic economies lie in the private sector. A degree of corruption has seeped into some private institutions. The institution known as corporate governance is suspect. Most attempts at innovation are long-term projects shrouded in mystery, yet CEOs lean toward short-termism, aiming to maximize their bonuses and golden parachute by extracting every last gain in efficiency. This short-termism reduces the supply of innovation – the innovatorship, risk-capital, and venturesome end-users that innovation requires. CEOs in established companies make as few attempts at innovation as possible – explaining that there have been no “opportunities” to innovate. Financial people want to be paid on the basis of current profits, with little to no claw-back. The pressure is on corporations to meet quarterly earnings targets so as not to jeopardize hoped-for capital gains on the stock. A characteristic of established and even accomplished corporations is that they are unable to go beyond a careful concern for efficiency, which demonstrates to the corporate board and shareowners their zeal. … 

In all the discussion of reform, however, it is supposed that it is the “economy” that needs fixing – that the spirit of the modern economy and the values that inspire it remain strong: America remains at heart a nation of pioneers and innovators, Europe the home of mythic explorers and profound discoverers. But the “spirit” is a key part of an economy – the heart of it. The corruption of government and of corporations is not simply the ineluctable consequence of self-interest. People’s self-interests depend on their values. The transmutation of the state and the corporate sector is a result of a resurgence of the traditional values that we call corporatist values, which counter the influence of modern ones. At precisely this level of values, the rise of corporatism has transformed the functioning of the once-modern economies. …

The corporatism that was resurgent at the turn of the century – the doctrine of Ferdinand Tönnies, George Valois and Benito Mussolini, to name a few – disapproved of disorder, especially the topsy-turvy disorder that came with innovations and adaptations. Corporatism disapproved of those with the ambition to get rich, calling them “money-grubbers,” and hated the “new money” that displaced established wealth. It disapproved of competition, preferring instead the concerted action of society as a whole through the government. Most fundamentally, corporatism was an attack on individualism, calling for a state that would bring harmony and nationalism in place of the individual’s autonomy to take initiative and to innovate.  In the corporatist state, everyone is to go on working, accumulating wealth and managing companies – and all that is seen to be for the good of the social body. But no one is permitted to hire the nation’s labor and borrow its wealth to embark on a venture aimed at adventure, discovery and personal growth! What matters is society’s power to achieve material gains – public consumption, private consumption and leisure. Thus the rebirth of corporatism was a reaction against the modernism that was the root source of the spirit of the modern economy.

By now, corporatism is pervasive in all the nations of the West. Corporatism is behind the metastasis of vested interests, clientelism and cronyism that has brought a welter of regulations, grants, loans, guarantees, deductions, carve-outs, and evergreen patents mainly to serve vested interests, political clients, and cronies. In recent decades, large banks, large companies, and large government agencies formed a nexus to pump up home mortgage debt in America and to create unchecked sovereign debt and unfunded entitlements in several nations in Europe. America has joined Europe in having a parallel economy that draws its nourishment from the ideas of political elites, whatever their motives, rather than from new commercial ideas. All this has combined to choke off much innovation.

Corporatist thinking is behind various developments in the private sector. With the rise of stakeholders, anyone deciding to start an innovative company would have to expect that its property rights would be diluted as it copes with an array of figures – its own workforce, interest groups, advocates, and community representatives – who ardently believe they have a legitimate “stake” in the company’s results. Many employees feel they have the right to hold on to their jobs – no matter that many others would do the job for far less money – so long as they add something to profit or the company makes a profit from other divisions that can cover the loss.

As a reader of Phelps’s essay (and the earlier book), there are plenty of places where I feel some urge to push back against his argument. But it also seems to me that he is trying to enunciate something important. As he notes, standard economics has not produced a fully persuasive answer as to why productivity growth slowed across high-income countries in the early 1970s, or why male labor force participation started a steady decline at about the same time. There are also plenty of times and places where governments liberalized certain laws or opened an industry to competition, but the resulting changes were paltry or nonexistent. The fact base and logical linkages here can feel nebulous and elusive. But in a broad sense, it seems plausible to me that a society can offer a broader climate that is more or less supportive of innovation and change, and in turn, that this climate can have deeper effects on growth, work satisfaction, and the efficacy of economic policy.

Economic Prospects for the West Bank and Gaza

Even for economists with the narrowest focus and tunnel vision, it would of course be ludicrous to claim that economic issues are foremost among the disputations surrounding the Palestinian territories of the West Bank and the Gaza Strip. But the economic issues are far from nugatory, either. When you have a population with a per capita GDP below $2,000, when unemployment among young men aged 15-24 is often near 40%, when wages have been falling for a decade, and when foreign aid has recently fallen from one-third of GDP to 6% of GDP, then the economic situation is going to be part of what is generating turmoil and discontent.

Jacob Udell and Glenn Yago provide a readable overview of the topic in “Still Digging Out: The Economics of a Palestinian Future,” which appears in the most recent issue of the Milken Institute Review (Second Quarter 2017, 19:2, pp. 78-85). For additional background, the World Bank has published the “Economic Monitoring Report to the Ad Hoc Liaison Committee” (May 4, 2017), and the IMF has published “West Bank and Gaza: Report to the Ad Hoc Liaison Committee” (April 10, 2017). Here, I’ll draw on all three reports.

Udell and Yago describe the problems of the economy of the West Bank and Gaza in some detail. Here’s a trimmed-down version of a table from their article (I’m showing three-year intervals, rather than every year).

For example, Udell and Yago write:

“Though the labor force participation rate is currently at its highest since 2000 (at an unimpressive 46 percent), it has been accompanied by an overall spike in unemployment — implying that the net entry of job seekers into the market exceeds the ability of the economy to create employment. Meanwhile, the Palestinian Authority has also become the employer of last resort, with 23 percent of the workforce on its rolls. The wave of youth entering the labor market in the past decade, coupled with the frictional and structural unemployment of the adult population and almost nonexistent job growth, has left youth unemployment at alarming levels. Since 2001, for instance, unemployment among males aged 15–24, which seems to function as a leading indicator of civil unrest, has averaged 35 to 40 percent and reached 43 percent in 2014.”

“Since 2005, real average wages have decreased by some 10 percent, while unemployment remains at around one-quarter of the labor force, and average GDP growth lags behind population growth by 2.6 percent per year.”

“And while considerable sums flow into the territories from overseas Palestinians, there are no “diaspora bonds” or other vehicles to facilitate investment by Palestinian ex-pats (whose wealth estimates by the World Bank have varied from $40 billion to $80 billion). One mark of a lack of confidence in the economy: Palestinian investment abroad in 2015 was $5.9 billion — $1.3 billion more than foreign investment in Palestine.”
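The growth gap Udell and Yago mention compounds quickly. As a back-of-the-envelope sketch (the starting income level and growth rates below are hypothetical, chosen only to illustrate the arithmetic of a 2.6-point gap):

```python
# Back-of-the-envelope compounding: when GDP growth lags population growth,
# real per-capita income shrinks by roughly the gap each year.
# All numbers here are hypothetical, used only to illustrate the arithmetic.

def per_capita_income(income0, gdp_growth, pop_growth, years):
    """Real per-capita income after `years` at constant annual growth rates."""
    factor = (1 + gdp_growth) / (1 + pop_growth)
    return income0 * factor ** years

start = 2000.0  # hypothetical per-capita GDP, dollars
after_decade = per_capita_income(start, gdp_growth=0.007, pop_growth=0.033, years=10)
print(f"After 10 years: ${after_decade:.0f}")  # about $1550, roughly a fifth lower
```

A gap of this size, sustained for a decade, erodes average living standards by over 20 percent even with no outright recession.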

Along similar lines, the World Bank report notes:

“Currently, only 40 percent of those aged between 15 and 29 are active in the labor market, reflecting high pessimism regarding employment prospects. Despite a low participation rate, unemployment amongst this category reached 27 percent in 2016 in the West Bank and a staggering 56 percent in Gaza. … In the medium term, real GDP growth is projected to hover around 3.3 percent. This growth level implies a stagnation in real per capita income and an increase in unemployment. Moreover, downside risks remain significant. As mentioned earlier, the [government budget] financing gap for 2017 is unprecedented in terms of its size, and risks significant economic and social consequences if it is not closed through additional finance or policy measures.”

Of course, everyone agrees that a peaceful resolution of all political differences would be a boost for the local economy. But setting aside those issues, what might the shape of a healthier Palestinian economy look like? The World Bank report notes: 

“For such a small economy, achieving a sustainable growth path depends to a large extent on its capacity to compete in regional and global markets and increase its exports of goods and services. The Palestinian economy, however, has been losing this capacity as a result of a poor business climate mainly driven by externally-imposed restrictions on trade and access to resources in addition to the lack of political stability. In fact, the structure of the economy has substantially deteriorated since the 1990’s. For instance, the manufacturing sector, which is usually one of the key drivers of export-led growth, has largely stagnated and its share in GDP has dropped from 19 percent in 1994 to 11 percent in 2015. The share of the agriculture sector has also declined from 12 to 4 percent over the same period. In relative terms, most growth occurred in public sector services over the past two decades. Private investment levels, averaging about 15 percent of GDP in recent years, have been low and concentrated in low productivity activities less affected by political risk. Palestinian exports are focused largely on low value added products and services and their share in the economy has been low and stagnant at 17-18 percent. The substantial amounts of financial assistance from the international community received over the last two decades have so far helped mitigate the impact of the restrictions on growth, but aid has significantly declined in recent years (from 32 percent of GDP in 2008 to about 6 percent of GDP in 2016) and cannot continue to substitute for a poor business environment.”

Similarly, the IMF report notes that the structure of the Palestinian economy has been shifting away from agriculture and manufacturing:

“For example, agriculture and manufacturing together accounted for almost 33 percent of GDP in 1995, but their share had almost halved to 17 percent in 2015. At the same time, other sectors that might compensate as an economy develops (e.g., services) did not gain ground. While trade is one exception, the large structural trade deficit weighs negatively on the capacity of industry to create jobs.”

The economy of the West Bank and Gaza is extraordinarily dependent on remittances–that is, on money sent back by Palestinians working elsewhere. The World Bank report notes: 

“The Palestinian territories rank as the second largest source of international migrants in relation to its population in the world, after Syria. The magnitude and importance of remittance inflows to Palestine are undeniable. According to World Bank estimates, remittance inflows to the Palestinian territories were USD2.2 billion in 2015. This estimate does not include compensation of Palestinian employees working in Israel, which, according to the PMA [Palestine Monetary Authority], are an additional USD1.2 billion. Remittance inflows are also significant when considered relative to the size of the economy and other important financial flows. Inward remittances are about 17 percent of GDP, making the Palestinian territories one of the top 20 most remittance-dependent countries in the world. If compensation of Palestinian workers in Israel is included, the inward remittance flows rise to 26 percent of GDP. Further, inward remittances are twice as large as exports and comparable to aid including transfers to non-governmental organizations. Remittances exceed Foreign Direct Investment by a factor of 10 to 15.”

Udell and Yago describe the interdependence of the economy of the West Bank and Gaza with that of Israel:

“The Palestinian territories’ trade deficit is also a product of dependence on Israel and a lack of diversification of its trade partners. Israel is the biggest market by far for Palestinian goods, accounting for some 85 percent of Palestinian exports, which highlights the lack of Palestinian business development in the markets of Europe and the rest of the Middle East. Israel is also the territories’ major supplier, accounting for 60 percent of total imports.”

What are some potential areas on which economic growth in the West Bank and Gaza might build? One possibility is to seek to build ties of trade and migrant workers with Arab countries across the Middle East. Another is to emphasize better conditions for businesses to operate in the West Bank and Gaza. For example, the World Bank report notes at one point, “[I]t is vital for the PA to address institutional reform to ensure that energy suppliers are paid for their service – which is critical for both energy imports and investment in generation.” That advice points to a deeper problem for any business thinking about operating in the area.

Udell and Yago offer some scenarios and economic possibilities for the West Bank and Gaza that at least have a positive upside. They write:

Like Israel, Palestine lacks natural resources. But it does have a wealthy diaspora, a cultural commitment to education, and strong entrepreneurial and trading traditions vital to a modern, skills-based economy. Palestine could also capitalize on the good will and proximity of the Arab world; if it built efficient capital markets in a politically stable setting, Palestine could become a financial and commercial-services hub for the Arab East. It could also take advantage of historically low interest rates and the ability to leverage bilateral and multilateral guarantees to develop infrastructure in water, alternative energy, environmental protection, tourism, transportation and communications.

Consider, too, that the area has favorable conditions for developing high-value agriculture and agricultural technologies — fruits, vegetables, animals and high-value growing practices. Technology transfers from Israel, a country that has figured out how to grow food and fiber in unlikely places (in an environment similar to those found in Palestinian areas), could sharply improve Palestinian agricultural productivity. Currently, the average agricultural yield in the Palestinian Authority is just half of that in Jordan and 43 percent of that in Israel. The information and communications technologies sector, which has been a bright spot over the past decade, can continue to develop as a key driver of economic growth. … Arab states, along with the United States and Europe, could also offer preferential trade and tariff agreements to kick-start employment and production in special economic zones, as they have done in Jordan and Egypt over the past decade. …

Tourism agreements between Israel and Palestine that make it convenient to visit both Israel and Arab sites in single trips could also play an important role in driving economic development. Tourism, especially high-value-added tourism, is, after all, a labor-intensive industry with great potential for absorbing the large and rapidly growing numbers of unemployed Palestinians. …

Water. A handful of specific river basin projects would have an immediate impact on living standards in Palestinian cities and villages, as well as provide water for higher productivity agriculture and industry.

Energy. Natural gas production, electricity cogeneration and alternative fuels production (solar, biomass renewables) would reduce the need to spend scarce foreign exchange on imports and in some cases have the potential to be highly profitable. These projects would also generate stable, predictable revenues that could be used to leverage added private fund-raising.

Trade, tourism and transportation. Here, we would include regional inter-urban rail, port and, eventually, air facilities, as well as destination tourism at religious, archeological and recreational sites. It could be time to revisit the RAND Corporation’s attempt in the Arc Project to plan infrastructure to support commercial and residential development in an increasingly urbanized country — and offer viable alternatives to continuing life in refugee camps.

Housing construction and finance. The expansion of markets for mortgages would stimulate homeownership and urban revitalization, as well as invigorate focus on green buildings and sustainable housing in this fragile semi-arid environment.

Frankly, some of this reads like pie-in-the-sky advice to me. I have a hard time envisioning that “Palestine could become a financial and commercial-services hub for the Arab East.” But the hard reality is that the economic future of the West Bank and Gaza can’t be based on foreign aid checks and government jobs. On the World Bank “Doing Business” indicators, the West Bank and Gaza ranks 140th, right between Lao PDR and Mali. Among its neighbors, the West Bank and Gaza ranks ahead of war-torn Syria, Libya, and Iraq, but behind Egypt, Iran, and Jordan. In the Middle Eastern region, Morocco ranks 68th and Tunisia 77th in the “Doing Business” indicators.

Regulating Wall Street: What Needs to Happen Next?

The Wall Street Reform and Consumer Protection Act of 2010--commonly known as the Dodd-Frank Act--was a peculiar piece of legislation. It did not directly change financial rules or regulations; instead, it told financial regulators to write new rules in nearly 400 areas. Given that writing a regulation involves a legislatively-mandated process that includes rounds of mandatory feedback and cost-benefit calculations, it’s eyebrow-raising but not especially shocking that six years after passage of the bill: “Of the 390 total rulemaking requirements, 274 (70.3%) have been met with finalized rules and rules have been proposed that would meet 36 (9.2%) more. Rules have not yet been proposed to meet 80 (20.5%) rulemaking requirements.”

As these hundreds of new regulations began to take effect, and to interact with each other and the real world, there was inevitably going to be a need for follow-up legislation. A Republican-backed bill called the Financial CHOICE Act passed through the House Financial Services Committee earlier this week. A group of faculty members at the New York University Stern School of Business and the School of Law have combined to publish an e-book called Regulating Wall Street: CHOICE Act vs. Dodd-Frank, which offers a bunch of readable short essays on these topics.

My overall take is that although Dodd-Frank had some useful steps, it was most usefully interpreted as a cry by Democrats for “more regulation.” Conversely, the Financial CHOICE Act is essentially pushback by Republicans for “less regulation.” Dodd-Frank is full of problems and missed opportunities, but the Financial CHOICE Act, even if it turns out to be amended in sensible ways, would leave behind additional problems while managing to miss many of the same opportunities. US financial reform legislation doesn’t offer much inspiration about the ability of US legislators to see the bigger picture.

The “Introduction” by Thomas Cooley gives a useful overall perspective on the NYU volume:

The Dodd-Frank Act was not a fully formed set of rules or even a coherent new regulatory architecture for the United States. Rather it was an attempt to create some common mechanisms for communication and collaboration within the existing regulatory system through a newly created multi-agency organization—the Financial Stability Oversight Council (FSOC)—and a roadmap for rulemaking to address the obvious flaws in the system. It outlined a path for addressing the flaws in the existing regulatory architecture. The scope of Dodd-Frank is vast, covering everything from consumer financial protection to executive compensation in the financial sector, to the origins of “conflict minerals.” It outlined 390 rulemaking requirements, of which roughly 80% have been met. The resulting increase in regulatory complexity, compliance costs for financial institutions and coordination costs for the regulators has, not surprisingly, led to a backlash against the excesses of the Dodd-Frank regulations. …

Our early assessments of Dodd-Frank found much to criticize in the legislation, but we viewed it as an important step in the direction of making the financial system less risky. It was important because it correctly identified the overarching threat to financial stability and the root cause of the 2008 crisis as the accumulation of systemic risk—risk of collapse because of the interconnected financial risks— in the financial system. An objective of Dodd-Frank was to identify sources of systemic risk, identify systemically risky institutions, establish ways of monitoring systemic risk in the financial system, limit excessive risk-taking by financial institutions, and provide a roadmap for resolving insolvent institutions. To achieve these goals, Dodd-Frank created the FSOC to monitor systemic risk and identify “systemically important financial institutions” (SIFIs). …

With nearly seven years of additional perspective, the weaknesses are clearer. Dodd-Frank missed a golden opportunity to simplify and rationalize the very balkanized U.S. regulatory architecture, where responsibility is spread across many institutions, some with overlapping authority. Dodd-Frank did not sufficiently address the issue of the capital adequacy of financial institutions. Its proposals for the orderly liquidation of insolvent institutions were questionable. The proposed Volcker Rule was complicated and difficult to implement, and it became clear that proprietary trading and investing activities were not at the root of the financial crisis. Dodd-Frank did not address the problems of the Government-Sponsored Enterprises (GSEs) or housing finance. It did not address the problem of pricing government guarantees (deposit insurance, lender of last resort access, too-big-to-fail guarantees). It limited the lender of last resort (LOLR) authority of the Fed, constraining its ability to respond in a crisis. The result of the regulatory reform process that Dodd-Frank initiated, to date, has been a vastly more complicated regulatory structure that many doubt is adequate to forestall the next crisis and that some blame for the demise of many small community banks (institutions that are not viewed as part of the systemic problem) and a decline in bank lending.

The CHOICE Act has an attractive conceptual hook: the idea is that if a financial institution shows that it is healthy by having sufficient capital, then it should be exempt from a lot of the Dodd-Frank regulatory apparatus. After all, why micro-regulate a healthy firm? But Cooley argues that the level of capital that the CHOICE Act treats as “healthy” is far too low, and that the CHOICE Act goes too far in seeking to eliminate even the idea of “systemically important financial institutions” from the law. Cooley writes:

The CHOICE Act begins with a premise that we endorse: Financial institutions that are well capitalized relative to their risk exposure pose less risk to the financial system and make the possibility of a systemic crisis much smaller. It is widely agreed that the financial system was undercapitalized prior to 2008. But Dodd-Frank did not directly address the idea of ensuring financial stability directly through capital requirements, or at least it did not do it very well.  The CHOICE Act offers a very enticing prospect: Financial institutions that are “well managed and well capitalized—those with a simple leverage ratio of greater than 10%” would be offered an “off-ramp” from the Dodd-Frank regulations.

The CHOICE Act offers an extensive argument in favor of a simple leverage ratio as a measure of capital adequacy and a critique of the Basel risk-based capital approach. We generally support these arguments. The Act also offers a defense of the estimate of 10% as an adequate “safe” level. The essays in this White Paper address this issue in detail. The relevant empirical and quantitative evidence suggest that 10% is at the very low end of what might be an adequate level of capital to forestall a crisis. An indicator of how far off it may be is the “Minneapolis Plan.” This alternative proposal for ending Too-Big-To-Fail—based largely on higher capital cushions—envisions leverage ratios more than twice the CHOICE Act’s 10%.

There is also an issue with how the CHOICE Act measures the leverage ratio. It uses Generally Accepted Accounting Principles (GAAP). Under GAAP, the average leverage ratio of the U.S. globally systemically important banks (G-SIBs) already is 8.24%. But, under International Financial Reporting Standards (IFRS), which do not net out derivative positions but use gross derivatives positions, their average leverage ratio is 5.75%. For systemic risk, the latter measurement system is more appropriate, because netting of offsetting derivatives positions may not be feasible in a crisis.
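The GAAP-versus-IFRS gap Cooley describes is essentially denominator arithmetic. A minimal sketch (with hypothetical balance-sheet figures, not the actual G-SIB numbers) shows how netting versus gross treatment of derivatives changes the measured leverage ratio for the same capital:

```python
# Leverage ratio = capital / total exposure. GAAP nets offsetting derivative
# positions, shrinking the denominator; IFRS counts them gross. Same capital,
# different measured ratio. All balance-sheet figures below are hypothetical.

def leverage_ratio(capital, other_assets, derivatives_gross, derivatives_netted,
                   net_derivatives=True):
    """Return capital divided by total exposure, netting derivatives if asked."""
    exposure = other_assets + (derivatives_netted if net_derivatives else derivatives_gross)
    return capital / exposure

capital = 80.0        # equity capital, $ billions (hypothetical)
other_assets = 800.0  # non-derivative assets
gross = 600.0         # gross derivative exposure
netted = 170.0        # the same book after netting offsetting positions

gaap_style = leverage_ratio(capital, other_assets, gross, netted, net_derivatives=True)
ifrs_style = leverage_ratio(capital, other_assets, gross, netted, net_derivatives=False)
print(f"Netted (GAAP-style): {gaap_style:.2%}")  # 8.25%
print(f"Gross (IFRS-style):  {ifrs_style:.2%}")  # 5.71%
```

With identical capital, the gross measure reports a materially lower ratio, which is why the choice of accounting standard matters for where a 10% off-ramp threshold would bite.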

There is a deeper problem than just having the level of capital wrong. The CHOICE Act does not address the critical issue of what happens to the value of that capital when the economy and capital markets are in distress. It simply fails to recognize the nature and importance of systemic risk. … The CHOICE Act is completely misguided in wanting to eliminate the oversight of systemic risk and the use of stress tests to understand how capital holds up in a crisis. …

Dodd-Frank responded by limiting the ability of the Fed to use its 13(3) authority, for example by prohibiting loans to individual nonbanks outside of a pre-approved program of broad access. In our earlier books, we expressed concern that this limited the ability of the Fed to respond in a crisis. The CHOICE Act seeks to limit the Fed’s role even further by restricting how the Fed conducts its monetary policy, how it functions as the LOLR [lender of last resort], and how it exercises its regulatory responsibilities.

Finally, many of the important issues that were largely ignored by Dodd-Frank continue to be ignored by the CHOICE Act. Cooley writes:

The CHOICE Act is notable for the issues it did not touch. Like Dodd-Frank, it does not address the problems of the GSEs—Fannie Mae and Freddie Mac—that remain at the heart of the U.S. mortgage market. …

Another important gap is the neglect of the “shadow banking” sector. Neither Dodd-Frank nor the CHOICE Act addresses the systemic risks arising from de facto banking activities per se. But this sector was hugely important in the crisis. The growth of the “shadow banking” system permitted financial institutions to engage in maturity transformation with too little transparency, capital, or oversight. Large, short-term funded, substantially interconnected financial firms came to dominate key credit markets. Huge amounts of risk moved outside the more regulated parts of the banking system to where it was easier to increase leverage. Legal loopholes allowed large parts of the financial industry to operate without oversight or transparency. Entities that perform the same market functions as banks escaped meaningful regulation solely because of their corporate form. … The CHOICE Act has nothing to say about this important sector of the financial system.

In an essay later in the book, Kermit L. Schoenholtz spells out the issue of \”Streamlining the Regulatory Apparatus\” in more detail:

The U.S. regulatory system has been characterized as a “Rube Goldberg regulatory framework that is (fortunately) unique to the United States” … At the federal
level, we have three bank regulators (the Federal Deposit Insurance Corporation, the Federal Reserve, and the Office of the Comptroller of the Currency) and two financial market regulators (the Commodity Futures Trading Commission and the Securities and
Exchange Commission), as well as specialized regulators for a range of institutions and activities (including the National Credit Union Administration and the Federal Housing Finance Agency). We also have a college of regulators, the Financial Stability Oversight Council (FSOC), along with a Federal Insurance Office (FIO) that monitors that sector, and the Consumer Financial Protection Bureau (CFPB).

But this is only the tip of the regulatory iceberg. Each state has its own banking regulator. The states also have sole authority for the regulation and supervision of insurance and have their own state guarantee funds to backstop insurance contracts. State attorneys
general also occasionally use state laws to impose structural changes in the financial industry (as in New York’s numerous conflict-of-interest suits against securities firms). Finally, on top of the federal and state regulators, there also are the officially authorized self-regulatory organizations, such as the Financial Industry Regulatory Authority and the Municipal Securities Rulemaking Board, along with the numerous finance and real estate industry associations that intensively lobby regulators and legislators alike. …

Yet, despite the biggest financial crisis since the 1930s, the Dodd-Frank Act did almost nothing to simplify the U.S. regulatory structure. … Like Dodd-Frank, the Financial CHOICE Act also fails utterly to simplify this regulatory framework.

Could U.S. regulatory arrangements be radically streamlined, making the system more effective and less wasteful? Undoubtedly. The challenge of doing so is not conceptual, but political. Regardless of which party has majority control, Congress has shown no
inclination over time to simplify the system. A Volcker Alliance background report (Elizabeth F. Brown, “Prior Proposals to Consolidate Federal Financial Regulators”) details more than a dozen proposals since 1960 for consolidating the U.S. regulatory
system.  …

By contrast to the United States, most advanced economies have regulatory systems that are quite simple (see, for example, Elizabeth F. Brown, “Consolidated Financial Regulation: Six National Case Studies and the European Union Experience,” the
Volcker Alliance). As the economy with one of the world’s most competitive financial centers, and one of the world’s largest banking sectors relative to its national income, the United Kingdom provides an important and useful regulatory benchmark for the
United States. The U.K. regulatory system is composed of only three institutions: the Financial Policy Committee (FPC), the Prudential Regulation Authority (PRA), and the Financial Conduct Authority (FCA). The FPC and the PRA are housed within the Bank of England (BoE). The FPC is responsible for macroprudential policy, while the PRA implements microprudential oversight over depositories, insurers, and major investment firms. The FCA, organized outside of the Bank, sets conduct rules for more than 50,000 financial services firms and acts as the prudential regulator for firms not supervised
by the PRA.

Carl F. Christ, 1923-2017

Carl F. Christ (1923-2017) was first trained as a physicist at Colorado College and the University of Chicago. He later wrote: “Upon graduation in 1943 I went to work in Chicago on the Manhattan Project, the atom bomb project. Information circulated freely in internal seminars for the professional staff, and we soon found out what we were working on. At the time I was living in Concord Co-op House, founded and mainly occupied by pacifist Quakers. After much thought I decided that I was not a pacifist, and that I would prefer the U.S. rather than Germany to be the first to develop the bomb. (It was known that they were working on it too.)” But after the war, he decided “to look for a social science in which I could use my mathematics.” He went to the University of Chicago for a PhD in economics, and spent most of his professional career at Johns Hopkins University. A substantial number of students learned their introductory econometrics from Christ’s textbook, Econometric Models and Methods, first published in 1966. Here’s an obituary from Johns Hopkins, and here’s one from the Baltimore Sun.

Back in Spring 1990, Christ wrote a nice biographical essay for The American Economist called “A Philosophy of Life” (pp. 33-39, available through JSTOR, and the source of the quotation above). He gives a sense of academia as it used to be:

“On the Lake Michigan beach in Michigan, where I spent summers as a child, and still do now, was an elderly gentleman in swim trunks, known to the children as Mr. Knight. I was quite surprised on entering graduate school at Chicago to find him, fully clothed, teaching economics. He had great skepticism of anyone who appeared to know all the answers. He told his classes, “As soon as a person gets a theory, he’s lost.” …

“I owe a great debt as well to Leonid Hurwicz, who was not even on my committee. I had presented my paper at the famous Cowles seminar (where only “clarifying questions” were allowed until the discussion period arrived, and where “clarifying questions” from Arrow, Hurwicz, and Modigliani began to fly as soon as the seminar began). I thought I was finished with my thesis, until Hurwicz invited me to come down to the University of Illinois (where he had just moved) because he had some suggestions for me. I stayed in his World War II prefab hut with him and his family for two days while he went over my thesis with a fine-tooth comb. When I left I was very depressed. But soon I realized that he had done me an enormous favor, and that my thesis was much improved as a result.”

And here’s a comment about what econometrics can teach and what economists can know:

“I used to believe that it was possible to build and estimate an econometric model that would represent an invariant law of economic behavior, valid for many places and for long periods of time. I no longer believe this, because I have yet to see an econometric model that continues to describe new data with no change in its parameters. Instead, I believe that economic reality is so complex that the best we can expect of an econometric model is that it may approximately represent the relations among its variables for a limited place and time. Such an approximation may be very useful, and may permit us to make forecasts for short periods into the future. But until we have much more knowledge about human biology and its relation to economic, social, and political behavior I think we will not achieve econometric models that are invariant over wide reaches of space and time.

“What then do economists really know? I think we know a great deal. (Remember that this knowledge must be only tentatively accepted.) Most of our knowledge is about equilibrium situations and how they change, rather than about the path followed by the economy on the way to equilibrium. We know some rather simple things that can be stated in nontechnical terms. For example, we know that as incomes rise, a smaller fraction of income is spent for agricultural and extractive products, and a larger fraction for processed goods and for services. We know that increases in a country’s output per person require either more effort per person, more capital per person, or better productive techniques. We know that price ceilings create shortages, and price floors create unsold surpluses. We know that sustained rapid growth in the stock of money is accompanied by rapid inflation and vice versa. We know that government spending must be financed by some combination of taxation, revenue from sales of product, borrowing from private or foreign sources, issuing high powered money, or depleting stocks of government-held assets. We know that excise taxes, tariffs, and quotas reduce economic welfare in the sense that without them the same total resource pool could produce more satisfaction for some people at no cost to others. We know that permitting individuals to own property and engage in transactions with each other freely will benefit the participants and harm no one, provided that everyone is well informed and acts in his own interest, no one has monopoly power, and there are no external diseconomies such as pollution and no external economies such as increases in the value of my neighbor’s real estate if I improve mine. (These are very large provisos, and in some situations they are not even approximately satisfied.) We know that a system of private property and free contract leads to an unequal distribution of income and wealth. We know that social or political attempts to equalize the distribution of income and wealth have perverse incentive effects, so that equalization has a cost in the form of a reduction of total output, and we know a good deal about which kinds of policies are the most perverse in this respect (quotas and price controls) and which are the least perverse (income and inheritance taxes at moderate rates, and good public education and health programs). We know how to look for long term as well as short term effects, and for indirect as well as direct effects. We also know a great many technical theorems that require mathematics to state clearly and to prove. …

“Perhaps the way to sum up this philosophy in the fewest possible words is to say, ‘Use your head. And use your heart, too.’”
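
Among the things Christ says economists know is that price ceilings create shortages and price floors create unsold surpluses. As a purely illustrative sketch (the numbers below are invented for the example, not drawn from Christ's essay or any real market), a toy linear supply-and-demand model makes the mechanism concrete:

```python
# Toy linear market (illustrative numbers only):
#   demand: Qd = 100 - 2P      supply: Qs = -20 + 4P
def demand(p):
    return 100 - 2 * p

def supply(p):
    return max(0, -20 + 4 * p)

# Equilibrium where quantity demanded equals quantity supplied:
#   100 - 2P = -20 + 4P  =>  P* = 20, Q* = 60
p_star = 20
assert demand(p_star) == supply(p_star) == 60

# A binding ceiling below P* caps the price: quantity supplied falls
# while quantity demanded rises, leaving some buyers unserved.
ceiling = 15
shortage = demand(ceiling) - supply(ceiling)
print(f"shortage at ceiling price {ceiling}: {shortage} units")  # 70 - 40 = 30
```

Running the same arithmetic with a price floor above P* produces an unsold surplus instead, matching the other half of Christ's sentence.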

Those interested in digging more specifically into the legacy of Christ’s work on econometrics might usefully begin with the March-April 1998 issue of the Journal of Econometrics, which is a special issue devoted to “Studies in Econometrics in Honor of Carl F. Christ.” The issue isn’t freely available online, but for a taste, here are the paper titles and authors:

  • “Editor’s introduction: studies in econometrics in honor of Carl F. Christ,” by Lawrence R. Klein
  • “Econometric implications of the government budget constraint,” by Christopher A. Sims
  • “Impulse response and forecast error variance asymptotics in nonstationary VARs,” by Peter C. B. Phillips
  • “Business cycle analysis without much theory: A look at structural VARs,” by Thomas F. Cooley and Mark Dwyer
  • “Lending cycles,” by Patrick K. Asea and Brock Blomberg
  • “Quasi-rational expectations, an alternative to fully rational expectations: An application to US beef cattle supply,” by Marc Nerlove and Ilaria Fornari
  • “Identification and Kullback information in the GLSEM,” by Phoebus J. Dhrymes
  • “The finite sample properties of simultaneous equations’ estimates and estimators: Bayesian and non-Bayesian approaches,” by Arnold Zellner
  • “Model specification and endogeneity,” by Alice Nakamura and Masao Nakamura
  • “Finite sample moments results for the quasi-FIML estimator of the reduced form: The linear case,” by Michael D. McCarthy
  • “Nonlinear and non-Gaussian state-space modeling with Monte Carlo simulations,” by Hisashi Tanizaki and Roberto S. Mariano
  • “Heterogeneous information arrival and option pricing,” by Patrick K. Asea and Mthuli Ncube
  • “The detection and estimation of long memory in stochastic volatility,” by F. Jay Breidt, Nuno Crato, and Pedro de Lima
  • “Rational expectations, inflation and the nominal interest rate,” by Jean A. Crockett

Spring 2017 Journal of Economic Perspectives Available On-line

For the past 30 years, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which back in 2011 decided–much to my delight–that the journal would be freely available on-line, from the current issue back to the first issue in 1987. Here, I’ll start with the Table of Contents for the just-released Spring 2017 issue. Below that are abstracts and direct links for all of the papers. I will almost certainly blog about some of the individual papers in the next week or two, as well.

________________________

Symposium: Recent Ideas in Econometrics

“The State of Applied Econometrics: Causality and Policy Evaluation,” by Susan Athey and Guido W. Imbens

In this paper, we discuss recent developments in econometrics that we view as important for empirical researchers working on policy evaluation questions. We focus on three main areas, in each case, highlighting recommendations for applied work. First, we discuss new research on identification strategies in program evaluation, with particular focus on synthetic control methods, regression discontinuity, external validity, and the causal interpretation of regression methods. Second, we discuss various forms of supplementary analyses, including placebo analyses as well as sensitivity and robustness analyses, intended to make the identification strategies more credible. Third, we discuss some implications of recent advances in machine learning methods for causal effects, including methods to adjust for differences between treated and control units in high-dimensional settings, and methods for identifying and estimating heterogeneous treatment effects.
Full-Text Access | Supplementary Materials

“The Use of Structural Models in Econometrics,” by Hamish Low and Costas Meghir

This paper discusses the role of structural economic models in empirical analysis and policy design. The central payoff of a structural econometric model is that it allows an empirical researcher to go beyond the conclusions of a more conventional empirical study that provides reduced-form causal relationships. Structural models identify mechanisms that determine outcomes and are designed to analyze counterfactual policies, quantifying impacts on specific outcomes as well as effects in the short and longer run. We start by defining structural models, distinguishing between those that are fully specified and those that are partially specified. We contrast the treatment effects approach with structural models, and present an example of how a structural model is specified and the particular choices that were made. We cover combining structural estimation with randomized experiments. We then turn to numerical techniques for solving dynamic stochastic models that are often used in structural estimation, again with an example. The penultimate section focuses on issues of estimation using the method of moments.
Full-Text Access | Supplementary Materials

“Twenty Years of Time Series Econometrics in Ten Pictures,” by James H. Stock and Mark W. Watson

This review tells the story of the past 20 years of time series econometrics through ten pictures. These pictures illustrate six broad areas of progress in time series econometrics: estimation of dynamic causal effects; estimation of dynamic structural models with optimizing agents (specifically, dynamic stochastic equilibrium models); methods for exploiting information in “big data” that are specialized to economic time series; improved methods for forecasting and for monitoring the economy; tools for modeling time variation in economic relationships; and improved methods for statistical inference. Taken together, the pictures show how 20 years of research have improved our ability to undertake our professional responsibilities. These pictures also remind us of the close connection between econometric theory and the empirical problems that motivate the theory, and of how the best econometric theory tends to arise from practical empirical problems.
Full-Text Access | Supplementary Materials

“Machine Learning: An Applied Econometric Approach,” by Sendhil Mullainathan and Jann Spiess
Machines are increasingly doing “intelligent” things. Face recognition algorithms use a large dataset of photos labeled as having a face or not to estimate a function that predicts the presence y of a face from pixels x. This similarity to econometrics raises questions: How do these new empirical tools fit with what we know? As empirical economists, how can we use them? We present a way of thinking about machine learning that gives it its own place in the econometric toolbox. Machine learning not only provides new tools, it solves a different problem. Specifically, machine learning revolves around the problem of prediction, while many economic applications revolve around parameter estimation. So applying machine learning to economics requires finding relevant tasks. Machine learning algorithms are now technically easy to use: you can download convenient packages in R or Python. This also raises the risk that the algorithms are applied naively or their output is misinterpreted. We hope to make them conceptually easier to use by providing a crisper understanding of how these algorithms work, where they excel, and where they can stumble—and thus where they can be most usefully applied.
Full-Text Access | Supplementary Materials
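
The prediction-versus-estimation distinction in the Mullainathan-Spiess abstract can be sketched in a few lines of simulation. This is my own illustrative example, not from the paper: ordinary least squares targets the coefficients themselves, while a ridge penalty deliberately biases the coefficients toward zero in exchange for (often) better out-of-sample predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: many regressors, only a few with real effects.
n, k = 100, 50
X = rng.normal(size=(n, k))
beta = np.zeros(k)
beta[:5] = 1.0
y = X @ beta + rng.normal(scale=2.0, size=n)

# Parameter estimation: OLS targets beta itself, without systematic bias.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Prediction: ridge shrinks the coefficients toward zero -- biased as
# an estimate of beta, but typically a lower-variance predictor of y.
lam = 50.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# Evaluate each on fresh data drawn from the same process.
X_new = rng.normal(size=(n, k))
y_new = X_new @ beta + rng.normal(scale=2.0, size=n)
mse_ols = float(np.mean((y_new - X_new @ beta_ols) ** 2))
mse_ridge = float(np.mean((y_new - X_new @ beta_ridge) ** 2))
print(f"out-of-sample MSE, OLS:   {mse_ols:.2f}")
print(f"out-of-sample MSE, ridge: {mse_ridge:.2f}")
```

With 50 regressors and only 100 observations, OLS fits a good deal of noise; shrinkage trades a little bias for a variance reduction, which is the trade that machine-learning methods automate (typically by choosing the penalty via cross-validation).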

“Identification and Asymptotic Approximations: Three Examples of Progress in Econometric Theory,” by James L. Powell
In empirical economics, the size and quality of datasets and computational power has grown substantially, along with the size and complexity of the econometric models and the population parameters of interest. With more and better data, it is natural to expect to be able to answer more subtle questions about population relationships, and to pay more attention to the consequences of misspecification of the model for the empirical conclusions. Much of the recent work in econometrics has emphasized two themes: The first is the fragility of statistical identification. The other, related theme involves the way economists make large-sample approximations to the distributions of estimators and test statistics. I will discuss how these issues of identification and alternative asymptotic approximations have been studied in three research areas: analysis of linear endogenous regressor models with many and/or weak instruments; nonparametric models with endogenous regressors; and estimation of partially identified parameters. These areas offer good examples of the progress that has been made in econometrics.
Full-Text Access | Supplementary Materials

“Undergraduate Econometrics Instruction: Through Our Classes, Darkly,” by Joshua D. Angrist and Jörn-Steffen Pischke
The past half-century has seen economic research become increasingly empirical, while the nature of empirical economic research has also changed. In the 1960s and 1970s, an empirical economist’s typical mission was to “explain” economic variables like wages or GDP growth. Applied econometrics has since evolved to prioritize the estimation of specific causal effects and empirical policy analysis over general models of outcome determination. Yet econometric instruction remains mostly abstract, focusing on the search for “true models” and technical concerns associated with classical regression assumptions. Questions of research design and causality still take a back seat in the classroom, in spite of having risen to the top of the modern empirical agenda. This essay traces the divergent development of econometric teaching and empirical practice, arguing for a pedagogical paradigm shift.
Full-Text Access | Supplementary Materials

Symposium: Are Measures of Economic Growth Biased?

“Underestimating the Real Growth of GDP, Personal Income, and Productivity,” by Martin Feldstein
Economists have long recognized that changes in the quality of existing goods and services, along with the introduction of new goods and services, can raise grave difficulties in measuring changes in the real output of the economy. But despite the attention to this subject in the professional literature, there remains insufficient understanding of just how imperfect the existing official estimates actually are. After studying the methods used by the US government statistical agencies as well as the extensive previous academic literature on this subject, I have concluded that, despite the various improvements to statistical methods that have been made through the years, the official data understate the changes of real output and productivity. The official measures provide at best a lower bound on the true real growth rate with no indication of the size of the underestimation. In this essay, I briefly review why national income should not be considered a measure of well-being; describe what government statisticians actually do in their attempt to measure improvements in the quality of goods and services; consider the problem of new products and the various attempts by economists to take new products into account in measuring overall price and output changes; and discuss how the mismeasurement of real output and of prices might be taken into account in considering various questions of economic policy.
Full-Text Access | Supplementary Materials

“Challenges to Mismeasurement Explanations for the US Productivity Slowdown,” by Chad Syverson
The United States has been experiencing a slowdown in measured labor productivity growth since 2004. A number of commentators and researchers have suggested that this slowdown is at least in part illusory because real output data have failed to capture the new and better products of the past decade. I conduct four disparate analyses, each of which offers empirical challenges to this “mismeasurement hypothesis.” First, the productivity slowdown has occurred in dozens of countries, and its size is unrelated to measures of the countries’ consumption or production intensities of information and communication technologies (ICTs, the type of goods most often cited as sources of mismeasurement). Second, estimates from the existing research literature of the surplus created by internet-linked digital technologies fall far short of the $3 trillion or more of “missing output” resulting from the productivity growth slowdown. Third, if measurement problems were to account for even a modest share of this missing output, the properly measured output and productivity growth rates of industries that produce and service ICTs would have to have been multiples of their measured growth in the data. Fourth, while measured gross domestic income has been on average higher than measured gross domestic product since 2004—perhaps indicating workers are being paid to make products that are given away for free or at highly discounted prices—this trend actually began before the productivity slowdown and moreover reflects unusually high capital income rather than labor income (i.e., profits are unusually high). In combination, these complementary facets of evidence suggest that the reasonable prima facie case for the mismeasurement hypothesis faces real hurdles when confronted with the data.
Full-Text Access | Supplementary Materials

“How Government Statistics Adjust for Potential Biases from Quality Change and New Goods in an Age of Digital Technologies: A View from the Trenches,” by Erica L. Groshen, Brian C. Moyer, Ana M. Aizcorbe, Ralph Bradley and David M. Friedman
A key economic indicator is real output. To get this right, we need to measure accurately both the value of nominal GDP (done by the Bureau of Economic Analysis) and key price indexes (done mostly by the Bureau of Labor Statistics). All of us have worked on these measurements while at the BLS and the BEA. In this article, we explore some of the thorny statistical and conceptual issues related to measuring a dynamic economy. An often-stated concern is that the national economic accounts miss some of the value of some goods and services arising from the growing digital economy. We agree that measurement problems related to quality changes and new goods have likely caused growth of real output and productivity to be understated. Nevertheless, these measurement issues are far from new, and, based on the magnitude and timing of recent changes, we conclude that it is unlikely that they can account for the pattern of slower growth in recent years. First we discuss how the Bureau of Labor Statistics currently adjusts price indexes to reduce the bias from quality changes and the introduction of new goods, along with some alternative methods that have been proposed. We then present estimates of the extent of remaining bias in real GDP growth that stem from potential biases in growth of consumption and investment. And we take a look at potential biases that could result from challenges in measuring nominal GDP, including those involving the digital economy. Finally, we review ongoing work at BLS and BEA to reduce potential biases and further improve measurement.
Full-Text Access | Supplementary Materials

Articles
“Social Media and Fake News in the 2016 Election,” by Hunt Allcott and Matthew Gentzkow
Following the 2016 US presidential election, many have expressed concern about the effects of false stories (“fake news”), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: 1) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their “most important” source; 2) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; 3) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and 4) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.
Full-Text Access | Supplementary Materials

“Yuliy Sannikov: Winner of the 2016 Clark Medal,” by Susan Athey and Andrzej Skrzypacz
Yuliy Sannikov is an extraordinary theorist who has developed methods that offer new insights in analyzing problems that had seemed well-studied and familiar: for example, decisions that might bring about cooperation and/or defection in a repeated-play prisoner’s dilemma game, or that affect the balance of incentives and opportunism in a principal-agent relationship. His work has broken new ground in methodology, often through the application of stochastic calculus methods. The stochastic element means that his work naturally captures situations in which there is a random chance that monitoring, communication, or signaling between players is imperfect. Using calculus in the context of continuous-time games allows him to overcome tractability problems that had long hindered research in a number of areas. He has substantially altered the toolbox available for studying dynamic games. This essay offers an overview of Sannikov’s research in several areas.
Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access | Supplementary Materials