The Federal Reserve Gets Ready to Buy Corporate Bonds

A few years back, the question “should the Federal Reserve buy corporate bonds?” was a hypothetical. Now, the Fed is poised to do exactly that. For details, the New York Federal Reserve has a FAQs webpage: “Primary Market Corporate Credit Facility and Secondary Market Corporate Credit Facility” (dated May 4, 2020). The Fed notes:

In general, the availability of credit has contracted for corporations and other issuers of debt while, at the same time, the disruptions to economic activity have heightened the need for companies to obtain financing. These disruptions have been felt by even highly rated companies that need liquidity in order to pay off maturing debt and sustain themselves until economic conditions normalize.

The PMCCF will provide a funding backstop for corporate debt to Eligible Issuers so that they are better able to maintain business operations and capacity during the period of dislocation related to COVID-19. The SMCCF will support market liquidity for corporate debt by purchasing individual corporate bonds of Eligible Issuers and exchange-traded funds (ETFs) in the secondary market. …

The PMCCF will provide companies access to credit by (i) purchasing qualifying bonds as the sole investor in a bond issuance, or (ii) purchasing portions of syndicated loans or bonds at issuance. The SMCCF may purchase in the secondary market (i) corporate bonds issued by investment-grade U.S. companies; (ii) corporate bonds issued by companies that were investment-grade rated as of March 22, 2020, and that remain rated at least BB-/Ba3 at the time of purchase; (iii) U.S.-listed ETFs whose investment objective is to provide broad exposure to the market for U.S. investment-grade corporate bonds; and (iv) U.S.-listed ETFs whose primary investment objective is exposure to U.S. high-yield corporate bonds.

You may have questions. For example, how much money? 

The Fed is starting with $50 billion for the “Primary” fund and $25 billion for the “Secondary” fund. The idea is then to leverage this $75 billion with debt at a 10:1 ratio, so that the facilities could end up financing $750 billion in purchases.
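The arithmetic behind the headline number is simple; here is a minimal sketch (the dollar amounts are the initial funding described above, and the 10:1 leverage ratio is as announced):

```python
# Back-of-the-envelope arithmetic for the Fed's corporate credit facilities:
# initial equity in the two funds, leveraged 10:1 with debt.
primary_equity = 50      # $ billions, Primary Market Corporate Credit Facility
secondary_equity = 25    # $ billions, Secondary Market Corporate Credit Facility
leverage_ratio = 10      # debt-to-equity leverage announced by the Fed

total_capacity = (primary_equity + secondary_equity) * leverage_ratio
print(f"Maximum purchasing capacity: ${total_capacity} billion")  # prints "Maximum purchasing capacity: $750 billion"
```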

Isn’t the Fed prohibited by law from investing in US companies?

Yes. So technically, what’s happening is that a “special purpose vehicle” is being formed. The Fed writes: “Initially, BlackRock Financial Markets Advisory will be the investment manager, acting at the sole direction of the New York Fed on behalf of the facilities. Once the exigent need to commence operations of the facilities has passed, the investment manager role will be subject to a competitive bidding process.”

Isn’t this evading the law?

There’s a provision of the Federal Reserve Act, section 13(3), which allows “broad-based” lending under “unusual and exigent circumstances.” As a recent discussion of emergency Fed lending from the Congressional Research Service points out, “This obscure section of the act was described in a 2002 review as follows: ‘To some this lending legacy is likely a harmless anachronism, to others it’s still a useful insurance policy, and to others it’s a ticking time bomb of political chicanery.’ … Section 13(3) authority was not used to extend credit to nonbanks from 1936 to 2008. This authority was then used extensively beginning in 2008 in response to the financial crisis—in very different ways than it had been used previously.” In short, section 13(3) gave the Fed the power to extend its lending in 2008. The Dodd-Frank legislation in 2010 left 13(3) in place. It’s now being used again in a different way.

How long will this go on?

The Fed writes: “The CCFs will cease purchasing eligible corporate bonds, eligible syndicated loans, and eligible ETFs no later than September 30, 2020, unless the CCFs are extended by the Board of Governors of the Federal Reserve System and the Department of the Treasury.”

Was it a good idea to do this? 

One fundamental job of any central bank, including the Federal Reserve, is to be a “lender of last resort.” The basic idea here is that there can be times of panic, when financial markets enter a vicious circle in which lots of borrowers are defaulting because no one is willing to lend, and no one is willing to lend because lots of borrowers are defaulting. The central bank can break through this logjam by being willing to lend, or by being willing to purchase debt-based financial instruments at a time of financial difficulty. Indeed, the central bank can often make money by being the one willing to lend in the depths of a crisis. For example, the loans the Fed made back in 2008-9 were all repaid in full, and its purchases of mortgage-backed securities at that time have held their value.

Moreover, when the Fed announced this policy on March 23, it had an enormous and immediate effect in corporate credit markets. A number of companies–Ford, Boeing, Carnival, and others–were desperate to borrow, but having a hard time finding lenders willing to lend at a moderate interest rate. After the Fed announced its policy, which at that moment consisted only of a stated future willingness to buy corporate debt, credit markets for corporate lending loosened. The famous investor Warren Buffett offered his view that credit markets had been approaching a “total freeze,” and lavished copious praise on the Fed, saying “Jay Powell, in my view, and the Fed board belong up there on the pedestal.”

But there are also dangers here. There are sound reasons to have some separation between the Fed and individual companies, and it seems likely that there will be political pressure, sooner or later, for the Fed to buy debt from the companies with the best lobbyists. In addition, I’ve noted in previous posts over the last few years that the US (and the world) has been experiencing an upsurge of debt, and that the riskiness of that debt seems to be rising (for example, here and here).

Thus, the Fed faces a balancing act here. Preventing what Buffett called a “total freeze” in credit markets so that companies can keep functioning during the worst of the pandemic can be a sensible “lender of last resort” policy. Maybe the Fed’s promise of being willing to intervene in these corporate credit markets will do most of the work, and at the end of September the Primary Market Corporate Credit Facility and Secondary Market Corporate Credit Facility can be officially closed down without much additional lending. But having the central bank accumulate large quantities of already shaky corporate debt, especially if some of those decisions seem driven by political considerations, runs the danger of weakening the Fed, both financially and in terms of its independence from the pressures of day-to-day politics.

Why Is the Health Care System Going Broke During a Pandemic?

You might think that a pandemic would lead to larger revenues for the health care industry. But one of the stranger aspects of the lockdown/shelter-in-place reaction to the pandemic is that in order to help society address a severe health problem, it seemed necessary to drive the health care industry into deep financial hardship. Dhruv Khullar, Amelia M. Bond, and William L. Schpero discuss “COVID-19 and the Financial Health of US Hospitals” in the May 4 issue of the Journal of the American Medical Association. In pretzel-shaped logic, they write:

To limit the spread of disease and create additional inpatient capacity and staffing, many hospitals are closing outpatient departments and postponing or canceling elective visits and procedures. These changes, while needed to respond to the COVID-19 pandemic, potentially threaten the financial viability of hospitals, especially those with preexisting financial challenges and those heavily reliant on revenue from outpatient and elective services.

Let’s unpack that for a moment. Two goals are listed here. Focus first on the goal of seeking to “create additional inpatient capacity and staffing.” As it turned out, closing outpatient departments and canceling elective visits was not actually “needed to respond to the COVID-19 pandemic.” Yes, there were some early predictions that it might be needed. But in fact, it wasn’t. The other goal is “to limit the spread of disease.” I’m certainly no expert here. But it seems to me that hospitals should be pretty good at limiting the spread of disease–it might even be called one of their core competencies. And at least to me, it’s not easy to follow the logic that a hospital should not treat outpatients and must cancel elective procedures out of fear of spreading the same disease that the hospital expects will fill up all its bed space.

A May 2020 report from the American Hospital Association, “Hospitals and Health Systems Face Unprecedented Financial Pressures Due to COVID-19,” describes the situation this way (footnotes omitted):

To increase personal and public safety across the country while conserving PPE, hospitals moved to cancel nonemergency procedures. At the same time, many Americans have forgone care, including primary care and other specialty care visits. On March 18, the Centers for Medicare & Medicaid Services (CMS) recommended that most elective surgeries and non-essential medical, surgical and dental procedures be cancelled or delayed during the COVID-19 outbreak. Since then, several governors mandated cancellation of non-essential services in their state.

These measures have resulted in adjusted discharges – a measure that accounts for both inpatient and outpatient services – decreasing by 13% from the previous year. Health care providers have raised concerns that patients are forgoing important care, such as chronic disease management, which can further jeopardize their health. An additional consequence of these factors has been steep reductions in revenue for all hospitals and health systems across the country. … This report attempts to quantify these effects over the short-term, which are limited to the impacts over a four-month period from March 1, 2020 to June 30, 2020. Based on these analyses, the AHA estimates a total four-month financial impact of $202.6 billion in losses for America’s hospitals and health systems, or an average of $50.7 billion per month.

Thus, in order to prepare the US hospital system for an outbreak of coronavirus, we made policy decisions with the effect of cutting immediate revenues to that same hospital system by roughly $50 billion per month over that four-month period.
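For what it's worth, the report's monthly figure is just the four-month total divided evenly:

```python
# Check the AHA report's monthly average against its four-month total.
total_losses = 202.6   # $ billions, losses from March 1 through June 30, 2020
months = 4

monthly_average = total_losses / months
print(f"${monthly_average:.2f} billion per month")  # prints "$50.65 billion per month"
```

which rounds to the $50.7 billion per month figure the AHA cites.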

Of course, it’s not just the financial cost. Health care is not being delivered to those who would have been outpatients. “Elective” surgery doesn’t mean “unnecessary” surgery, only that the timing of that surgery can be adjusted to some extent. Primary care patients and checkups have taken a hit, too. A potential overload of coronavirus cases was prioritized over actual, immediate patients. In addition, the financial losses don’t apply only to hospitals. For example, nursing homes and care facilities that help people rehabilitate after surgery are taking a hit, too.

When this situation is pointed out, those writing from the health care profession seem determined to defend the policy. We can revisit, some other time, the interactions of good intentions and uncertain information, and what the author Saul Bellow used to call the Good Intentions Paving Company. But imposing large financial costs on the US health care system, and on the health of other patients, out of fear of a coronavirus overload has turned out to be a mistake, and you have to acknowledge mistakes to learn from them.

Paul Romer: From Pin Factory to Chaos Monkey

Isaac Chotiner has a short interview with Paul Romer in the New Yorker (May 3, 2020, “Paul Romer on How to Survive the Chaos of the Coronavirus”). There are lots of interesting comments, but I was especially struck by Paul’s point about how the pandemic poses a challenge for economists thinking about the benefits of specialization and the division of labor. The usual concern raised about the division of labor, by economists since Adam Smith and Karl Marx, is that workers can be trapped for life in mindless repetitive labor. However, Romer raises a different concern, about the tradeoff between specialization and resiliency:

The gains from specialization go all the way back to Adam Smith. He talked about the advantage of a bigger market being that we could have a finer division of labor and be more specialized. There’s this great story about the pin factory where people do various different pieces of the job of making pins. So, we’ve been very attuned to the efficiency gains that come from finer and finer division of labor and specialization. What we’ve underestimated is the systemic risk that that very finely tuned system of specialization exposes us to. And so I think we will start to ask whether there are ways that we could build some more robustness into our whole system.

If I can use an analogy, Netflix used this thing they called the Chaos Monkey, which would go in and just break servers, break routers, just take them offline and then make sure that the Netflix infrastructure system could still keep working. I think, from a public-policy perspective, it’d be good if we started having some drills where we just break things, like, “O.K., you can’t import that input into your pharmaceutical process for six months,” or, “You can’t rely on this mechanism.” We may need a little bit of a Chaos Monkey to help make sure that we’re all building a little bit more resiliency into the things that we do.
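Romer's drill idea can be illustrated with a toy sketch: for each input a process depends on, knock out one supplier and see which inputs are left with no source at all. This is purely a hypothetical illustration (the supplier names and structure are invented, not drawn from Netflix's actual Chaos Monkey tool):

```python
import random

# A toy "Chaos Monkey" drill in the spirit of Romer's suggestion: for each
# input a process depends on, disable one supplier at random and report
# which inputs are left with no surviving source. All names are hypothetical.
suppliers = {
    "active_ingredient": ["plant_a", "plant_b"],
    "packaging": ["vendor_x"],          # a single point of failure
    "distribution": ["carrier_1", "carrier_2"],
}

def chaos_drill(supplier_map, rng=random):
    """Disable one random supplier per input; return inputs with none left."""
    failures = []
    for input_name, sources in supplier_map.items():
        knocked_out = rng.choice(sources)
        surviving = [s for s in sources if s != knocked_out]
        if not surviving:
            failures.append(input_name)
    return failures

print(chaos_drill(suppliers))  # prints ['packaging']: the lone-supplier input
```

The point of the exercise is that any input with only one supplier fails the drill every time, which is exactly the kind of fragility Romer suggests surfacing before a real crisis does.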

Journal of Economic Theory: Fifty Years, Fifty Articles

The Journal of Economic Theory is commemorating 50 years of publication with a special issue that includes 50 of the most prominent papers the journal has published, all of which appear to be open access. As seems appropriate for such lists, only one of the included papers is from the last decade. Instead, these JET papers marked key inflection points for ideas: guided by the structure and insights of these papers, each went on to generate a substantial follow-up literature of its own. The well-educated economist will already be somewhat familiar with the core findings of many of these papers, so this issue offers a chance to get reacquainted with old favorites and make some new friends, too.

Karl Shell, the first editor of JET, also contributes a short essay with some memories of the founding of the journal, “Fifty years of the Journal of Economic Theory,” including that time when a mathematical economist was “an economist who knew a bit of calculus” and when a number of leading journals “did not do dots”–that is, the journal would not allow an author to use the common mathematical symbol of putting a dot over a variable to indicate differentiation over time. Shell writes:

About 52 years ago, the Academic Press (AP) mathematics editor, Edwin Beschler, and I met for lunch at the MIT faculty club. We sat at a table for two, not at the usual economics round table, at which Paul Samuelson held court.

Edwin is a central character in the JET story. He was a professional actor. He had been a math undergrad. He was a splendid science editor. Much of what I know about publishing is from Edwin. … At the lunch, Edwin said that a committee of economists suggested that he approach me about the possibility of editing a new journal to be called “The Journal of Mathematical Economics”. I expressed interest but only if AP would accept the alternative title: Journal of Economic Theory. Edwin called later to report that the committee would accept my proposed title. He invited me to talk turkey at the AP offices at 111 Fifth Avenue in New York. AP and I struck the deal. …

What is wrong with “Mathematical Economics”?

Nothing!

Back in the sixties, the term could be vague. A mathematical economist might be an economist who knew a bit of calculus. Or, he might be someone teaching microeconomics, remedial math, and/or econometric theory. These definitions became unsupportable. We wanted to focus on high-brow and middle-brow and even low-brow economic theory. (In Bob Solow’s definition, the brow-level was a measure of the level of the math. I think Bob is proud of being thought of as a middle-brow economist.) On the other hand, we strove to include papers on related mathematics and computation. Math is an essential tool, but our focus was on economic theory. …

What were the publication options for economic theorists before JET?

  • Econometrica, the ES society journal, was — and is — an excellent outlet for economic theory, mathematical economics, and econometrics. It was plagued with senior staffing problems in the 60’s. The editor tendered his resignation to become the dean at Northwestern. An unverified story has it that an ES committee was tasked with naming a new editor. According to the story, the committee would gather from round the world and work out a list of top candidates. None of the candidates turned out to be willing. And so on and on seriatim. When the editor became the Northwestern president, submissions were boxed awaiting a new editor. The boxed submissions provided opportunity for JET.
  • The Review of Economic Studies was an excellent outlet for growth theory and other theory. It was a UK society journal.
  • The JPE is the house journal of Chicago. It was a good place to publish. They were not closed to theory.
  • The QJE was the house journal of Harvard. They were insular. I published with Joe Stiglitz an article on the allocation of investment. We used superior dots to denote time differentiation. We had to find another symbol because HU Press did not do dots.
  • The AER is the society journal of the AEA. In my paper on inventive activity and growth, they did attempt dots, but depending on the position in the production line some or all of the dots broke off. In the early years of JET, the AER editor used a printed postcard for rejections: “Your paper put me to sleep. Submit it to JET.” The AER editor rejected the Lucas classic included in this anniversary issue.
  • The IER was the house journal of Penn and Osaka. The IER was dedicated to quantitative economics, including mathematical economic theory.

So the field was not completely wide open, but it was a good time to start a journal in our rapidly growing field.

China’s Vanishing Trade Surplus

I wrote “China’s vanishing trade surplus: Now you see it, now you don’t,” for the most recent issue of the Milken Institute Review (Second Quarter 2020). Here are the opening paragraphs:

When you think of China, images of N95 face masks, deserted streets and makeshift hospitals no doubt come to mind. But coronavirus notwithstanding, the dominant reality of contemporary China is its formidable economic footprint on the global economy — and its legendary trade surplus, in particular.

We all know that China’s economic success depends on running gigantic trade surpluses. Well, not any more. China’s surplus has been small relative to the size of its economy for a decade and has been approaching zero in the past few years. Indeed, a November 2019 working paper from the International Monetary Fund predicted that China would begin to run a small current account trade deficit in coming years.

Here I’ll explain why China’s trade surpluses mushroomed in size from 2001 to 2007, but then quickly slipped back to pre-2001 levels. The chronology offers some insight into the fundamental drivers of trade balances (in China or any other economy) and why China’s trade balance is now headed toward deficit. I’ll also opine on what this shift is likely to mean for the ongoing U.S.-China trade war — and for the world economy.

Some Thoughts on Commodification

“Commodification” is a high-sounding word that has several quite different meanings.

One meaning has Marxist overtones, because in fact it traces to the discussion in Karl Marx’s Capital: for example, see “Section 4: The Fetishism of Commodities and the Secret Thereof.” Marx argues that “the products of labour become commodities.” He offers the homely example of a table. He argues that when commodities are bought and sold, society tends to forget that they only have value because of the labor embedded within them, and instead treats these inanimate objects as if they were meaningful or valuable in themselves (“fetishization”). In this way, Marx argues, the commodification of labor conceals the underlying reality that all value is produced by labor, as well as the social relationships between classes.

A second meaning of commodification, from the Merriam-Webster dictionary, is described as the “Financial Definition”: “Commodification refers to a good or service becoming indistinguishable from similar products. … To be considered a commodity, an item must satisfy three conditions: 1) it must be standardized and, for agricultural and industrial commodities, in a ‘raw’ state; 2) it must be usable upon delivery; and 3) its price must vary enough to justify creating a market for it.” Examples given include commodities like corn and soybeans, but also financial instruments like mortgages that can be bought and sold.

A third meaning of commodification is nicely phrased by Stephen Clowney in his article “Does Commodification Corrupt? Lessons from Paintings and Prostitutes” (Seton Hall Law Review, 2020, vol. 50, issue 4). He writes: “Commentators fear that when we treat priceless things like fungible commodities—reducing them to dollar figures, putting them in advertisements, and stocking them on shelves—it becomes difficult to appreciate their higher order values.” Notice that a concern over whether, say, distinctive works of art become underappreciated when they are bought and sold in monetary terms is quite different from whether markets for interchangeable soybeans and mortgages work well. In turn, both of these are quite distinct from whether it is useful to think of economic output as nothing more than a manifestation of labor.

Clowney’s essay discusses a number of cases where concerns have been raised about “commodification,” including examples familiar to many economists, like paying money for blood donations or organ donations. He interviewed a group of 20 professional art appraisers–that is, people whose job is to put a monetary value on art. His questions were meant to explore whether this process in some way affected or reduced their aesthetic appreciation of the art. He writes (footnotes omitted):

Does commodification corrupt? The central finding of my research is that putting prices on creative masterworks does not diminish appraisers’ ability to experience the transcendent values of art. Of the twenty assessors interviewed for this study, not one reported that market work disfigured their ability to enjoy the emotional, spiritual, and aesthetic qualities of artistic masterworks. In fact, most appraisers insisted they can easily and completely compartmentalize their professional duties from their private encounters with art. …  Contrary to the predictions of market skeptics, the appraisers in this study spoke with joyful enthusiasm about their experiences viewing exceptional works of art. Even the most senior appraisers—those who have monetized thousands and thousands of objects—remain passionate consumers of art in their personal lives. The professionals I interviewed all reported visiting museums for pleasure, and many collect art to display in their homes. As a group, they described seeing beautiful pieces as “a charge,” “a rush,” “a thrill,” “fabulous,” “a giggle fest,” “exciting,” and “delight[ful].”

Clowney concludes that the “market skeptics have overstated the power of commerce to corrupt the meaning of sacred goods.” While I agree with that conclusion, saying that a concern is “overstated” is not the same as saying that the concern isn’t a real and meaningful one.

Economics is rigorous in stating that the monetary price of an object is not a measure of its value in a deeper sense. An early famous example is Adam Smith’s diamond-water paradox, in which he explains why some objects with extraordinarily high and even life-preserving value, like water, have low prices, while other objects that are just decorative, like diamonds, have high prices. Smith argues that value-in-exchange, which is an outcome of conditions of supply and demand, is a different concept than value-in-use.

In a similar spirit, one might plausibly argue that concerns over “commodification” miss the point that “value-in-exchange” is not the same as “value-in-appreciation.” Just as it would be a shortcoming of empathy and awareness to treat a painting as nothing more than a price tag, it would be a moral shortcoming to view another human being as nothing more than the exchange-value of their commodified labor. Similarly, it would be a category error to view human workers as interchangeable soybeans. Clowney’s art appraisers say that they can “easily and completely compartmentalize their professional duties from their private encounters with art.” For many economic purposes, a compartmentalization between professional and private is appropriate.

And yet, and yet. Markets do reveal attitudes about how others perceive value, and people are social animals. Even among the art appraisers, for example, one suspects that their compartmentalization is incomplete, in the sense that their pulses beat a little faster when art prices are rising or falling dramatically.

As befits a term with several distinct meanings, “commodification” is worth deeper thought. Ultimately, it seems to me that concerns about commodification may be less likely to hold true in cases of highly skilled workers or high-quality or distinctive products, because in such cases the monetary prices are likely to reinforce and support the appreciation of these skills, qualities, and details. Conversely, concerns about commodification are more of an issue with lower-skilled workers and low-quality or extremely similar products. In markets for commodities like soybeans, crude oil, and mortgage-based securities, we can just sit back and appreciate how these markets function smoothly. But when lower-skilled workers are treated as nothing more than the market value of their output, this seems troubling.

Even in a case like paying organ donors, the main concern about commodification does not seem to me to be a scenario in which a few donors are compensated (perhaps by health insurance) while recipients of those organs personally recognize the virtue of the donors. The nightmare scenario is medical assembly lines to extract organs, backed by social or government pressure such that low-income individuals are expected to raise money by doing so.

I also wonder if people (meaning me) may have a tendency to undervalue the pleasures of what is inexpensive or free, because the low or zero price put on these goods does not reinforce their value. After a long walk on a hot day, does a glass of tap water over ice taste as good as a bottled water from the refrigerator? Do I give enough value to sitting on my own lawn furniture at my own house? Serious thinkers from Samuel Johnson to Blaise Pascal have asked whether people are likely to chase diversions, rather than seek happiness in being at home.

Homage: I ran across a mention of Clowney’s article in a post by Alex Tabarrok at the ever-useful Marginal Revolution website.

US Population Growth at Historic Lows

The 2010s are the slowest decade for population growth in US history, slower even than the 1930s. And it looks as if the next few decades will be even slower. William Frey offers an historical perspective in “Demography as Destiny” (Milken Institute Review, Second Quarter 2020, pp. 56-63).

For a forward-looking projection, Jonathan Vespa, Lauren Medina, and David M. Armstrong have written “Demographic Turning Points for the United States: Population Projections for 2020 to 2060” (US Census Bureau, issued March 2018, revised February 2020). The US population growth rate in the decade of the 2010s was 7.1%, as shown in the graph above. The Census predictions are for US population growth of 6.7% in the 2020s, 5.2% in the 2030s, and 4.1% in the decade of the 2040s. They write (references to figures omitted):

The year 2030 marks a demographic turning point for the United States. Beginning that year, all baby boomers will be older than 65. This will expand the size of the older population so that one in every five Americans is projected to be retirement age. Later that decade, by 2034, we project that older adults will outnumber children for the first time in U.S. history. The year 2030 marks another demographic first for the United States. Beginning that year, because of population aging, immigration is projected to overtake natural increase (the excess of births over deaths) as the primary driver of population growth for the country. As the population ages, the number of deaths is projected to rise substantially, which will slow the country’s natural growth. As a result, net international migration is projected to overtake natural increase, even as levels of migration are projected to remain relatively flat. These three demographic milestones are expected to make the 2030s a transformative decade for the U.S. population. 

Barring some unexpected demographic shift, it’s safe to say that the US economy is headed for uncharted demographic waters, which in turn may shift economic patterns. Is it just a coincidence that when population growth rates started slowing down after the 1960s, so did rates of US economic growth? And that the spurt of US economic growth in the 1990s coincided with an upward bump in the population growth rate?

For example, higher rates of population growth in the past often meant an expanding market for goods like houses and cars. Conversely, it seems plausible that as population growth rates fall, and the proportion of elderly rises, the house-building industry will become a smaller share of the economy. However, one can imagine scenarios where people use more living space, or where it becomes more common to own a faraway property that can be rented out much of the year, in which case construction continues at a similar pace. A slowing rate of population growth, especially in working-age adults, should in theory help the position of labor in the US economy. But of course, one can imagine scenarios where many people decide to work at least part-time until later ages.

It’s not foreordained that a slowdown in population growth rates will cause per capita growth to fall. But combined with an aging population, it will surely cause dramatic shifts in economic patterns and sources of growth over time.

1957: When Machines that Think, Learn, and Create Arrived

Herbert Simon and Allen Newell were pioneers in artificial intelligence: that is, they were among the first to think about the issues involved in designing computers that were not just extremely fast at doing the calculations for well-structured problems, but that could learn from their own mistakes and teach themselves to do better. Simon and Newell shared the Turing Award, sometimes referred to as the “Nobel prize in computing,” in 1975, and Simon won the Nobel prize in economics in 1978.

Back in 1957, Simon and Newell made some strong claims about the near-term future of these new steps in computing technology. In a speech co-authored by both, but delivered by Simon, he said:

[T]he simplest way I can summarize the situation is to say that there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until–in a visible future–the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

The lecture was published in Operations Research, January-February 1958, under the title “Heuristic Problem Solving: The Next Advance in Operations Research” (pp. 1-10). Re-reading the lecture today, one is struck by the extreme changes that these extremely well-informed authors expected to occur within a horizon of about 10 years. However, about 60 years later, despite extraordinary changes in computing technology, software, and information technology more broadly, we are still some distance from the future that Simon and Newell predicted. Here’s an additional flavor of the Simon and Newell argument from 1957.

Here is their admission that up to 1957, computing power and operations research had focused mainly on well-structured problems:

In short, well-structured problems are those that can be formulated explicitly and quantitatively, and that can then be solved by known and feasible computational techniques. … Problems are ill-structured when they are not well-structured. In some cases, for example, the essential variables are not numerical at all, but symbolic or verbal. An executive who is drafting a sick-leave policy is searching for words, not numbers. Second, there are many important situations in everyday life where the objective function, the goal, is vague and nonquantitative. How, for example, do we evaluate the quality of an educational system or the effectiveness of a public relations department? Third, there are many practical problems–it would be accurate to say ‘most practical problems’–for which computational algorithms simply are not available.

If we face the facts of organizational life, we are forced to admit that the majority of decisions that executives face every day and certainly a majority of the very most important decisions lie much closer to the ill-structured than to the well-structured end of the spectrum. And yet, operations research and management science, for all their solid contributions to management, have not yet made much headway in the area of ill-structured problems. These are still almost exclusively the province of the experienced manager with his ‘judgment and intuition.’ The basic decisions about the design of organization structures are still made by judgment rather than science; business policy at top-management levels is still more often a matter of hunch than of calculation. Operations research has had more to do with the factory manager and the production-scheduling clerk than it has with the vice-president and the Board of Directors.

But by 1957, the ability to solve ill-structured problems had nearly arrived, they wrote:

Even while operations research is solving well-structured problems, fundamental research is dissolving the mystery of how humans solve ill-structured problems. Moreover, we have begun to learn how to use computers to solve these problems, where we do not have systematic and efficient computational algorithms. And we now know, at least in a limited area, not only how to program computers to perform such problem-solving activities successfully; we know also how to program computers to learn to do these things.

In short, we now have the elements of a theory of heuristic (as contrasted with algorithmic) problem solving; and we can use this theory both to understand human heuristic processes and to simulate such processes with digital computers. Intuition, insight, and learning are no longer exclusive possessions of humans: any large high-speed computer can be programmed to exhibit them also.

I cannot give here the detailed evidence on which these assertions–and very strong assertions they are–are based. I must warn you that examples of successful computer programs for heuristic problem solving are still very few. One pioneering effort was a program written by O. G. Selfridge and G. P. Dinneen that permitted a computer to learn to distinguish between figures representing the letter O and figures representing A presented to it ‘visually.’ The program that has been described most completely in the literature gives a computer the ability to discover proofs for mathematical theorems–not to verify proofs, it should be noted, for a simple algorithm could be devised for that, but to perform the ‘creative’ and ‘intuitive’ activities of a scientist seeking the proof of a theorem. The program is also being used to predict the behavior of humans when solving such problems. This program is the product of work carried on jointly at the Carnegie Institute of Technology and the Rand Corporation, by Allen Newell, J. C. Shaw, and myself.

A number of investigations in the same general direction–involving such human activities as language translation, chess playing, engineering design, musical composition, and pattern recognition–are under way at other research centers. At least one computer now designs small standard electric motors (from customer specifications to the final design) for a manufacturing concern, one plays a pretty fair game of checkers, and several others know the rudiments of chess. The ILLIAC, at the University of Illinois, composes music, using, I believe, the counterpoint of Palestrina; and I am told by a competent judge that the resulting product is aesthetically interesting.

So where would what we now call “artificial intelligence” be in 10 years?

On the basis of these developments, and the speed with which research in this field is progressing, I am willing to make the following predictions, to be realized within the next ten years: 

1. That within ten years a digital computer will be the world’s chess champion, unless the rules bar it from competition.
2. That within ten years a digital computer will discover and prove an important new mathematical theorem.
3. That within ten years a digital computer will write music that will be accepted by critics as possessing considerable aesthetic value.
4. That within ten years most theories in psychology will take the form of computer programs, or of qualitative statements.

It is not my aim to surprise or shock you, if indeed that were possible in an age of nuclear fission and prospective interplanetary travel. But the simplest way I can summarize the situation is to say that there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until in a visible future the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

I love the casual mention–in 1957!–that humans are already in the age of nuclear fission and prospective interplanetary travel. Do we still live in the age of nuclear fission and prospective interplanetary travel? Or did we leave it behind somewhere along the way and move to another age?

It’s not that the predictions by Simon and Newell are necessarily incorrect. But many of these problems are evidently harder than they thought. For example, computers are now stronger chess players than humans, but it took until 1997–with vastly more powerful computers after many doublings of computing power via Moore’s law–before IBM’s Deep Blue beat Garry Kasparov in a six-game match. Just recently, computer programs have been developed that can meet a much tougher conceptual challenge–consistently drawing, betting, and bluffing to beat a table of five top-level human players at no-limit Texas hold ’em poker.

Of course, overoptimism about artificial intelligence back in 1957 does not prove that similar optimism at present would be without foundation. But it does suggest that those with the highest levels of imagination and expertise in the field may be so excited about its advances that they have a tendency to understate its challenges. After all, here in 2020, 63 years after Simon and Newell’s speech, most of what we call “artificial intelligence” is really better described as “machine learning”–that is, the computer can look at data and train itself to make more accurate predictions. But we remain a considerable distance from the endpoint described by Simon, that “the range of problems they [machines] can handle will be coextensive with the range to which the human mind has been applied.”

Homelessness, Temperatures, Shelter Rate, Bed Rate

How does the homelessness situation vary across states? Brent D. Mast offers some basic facts in “Measuring Homelessness and Resources to Combat Homelessness with PIT and HIC Data” (Cityscape, vol. 22, no. 1, published by the US Department of Housing and Urban Development, pp. 215-225).

The article offers a useful quick overview and critique of the PIT and HIC data. The Point-in-Time (PIT) data is a national survey of the number of homeless, conducted each year during the last 10 days in January. It is widely believed to understate the total number of homeless–a group that is, of course, difficult to count–but if the degree of understatement is similar from year to year, it can still offer a useful measure. The Housing Inventory Count (HIC) data is an “annual inventory of the beds, units, and programs” to serve the homeless.

I unabashedly admit that part of what caught my eye in Mast’s article was his innovative figures for displaying this data. For example, the first column of figures here shows the January temperature (when the PIT survey is done) for each state. The states are in groups of four. There’s a tiny map of the US just to the left, and the colors of the four states highlighted on each tiny map match the colors of the four dots. Thus, the red dot in the top cluster of four states is Alaska, the red dot in the next cluster of four states is Wisconsin, and so on.

As the figure illustrates, the correlation between January temperature (neatly organized from top to bottom) and the level of homelessness (much more scattered, in the far-right column) isn’t very large. Yes, down at the bottom of the table there are states with moderate January temps that have high homelessness, like California, Nevada, Washington, and Oregon. But there are also states with moderate January temps that have lower levels of homelessness, like Florida, Arizona, Louisiana, and Texas. Conversely, there are states with colder January temps, like New York, Massachusetts, and Alaska, that nonetheless have fairly high homelessness rates.

The middle column shows what share of the homeless are in some kind of shelter. Here, the western states of California, Oregon, and Nevada stand out for relatively low rates. Other states with colder Januaries but high rates of homelessness, like New York, Massachusetts, and Alaska, have a much higher share of the homeless in shelters. As Mast points out: “This inverse relationship could reflect less necessity for providers to shelter homeless populations in warmer winter climates, or a decreased preference of homeless people to seek winter shelter in warmer states.”

The next figure shows the ratio of beds to the total homeless population. Again, the first column ranks the states by average January temperature. Don’t be too concerned that the “bed ratio” exceeds 100% for some states. Remember that this method of counting the homeless–done once a year–probably understates the total, and so some states are presumably planning for the actual number of homeless who show up on the worst nights.

In general, states with colder Januaries do tend to have a higher ratio of beds per homeless person. However, this figure shows that the three states with very low shelter rates for the homeless–California, Oregon, and Nevada–are also the three states with very low ratios of beds relative to the number of homeless. This pattern suggests that the lack of beds may help to explain the low proportion of homeless people in shelters in these states. As Mast gently puts it: “The positive relationship between the proportion sheltered and the bed ratio reflects the fact that one might expect a greater proportion of homeless persons to be sheltered when more beds are available relative to the homeless population.”
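The two ratios at work here are simple to compute from PIT-style and HIC-style counts. Here is a minimal sketch in Python, with invented numbers for two hypothetical states, showing how one state can have a bed ratio above 100% while another has both a low shelter rate and a low bed ratio:

```python
# Hypothetical illustration of the two ratios discussed above: the share of a
# state's homeless population that is sheltered, and the bed count relative to
# the counted homeless population. All numbers are invented for this sketch.
states = {
    # state: (total homeless counted, sheltered homeless, available beds)
    "State A (cold January)": (10_000, 9_500, 11_000),
    "State B (mild January)": (10_000, 4_000, 4_500),
}

for name, (total, sheltered, beds) in states.items():
    shelter_rate = sheltered / total  # share of the homeless in shelters
    bed_ratio = beds / total          # beds per counted homeless person
    print(f"{name}: shelter rate {shelter_rate:.0%}, bed ratio {bed_ratio:.0%}")
```

State A ends up with a bed ratio over 100%, which is plausible if the once-a-year count understates the true population; State B shows the pattern of the low-shelter-rate states, where few beds accompany a low sheltered share.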

Some Thoughts on US Pharmaceutical Prices and Markets

There’s a broadly shared idea of how US pharmaceutical markets ought to work, I think. Innovative companies invest large amounts in trying to develop new drugs. Sometimes they succeed, and when they do, the company can then earn high profits for a time. But eventually, the new products will go off patent, and inexpensive generic equivalents will be produced.

Describing the basic story in that way also helps to organize the common complaints about what seems to be wrong. During the time period when new drugs are under patent, it seems as if their prices rise so quickly and to such high levels that it feels exploitative. Companies focus on taking actions to put off the time when competition from generic drugs can enter the market. Meanwhile, we find ourselves in the position of desperately wanting many companies to make large efforts to develop the new drugs and vaccines we sorely need to address COVID-19, and we know that only a few will ultimately succeed. However, a number of politicians feel compelled to proclaim that if and when such successes occur, those companies should stand ready to provide these newly invented and effective drugs and vaccines at the lowest possible cost and not expect to make much profit in doing so. We are simultaneously discovering that we are highly dependent on a limited number of foreign manufacturers for our supply of workhorse generic drugs, and the Food and Drug Administration has announced that the US healthcare system faces shortages of about 100 drugs.

In short, the problems of the US pharmaceutical industry go far beyond the standard complaint about high prices. To help disentangle these issues, the Journal of the American Medical Association has a group of research and viewpoint/editorial articles in the issue of March 3, 2020. Here, I’ll just list some themes from these articles.

1) How much are drug prices rising? 

If you look just at branded medications, the prices are up substantially. As Chaarushena Deb and Gregory Curfman write in their essay “Relentless Prescription Drug Price Increases”:

The pharmaceutical industry just announced prescription drug price increases for 2020. According to the health care research firm 3 Axis Advisors, prices were increased for nearly 500 drugs, with an average price increase of 5.17%. To mitigate public criticism, most of the price increases were kept below 10%. The list price of the world’s bestselling drug, adalimumab (Humira), was increased by AbbVie by 7.4% for 2020, which adds to a 19.1% increase in list price for years 2018 and 2019.

For economists, of course, there’s always an “on the other hand.” If you combine prices for both branded and generic prescription drugs, and take into account how cheaper generics are displacing branded drugs for certain uses, the overall price level of prescription drugs in the US actually fell last year, according to the Consumer Price Index.
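The mechanics here are worth a quick illustration. A minimal sketch, using invented prices and prescription shares, shows how the average price actually paid can fall even while the branded list price rises, once generic substitution shifts the quantity weights:

```python
# Stylized two-period example (all numbers invented): the branded price rises
# 7%, the generic price is flat, but prescriptions shift toward the generic.
p_brand_0, p_brand_1 = 100.0, 107.0         # branded price per prescription
p_gen_0, p_gen_1 = 10.0, 10.0               # generic price per prescription
share_brand_0, share_brand_1 = 0.50, 0.35   # branded share of prescriptions

# Average price paid per prescription in each period, weighted by shares.
avg_0 = share_brand_0 * p_brand_0 + (1 - share_brand_0) * p_gen_0
avg_1 = share_brand_1 * p_brand_1 + (1 - share_brand_1) * p_gen_1

print(f"average price paid: {avg_0:.2f} -> {avg_1:.2f}")
# The branded price rose, yet the average price paid per prescription fell.
assert p_brand_1 > p_brand_0 and avg_1 < avg_0
```

With these invented weights, the average falls from 55.00 to roughly 43.95: the substitution effect swamps the branded price increase, which is the kind of composition effect an overall price index picks up.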

In addition, as Kenneth C. Frazier points out in his essay, “Affording Medicines for Today’s Patients and Sustaining Innovation for Tomorrow,” net drug prices (after manufacturer discounts) have been stable since 2015; if you take new drugs into account, net drug prices have been falling since 2015. But as Frazier also notes, the “net” drug price isn’t the same as what patients actually pay.

Manufacturer discounts from list prices are generally not passed on to patients, and many patients are exposed to the full list price of drugs before they reach their deductibles, out-of-pocket spending caps (if they have one), or both. In fact, about 50% of the total amount spent on branded prescription drugs is retained by payers, hospitals, distributors, and others in the supply chain, not the manufacturer.

Thus, the problem of patients facing high drug prices isn’t all about what the manufacturer is charging: it’s also about the add-on costs from the rest of the supply chain.

2) How high are profits for the US pharmaceutical industry? 

David M. Cutler asks “Are Pharmaceutical Companies Earning Too Much?” As he points out, one of the research studies in the issue “showed that from 2000 to 2018, the median net income margin in the pharmaceutical industry was 13.8% annually, compared with 7.7% in the S&P 500 sample.” Another of the studies in the issue suggests that it costs an average of nearly $1 billion in research and development expenditures to bring a new drug to market (a number that includes false starts and failed efforts in the total costs).

On the other hand, as Cutler also points out:

Like several other industries (eg, software and motion picture production), the pharmaceutical industry has very high fixed cost and very low marginal cost. It takes substantial investment to discover a drug or develop a complex computer code, but the cost of producing an extra pill or allowing an extra download is minimal. The way that firms recoup these fixed costs is by charging above cost for the product once it is made. If these upfront costs are not accounted for, the return on the marketed good will look very high.
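Cutler’s point about fixed versus marginal costs can be made concrete with a bit of arithmetic. A minimal sketch with invented numbers (roughly the $1 billion development cost cited above, plus a trivially small marginal cost per pill) shows how the average cost per pill depends almost entirely on volume:

```python
# Stylized average-cost calculation (numbers invented for illustration): a
# drug with a large upfront R&D cost and a tiny marginal cost of production.
FIXED_COST = 1_000_000_000   # upfront R&D cost, in dollars
MARGINAL_COST = 0.10         # cost of producing one more pill, in dollars

# Average cost per pill = fixed cost spread over volume + marginal cost.
for pills in (10_000_000, 100_000_000, 1_000_000_000):
    avg_cost = FIXED_COST / pills + MARGINAL_COST
    print(f"{pills:>13,} pills -> average cost ${avg_cost:,.2f} per pill")
```

At low volume the average cost per pill is dominated by the fixed cost; at a billion pills it approaches the marginal cost. A price set at the 10-cent marginal cost would never recoup the upfront investment, which is Cutler’s point about why the return on the marketed good can look very high if the upfront costs are ignored.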

In addition, these high profits are focused on the big and successful drug companies: “A good number of recent innovations have come from the startup industry, not established pharmaceutical firms, although major pharmaceutical firms are involved in clinical testing and sales.” In other words, it’s not fair to judge the profitability of the overall drug industry based only on the big successes; one would also need to take into account the losses at all the companies that tried and failed.

It’s also worth noting that a research study in this issue finds that in recent years, drug company profits haven’t looked so gaudy. As Frazier points out in his essay:

Likewise, Ledley et al report that over the past 5 years, between 2014-2018, pharmaceutical net income was markedly lower than in earlier years, and there was no significant difference between the net income margin of pharmaceutical companies compared with other S&P 500 companies during this period. 

3) How do pharmaceutical firms use the patent system to keep prices high for brand-name drugs?

In their essay on “Relentless Prescription Drug Price Increases,” Chaarushena Deb and Gregory Curfman point out some of the ways that firms earn high profits from patent-protected brand-name drugs. For example, one approach is called pay for delay: “Such tactics involve payments from brand-name companies to generic companies to keep lower-cost generic drugs off the market, and both brand-name and generic companies profit from these arrangements. These arrangements are commonplace, and with the elimination of market competition, brand-name companies are at liberty to keep their prices high—as high as the market will bear.” The Supreme Court ruled back in 2013 that pay-for-delay can be challenged in court as potentially anticompetitive, but there is no guarantee that antitrust prosecutors will win such lawsuits.

Another possibility is for a company to create a “patent thicket” of many overlapping patents in a way that makes entry especially risky for any newcomer. Deb and Curfman write:

In response to these price hikes for Humira, AbbVie has recently been the subject of a series of groundbreaking class-action lawsuits. Insurance payers and workers’ unions allege that AbbVie created a “patent thicket” around the monoclonal antibody therapy, thereby acting in bad faith to quash competition from Humira biosimilars. The original Humira patent expired in 2016, but AbbVie has been able to stave off biosimilar market entry by filing more than 100 follow-on patents that extend AbbVie’s monopoly beyond 2030. It is not uncommon for drugs to be protected by multiple patents, but the Humira patent thicket is extreme and allows AbbVie to aggressively extend its high monopoly pricing. A second claim in the lawsuits against AbbVie is that the company allegedly used “pay-for-delay” tactics to negotiate later market entry dates with biosimilar competitors. Pay-for-delay agreements in the pharmaceutical industry have been controversial for years, but the notion of a “patent thicket” greatly exacerbates the issue because the normal route for generics and biosimilars to enter the market is through patent litigation. … AbbVie contended it would continue to sue biosimilar manufacturers for infringement using its full complement of patents, pushing market entry dates well into the 2030s, leading the biosimilar companies to simply give up and settle the litigation. These settlements will likely allow AbbVie to continue instituting price increases for Humira.

4) What about drugs where the US has shortages? 

Inmaculada Hernandez, Tina Batra Hershey, and Julie M. Donohue write “Drug Shortages in the United States: Are Some Prices Too Low?” They note that the Food and Drug Administration publishes regular reports listing drugs in shortage–apparently because there isn’t enough incentive for companies to enter the market or to invest in manufacturing. They write:

Drug shortages disproportionately affect generic, injectable medications, which have been marketed for decades and have lower prices even when compared with other generic products. These shortages affect essential drugs (injectable antibiotics, such as vancomycin and cefazolin; chemotherapeutic agents, such as vincristine and doxorubicin; and anesthetics, such as lidocaine and bupivacaine) and therefore have major public health consequences, including delays in or omission of doses; use of less effective treatments; increased morbidity; and even death. Assessing the causes of and potential solutions to drug shortages is timely because the number of drugs in active shortage has increased recently, from 60 per week in 2016 to more than 100 in 2018.

Sometimes the shortage occurs because sales and the price of the drug have been falling, and producers exit the market. But the authors also dig into the dynamics of markets for generic drugs, which account for two-thirds of the shortages. They point out that production of these drugs often involves a “sponsor” for the generic drug, which then has agreements with an independent supplier to produce the active ingredients and with a contract manufacturing organization to produce the actual drug. They write:

At nearly every point in this system, the market has become more concentrated, meaning a small number of companies account for a large share of the market, and concentration is at the root of shortages. … Drug shortages are more likely to occur in markets with only 1 to 3 generic sponsors. Second, because of consolidation of suppliers, competing generic sponsors often rely on a single active ingredient supplier. Third, it is increasingly common for a single contract manufacturer to produce the final dosage forms for all generic sponsors marketing a given product. Moreover, 90% of active ingredients and 60% of dosage forms dispensed in the United States are manufactured overseas, complicating FDA monitoring efforts. Market concentration is the underlying reason why markets are so slow in responding to shortages. When production is halted for quality control problems (eg, the sterile injectables produced by the manufacturing facility are nonsterile or contain metal particulates), there is no alternative facility available.

Then, on the purchasing side:

Health systems and pharmacies, which administer or dispense drugs to patients, often purchase drugs through intermediaries, such as wholesalers and group purchasing organizations (GPOs). GPOs are highly concentrated; the top 4 now account for 90% of the market. The market power of GPOs has reduced prices for health systems but, according to the FDA, has also contributed to a “race to the bottom,” ie, offering the drug at the lowest price possible, which has decreased generic sponsors’ profitability, especially in the case of injectables, which are costly to manufacture. Importantly, because generic drugs are bioequivalent and exchangeable, there is no mechanism in the purchasing system to reward high-quality production, even though FDA asserts differences in the quality of manufacturing practices exist and are inextricably linked to shortages. Concentration among intermediaries in the drug purchasing system is a likely factor in driving the prices of some generics so low that generic sponsors do not see them as profitable.

As a result of these market dynamics, a number of these workhorse generic drugs either experience shortages on a regular basis or have only one supplier. There have been several cases where the sole supplier of a generic drug noticed that there was no competition, and then raised prices substantially.
Taking all of this together, one can begin to imagine a policy agenda for the US pharmaceutical industry that is just a wee bit more sophisticated than simple price controls on drugs or punitive taxes on drug companies.

1) We want to continue investing billions of dollars in new drugs. Some of the funding can come from government, perhaps directed through both higher education and private-sector settings. But some will also come from profits previously earned by drug companies.

2) The antitrust authorities have some tools to put downward pressure on prices of brand-name drugs, by energetically challenging pay-for-delay, patent thickets, and other questionable approaches.

3) We need to encourage competition in the market for generic drugs, to assure a steady supply of high-quality drugs. This involves encouraging the firms that make active ingredients, the contract manufacturing firms, and the “sponsor” firms that get the regulatory approval and do the marketing for these drugs. Greater competition should help to avoid shortages.

4) Some drugs can have such high costs, and such modest benefits for health, that it’s questionable whether insurance should cover them. For example, certain anti-cancer drugs probably fall into this category. In this situation, we want to encourage continued research that may eventually produce a less expensive drug with better health effects, and so some patients should have access to the drug as part of such studies. But for some drugs, a super-high price in exchange for extending life expectancy by only a month or two is a way of saying that they aren’t yet ready for the mass market. Of course, many other drugs are a fantastic investment on cost-benefit grounds. Given the extreme economic costs of dealing with the COVID-19 pandemic, finding cost-effective tests, treatments, or vaccines seems as if it should be a fairly low bar to cross.