19th Century Fencing and Information Technology

It's no surprise that US investment is disproportionately focused on information technology. The broad category of information processing technology and equipment was 8% of all private nonresidential US investment in 1950, but 30% of all investment by 2012. This raises the question: Is there a previous time in U.S. history when investment was so heavily focused on a single category?

David Autor offers a possible answer: investment in fences in the late 19th century U.S. economy. The answer appears as a side comment in Autor's paper "Polanyi's Paradox and the Shape of Employment Growth," presented in August at the Jackson Hole conference sponsored by the Kansas City Federal Reserve. The paper is well worth reading for what it has to say about the links from automation to jobs and wages. Here, I'll offer some thoughts of my own about fencing and information technology. (Full disclosure: Autor is the Editor of the Journal of Economic Perspectives, and thus my boss.)

Richard Hornbeck published "Barbed Wire: Property Rights and Agricultural Development" in a 2010 issue of the Quarterly Journal of Economics (vol. 125:2, pp. 767-810). He argues for the importance of fencing in understanding the development of the American West. Hornbeck writes (citations and footnotes omitted):

"In 1872, fencing capital stock in the United States was roughly equal to the value of all livestock, the national debt, or the railroads; annual fencing repair costs were greater than combined annual tax receipts at all levels of government … Fencing became increasingly costly as settlement moved into areas with little woodland. High transportation costs made it impractical to supply low-woodland areas with enough timber for fencing. Although wood scarcity encouraged experimentation, hedge fences were costly to control and smooth iron fences could be broken by animals and were prone to rust. Writers in agricultural journals argued that the major barrier to settlement was the lack of timber for fencing: the Union Agriculturist and Western Prairie Farmer in 1841, the Prairie Farmer in 1848, and the Iowa Homestead in 1863 … Farmers mainly adjusted to fencing material shortages by settling in areas with nearby timber plots."

Then in 1874, Joseph Glidden patented "the most practical and ultimately successful design for barbed wire." The fencing business took off. Hornbeck quotes a story from a 1931 history: "Glidden himself could hardly realize the magnitude of his business. One day he received an order for a hundred tons; 'he was dumbfounded and telegraphed to the purchaser asking if his order should not read one hundred pounds'".

Remember that fencing was already of central importance to the U.S. capital stock in 1872. Hornbeck presents estimates of how the total stock of fencing expanded over the decades. The pent-up demand was enormous, and cheaper steel was becoming widely available after the 1870s. From 1880 to 1900, for example, the total amount of fencing in Prairie states went from 80 million rods (where a rod equals 16.5 feet or about 5 meters) to 607 million rods; in the Southwest region, the rise was from 162 million rods in 1880 to 710 million rods by 1900. In the South Central states, the gains were comparatively smaller, only about a doubling from 344 million rods in 1880 to 685 million rods in 1900. By comparing regions with and without access to low-cost fencing as barbed wire arrived, Hornbeck argues:

"Barbed wire may affect cattle production and county specialization through multiple channels, but these results suggest that barbed wire's effects are not simply the direct technological benefits that would be expected for an isolated farm. On the contrary, it appears that barbed wire affected agricultural development largely by reducing the threat of encroachment by others' cattle."
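Hornbeck's regional figures lend themselves to a quick back-of-the-envelope check. The sketch below uses only the rod counts quoted above and the stated conversion (a rod is 16.5 feet, so 320 rods make a mile); the mileage figures are my own illustration, not Hornbeck's:

```python
# Back-of-the-envelope check on the regional fencing figures quoted above.
# A rod is 16.5 feet, so a mile (5,280 feet) contains 320 rods.
RODS_PER_MILE = 5280 / 16.5  # = 320.0

# (millions of rods in 1880, millions of rods in 1900), from the text
regions = {
    "Prairie": (80, 607),
    "Southwest": (162, 710),
    "South Central": (344, 685),
}

for name, (rods_1880, rods_1900) in regions.items():
    growth = rods_1900 / rods_1880
    miles_1900 = rods_1900 * 1_000_000 / RODS_PER_MILE
    print(f"{name}: {growth:.1f}x growth; ~{miles_1900 / 1e6:.1f} million miles of fence by 1900")
```

Even the slow-growing South Central region ends up with roughly two million miles of fencing by 1900, which helps to explain how the fencing stock could rival the railroads in value.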

The juxtaposition of 19th century fencing and 21st century information technology offers an irresistible chance for loose speculations and comparisons. Fencing in the 19th century made property rights to U.S. land more valuable, especially in the Prairie and Southwest regions, because it protected farmers' crops. Of course, there was also considerable conflict and dislocation as the land was fenced, including conflicts between farmers and ranchers and between settlers and Native Americans. But for many Americans, the fencing of the American West felt like a clear-cut opening of productive opportunities.

The economic gains from modern information technology often seem to arrive in less clear form. True, for some workers the vast gains of electronic technology feel like a brand-new frontier. But many workers throughout the economy experience information technology as a continual mix of gains, costs, and disruptions. For example, email is great, and email eats up my day. Information technology can offer vast cost savings in office work, greater efficiency in logistics and shipping, and faster development of new designs and technologies, all of which also disrupt companies and workers.

New information technology is far more mutable than fencing: it finds ways to slither into aspects of almost every job, including how that job is scheduled, organized, and paid for. Moreover, information technology is really a series of new technologies, as Moore's law drives the cost of computing lower and lower, creating waves of distinctively different growth opportunities. As Hornbeck points out, barbed-wire fencing did get substantially cheaper over time, with the cost falling by half from 1874 to 1880, and then again almost another two-thirds by 1890, and falling almost to half of that amount by 1897. But that impressive technological record is dwarfed by the productivity gains in information technology.
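Those successive price declines compound quickly. As a rough illustration (treating "half" and "two-thirds" as exact, where the text only says "almost"), the cumulative effect is an 1897 price of less than a tenth of the 1874 level:

```python
# Compounding the barbed-wire price declines cited from Hornbeck.
# The fractions are taken at face value for illustration only.
price = 1.0        # index the 1874 price at 1.0
price *= 1 - 1/2   # fell by half from 1874 to 1880
price *= 1 - 2/3   # fell by almost another two-thirds by 1890
price *= 1 - 1/2   # fell to almost half of that amount by 1897
print(f"1897 price as a share of the 1874 price: {price:.3f}")  # ~0.083
```

A decline of roughly 92 percent over 23 years is a striking record on its own terms; the point in the text is that information technology's cost curve has been steeper still.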

In short, 19th-century fencing may well have been an investment similar in relative size to modern information technology (although the economic statistics of the late 19th century don't allow anything resembling an apples-to-apples comparison). But at least to me, information technology seems considerably more disruptive, transformative, and ultimately beneficial for the economy.

Shaping the Direction of Health Care Innovation

My hope would be that the health care innovations of the future focus on two goals: how to attain improvements in health across the population, and how to provide the same or more effective health care at lower cost. My worry is that the direction of health care innovation is shaped by beliefs about what can be brought to market, what patients will demand, and what health care providers will receive with favor, and these incentives are not necessarily well-aligned with those goals. Steven Garber, Susan M. Gates, Emmett B. Keeler, Mary E. Vaiana, Andrew W. Mulcahy, Christopher Lau and Arthur L. Kellermann tackle these issues in "Redirecting Innovation in U.S. Health Care: Options to Decrease Spending and Increase Value," a report from the RAND Corporation.

The authors point out that since the 1950s, growth in U.S. health care spending has typically been about 2% per year faster than growth in GDP, and that most economists trace this cost difference to the continual arrival of new and more expensive health care technologies. They write: "As we argue in this report, the U.S. health care system provides strong incentives for U.S. medical product innovators to invent high-cost products and provides relatively weak incentives to invent low-cost ones." The system also provides strong incentives to focus on drugs, devices, and health information technologies that will generate profits in high-income countries, not to find low-cost ways of addressing health problems in the rest of the world. Here are four of the examples they offer.

The cardiovascular "polypill" "refers to a multidrug combination pill intended to reduce blood pressure and cholesterol, known risk factors for the development of cardiovascular disease. The rationale is that combining four beneficial drugs in low doses in a single pill should produce an easy and affordable way to dramatically modify cardiovascular risk." But as the authors point out, even though a "polypill" only combines existing drugs, putting them in a single pill means that it would have to go through very expensive and lengthy health and safety testing. The result would be a product that might be cheaper and more effective, but given that people could still take a handful of the separate pills, the "polypill" would almost certainly be a low-profit product. Moreover, there have been several patents granted on aspects of a "polypill," so any company seeking to test such a pill would be likely to face a patent battle. No private company is likely to push this kind of innovation.

Better use of health information technology in patient records could save a lot of money in lower paperwork costs, and also provide considerable health benefits by informing health care providers about past and current health experiences, helping, for example, to minimize the risks of allergic reactions or bad drug interactions. But despite various pushes and shoves, the health care sector has not been a leader in adopting and using information technology. Indeed, in many cases it seems to have soaked up the time of health care providers on one hand, while providing a tool for increasing the amount billed to insurance companies on the other.

The implantable cardioverter-defibrillator (ICD) is "an implantable device consisting of a small pulse generator (roughly half the size of a smartphone) and one or more thin wire leads threaded through large blood vessels into the heart. ICDs are designed to sense a life-threatening cardiac arrhythmia and automatically provide a dose of direct current (DC) electricity to jolt the patient's heart back to normal." This technology works very well for some patients with heart disease, but not for others: specifically, it isn't recommended in cases "such as patients who are undergoing bypass surgery or in the early period following a heart attack, the first three months following coronary revascularization, severe heart failure (New York Heart Association Class IV), and those with newly diagnosed heart failure." Thus, this is a case of a positive and useful innovation that is quite likely overused, at substantial cost.

Prostate-specific antigen (PSA) is a test for whether men have prostate cancer. The authors write: "Despite PSA screening's initial promise, multiple studies in the United States and in Europe have found that it does not reduce prostate cancer–specific mortality. Moreover, screening is associated with substantial harms caused by over-diagnosis and the complications that can occur from aggressive treatment. . . . Based on unfavorable findings, in 2012 the United States Preventive Services Task Force recommended against routine PSA screening for prostate cancer because the harms of screening outweigh the potential benefits. However, because federal law has not been changed, Medicare must still pay for the test's use, as well as for the subsequent biopsies, surgical procedures, nonsurgical treatments, and complications that these procedures can cause."

The RAND authors point out a number of features of the U.S. health care system that can push innovation away from the methods that would most improve health and decrease costs. For example, the existing incentives for innovation don't tend to reward methods that will lead to reduced spending. As they note, in a market full of insured third-party payers, there is "[l]imited price sensitivity on the part of consumers and payers." In addition, a bias arises from the "limited time horizon of providers when they decide which medical products to use for which patients: In many instances, the health benefits from using a drug, device, or HIT are not realized until years in the future, at which time the patient is likely to be covered by a different insurer, such as Medicare. When this is the case, only the later insurer will obtain the financial benefits associated with the (long-delayed) health benefits." More broadly, "[m]any [health care] provider systems are siloed. When this is the case, most decisionmakers consider only the costs and benefits for their parts of their organizations, and few take into account savings that accrue outside of their silos."

They also write of "treatment creep" and the "medical arms race."

"Undesirable treatment creep often occurs when a medical product that provides substantial benefits to some patients is used for other patients for whom the health benefits are much smaller or completely absent. Treatment creep is encouraged by FFS [fee-for-service] payment arrangements, and it is enabled by lack of knowledge about which patients would truly benefit from which products. Treatment creep often involves using products for indications not approved by the FDA. Such “off-label” use—which delivers good value in some instances—is widespread and difficult to control. Treatment creep may reward developers with additional profits for inventing products whose use can be expanded to groups of patients who will benefit little. …"

"The “medical arms race” refers to hospitals and other facilities competing for business by making themselves attractive to physicians, who may care more about using new high-tech services than they care about lower prices. … Robotic surgery for prostate cancer and proton beam radiation therapy provide striking examples of undesirable treatment creep: Although there is little or no evidence that they are superior to traditional treatments, these high-cost technologies have been successfully marketed directly to patients, hospitals, and physicians. High market rewards for such expensive technologies encourage inventors and investors to develop more of them—regardless of how much they improve health."

The authors have an eminently reasonable list of ways to alter the direction of health care innovation: basically, rethinking the sources of R&D funding, regulatory approval, and decision-making by third-party payers. For example, there could be public prize contests for certain innovations; some patents that seem to offer substantial health benefits could be bought out and placed in the public domain; and third-party payers (including Medicare and Medicaid) could place more emphasis on being willing to buy new technologies that cut costs. But I confess that as I look over their list of policy recommendations, I'm not sure they suffice to overcome the incentives currently built into the U.S. health care system.

And Here Come the Interest Payments

The federal government has been on a borrowing binge since the start of the Great Recession. I've argued that in the short run, the path of the budget deficits has been basically correct, because the deficits have helped to cushion the brutal economy of 2008-2009 and the sluggish recovery since then. But the long-term budget deficit picture is a problem. And even those of us who have largely supported the budget deficits of the last few years need to face the fact that the bills will eventually come due, and interest payments by the federal government are likely to head sharply upward in the next few years.

For some perspective, here's a figure from the August 2014 Congressional Budget Office report, "An Update to the Budget and Economic Outlook: 2014 to 2024." The spending categories are expressed as a share of GDP. Thus, over the next decade Social Security and Major Health Care programs rise, and a number of other categories fall a bit. But the biggest spending jump in any of these categories is for interest payments.

Interest payments jump for two reasons: the recent accumulation of federal debt and the expectation that interest rates are going to rise. "Between calendar years 2014 and 2019, CBO expects, the interest rate on 3-month Treasury bills will rise from 0.1 percent to 3.5 percent and the rate on 10-year Treasury notes will rise from 2.8 percent to 4.7 percent; both will remain at those levels through 2024." Of course, predictions don't always come true. But the CBO has already scaled down how much it expects interest rates to rise, and its projections of future deficits may well be on the optimistic side.

When looking at spending as a share of GDP, it's useful to remember that GDP is now around $17 trillion. This prediction shows a rise in federal interest payments from 1.3 percent of GDP in 2014 to 3.0 percent of GDP by 2024. Converted to actual dollars, interest payments are projected to rise from $231 billion in 2014 to $799 billion in 2024, more than tripling in unadjusted dollars. By 2024, that increase is $568 billion per year that isn't available for other spending or to finance tax cuts. It's going to bite hard.
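The dollar figures in the paragraph above follow directly from the CBO's percent-of-GDP projections; here is a minimal sketch of the arithmetic (the implied 2024 GDP is my own back-calculation from the two numbers, not a figure from the CBO report):

```python
# Arithmetic behind the CBO interest-payment projection discussed above.
interest_2014 = 231  # $ billions, 2014
interest_2024 = 799  # $ billions, projected 2024

increase = interest_2024 - interest_2014
growth = interest_2024 / interest_2014
implied_gdp_2024 = interest_2024 / 0.030  # interest projected at 3.0% of GDP

print(f"Increase: ${increase} billion per year")      # $568 billion
print(f"Growth: {growth:.2f}x (more than tripling)")  # 3.46x
print(f"Implied 2024 GDP: ~${implied_gdp_2024 / 1000:.1f} trillion")
```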

For a historical comparison, a December 2010 CBO report looked at "Federal Debt and Interest Costs." The light blue line shows interest payments in nominal dollars, not adjusted for inflation or the size of the economy, and thus isn't useful for looking back several decades. The dark blue line helps to illustrate that interest payments are headed for their highest levels since the period from the mid-1980s into the mid-1990s, when the government was paying off the borrowing of the mid-1980s at relatively high interest rates.

When economic times are dire, as they were in the U.S. economy in 2008-2009, having the government borrow money makes sense. Given the lethargic pace of the growth that followed, and the underlying financial fragility of the economy, it made some sense not to make a dramatic push for lower deficits in the last few years. But the coming surge in interest payments is a warning signal that it's past time to start thinking about how to bring down budget deficits in the medium and longer term.

Competition as a Form of Cooperation

Like most economists, I find myself from time to time confronting the complaint that economics is all about competition, when we should be emphasizing cooperation instead. One standard response to this concern focuses on making a distinction between the way people and firms actually behave and the ways in which moralists might prefer that they behave. But I often try a different answer, pointing out that the idea of cooperation is actually embedded in the meaning of the word "compete."

Check the etymology of "compete" in the Oxford English Dictionary. It tells you that the word derives from Latin, in which "com-" means "together" and "petĕre" has a variety of meanings, which include "to fall upon, assail, aim at, make for, try to reach, strive after, sue for, solicit, ask, seek." Based on this derivation, valid meanings of "compete" would be "to aim at together," "to try to reach together," and "to strive after together."
Competition can come in many forms. The kind of market competition that economists typically invoke is not about wolves competing in a pen full of sheep, nor is it competition between weeds to choke the flowerbed. The market-based competition envisioned in economics is disciplined by rules and reputations, and those who break the rules through fraud or theft or manipulation are clearly viewed as outside the shared process of competition. Market-based competition is closer in spirit to the interaction between Olympic figure-skaters, in which pressure from other competitors and from outside judges pushes individuals both to do the old and familiar better and to seek out new innovations. Sure, the figure-skaters are trying their hardest to win. But in a broader sense, their process of training and coming together under agreed-upon rules is a deeply cooperative and shared enterprise.

In fact, competition within a market context actually happens as a series of cooperative decisions, every time a buyer and seller come together in a mutually agreed, voluntarily made transaction. This idea of cooperation within the market is at the heart of what the philosopher Robert Nozick, in his 1974 work Anarchy, State, and Utopia, referred to as “capitalist acts between consenting adults.”

Attendance Rates for U.S. K-12 Teachers

My heart always sinks a bit when one of my children reports over dinner that their class had a substitute teacher that day. What usually follows is a discussion of the video they watched, or the worksheet they filled out, or how many other children in the class (never mine, of course) misbehaved. How prevalent is teacher absence from classes in the U.S.? The National Council on Teacher Quality collects some evidence in its June 2014 report "Roll Call: The Importance of Teacher Attendance."

The study collected data from 40 urban school districts across the United States for the 2012-13 school year. The definition of "absence" in this study was that a substitute teacher was used in the classroom. Thus, the overall totals mix together the times when a teacher was absent from the classroom for sickness, for other personal leave, and for some kind of professional development. As the authors of the study note: "Importantly, we looked only at short-term absences, which are absences of 1 to 10 consecutive days. Long-term absences (absences lasting more than 10 consecutive days) were not included to exclude leave taken for serious illness and maternity/paternity leave."

The average teacher across these 40 districts was absent 11 days during the school year. This amount of teacher absence matters to students. The NCTQ study cites studies to make the point: "As common sense suggests, teacher attendance is directly related to student outcomes: the more teachers are absent, the more their students' achievement suffers. When teachers are absent 10 days, the decrease in student achievement is equivalent to the difference between having a brand new teacher and one with two or three years more experience."

Here's a figure showing average rates of absence across the 40 districts. Again, these include professional development activities that take teachers out of the classroom, but do not include long-term absences or parental leave. Indianapolis, the District of Columbia, Louisville, and Milwaukee lead the way with relatively few teacher absences, while Cleveland, Columbus (what's with Ohio teachers?), Nashville, and Portland have relatively high numbers of teacher absences.

Based on little more than my own gut reaction, an average of 11 teacher absences per year seems a little on the high side to me. But as with so many issues in education, the real problem doesn't lie with the averages, but with the tail end of the distribution. The study calculates that 28% of teachers are "frequently absent," meaning that they missed 11-17 days of class, and an additional 16% are "chronically absent," meaning that they missed 18 or more days of class.

Here's a city-by-city chart showing the breakdown of teacher absence by category.

I'm willing to cut some slack for teachers who happen to have a lousy personal year and are chronically absent. But I have a hard time believing that across the United States, 1/6 of all teachers (that is, about 16%) are simultaneously having the kind of lousy year that forces them to miss 18 or more school days. (Again, remember that these numbers don't include long-term sickness or parental leave.) Those who can't find a way to show up for the job of classroom teacher, year after year, need to face some consequences.

A few years back in the Winter 2006 issue of the Journal of Economic Perspectives, Nazmul Chaudhury, Jeffrey Hammer, Michael Kremer, Karthik Muralidharan, and F. Halsey Rogers reported on "Missing in Action: Teacher and Health Worker Absence in Developing Countries." They wrote: "In this paper, we report results from surveys in which enumerators made unannounced visits to primary schools and health clinics in Bangladesh, Ecuador, India, Indonesia, Peru and Uganda and recorded whether they found teachers and health workers in the facilities. Averaging across the countries, about 19 percent of teachers and 35 percent of health workers were absent. The survey focused on whether providers were present in their facilities, but since many providers who were at their facilities were not working, even these figures may present too favorable a picture." The situation with U.S. teacher absence isn't directly comparable, of course. One suspects that the provision of substitute teachers is a lot better in Cleveland, Columbus, Nashville, and Portland than in Bangladesh, Ecuador, India, and Indonesia. Still, wherever it occurs, an institutional culture where many teachers don't show up needs to be confronted.

The Origins of Labor Day

It's clear that the first Labor Day celebration was held on Tuesday, September 5, 1882, and organized by the Central Labor Union, an early trade union organization operating in the greater New York City area in the 1880s. By the early 1890s, more than 20 states had adopted the holiday. On June 28, 1894, President Grover Cleveland signed into law: "The first Monday of September in each year, being the day celebrated and known as Labor's Holiday, is hereby made a legal public holiday, to all intents and purposes, in the same manner as Christmas, the first day of January, the twenty-second day of February, the thirtieth day of May, and the fourth day of July are now made by law public holidays." (Note: This post has been reprinted on this blog each year since 2011.)

What is less well-known, at least to me, is that the very first Labor Day parade almost didn't happen, and that historians now dispute which person is most responsible for that first Labor Day. The U.S. Department of Labor tells how the first Labor Day almost didn't happen, for lack of a band:

"On the morning of September 5, 1882, a crowd of spectators filled the sidewalks of lower Manhattan near city hall and along Broadway. They had come early, well before the Labor Day Parade marchers, to claim the best vantage points from which to view the first Labor Day Parade. A newspaper account of the day described '…men on horseback, men wearing regalia, men with society aprons, and men with flags, musical instruments, badges, and all the other paraphernalia of a procession.'

The police, wary that a riot would break out, were out in force that morning as well. By 9 a.m., columns of police and club-wielding officers on horseback surrounded city hall. By 10 a.m., the Grand Marshall of the parade, William McCabe, his aides and their police escort were all in place for the start of the parade. There was only one problem: none of the men had moved. The few marchers that had shown up had no music.

According to McCabe, the spectators began to suggest that he give up the idea of parading, but he was determined to start on time with the few marchers that had shown up. Suddenly, Mathew Maguire of the Central Labor Union of New York (and probably the father of Labor Day) ran across the lawn and told McCabe that two hundred marchers from the Jewelers Union of Newark Two had just crossed the ferry — and they had a band!

Just after 10 a.m., the marching jewelers turned onto lower Broadway — they were playing "When I First Put This Uniform On," from Patience, an opera by Gilbert and Sullivan. The police escort then took its place in the street. When the jewelers marched past McCabe and his aides, they followed in behind. Then, spectators began to join the march. Eventually there were 700 men in line in the first of three divisions of Labor Day marchers.

With all of the pieces in place, the parade marched through lower Manhattan. The New York Tribune reported that, 'The windows and roofs and even the lamp posts and awning frames were occupied by persons anxious to get a good view of the first parade in New York of workingmen of all trades united in one organization.'

At noon, the marchers arrived at Reservoir Park, the termination point of the parade. While some returned to work, most continued on to the post-parade party at Wendel's Elm Park at 92nd Street and Ninth Avenue; even some unions that had not participated in the parade showed up to join in the post-parade festivities that included speeches, a picnic, an abundance of cigars and, 'Lager beer kegs… mounted in every conceivable place.'

From 1 p.m. until 9 p.m. that night, nearly 25,000 union members and their families filled the park and celebrated the very first, and almost entirely disastrous, Labor Day."

As to the originator of Labor Day, the traditional story I learned back in the day gave credit to Peter McGuire, the founder of the Carpenters Union and a co-founder of the American Federation of Labor. At a meeting of the Central Labor Union of New York on May 8, 1882, the story went, he recommended that Labor Day be designated to honor "those who from rude nature have delved and carved all the grandeur we behold." McGuire also typically received credit for suggesting the first Monday in September for the holiday, "as it would come at the most pleasant season of the year, nearly midway between the Fourth of July and Thanksgiving, and would fill a wide gap in the chronology of legal holidays." He envisioned that the day would begin with a parade, "which would publicly show the strength and esprit de corps of the trade and labor organizations," and then continue with "a picnic or festival in some grove."

But in recent years, the International Association of Machinists has also staked its claim, because one of its members, a machinist named Matthew Maguire, was serving as secretary of the Central Labor Union in New York in 1882 and clearly played a major role in organizing the day. The U.S. Department of Labor has a quick summary of the controversy.

"According to the New Jersey Historical Society, after President Cleveland signed into law the creation of a national Labor Day, The Paterson (N.J.) Morning Call published an opinion piece entitled, 'Honor to Whom Honor is Due,' which stated that 'the souvenir pen should go to Alderman Matthew Maguire of this city, who is the undisputed author of Labor Day as a holiday.' This editorial also referred to Maguire as the 'Father of the Labor Day holiday.'

So why has Matthew Maguire been overlooked as the 'Father of Labor Day'? According to The First Labor Day Parade, by Ted Watts, Maguire held some political beliefs that were considered fairly radical for the day and also for Samuel Gompers and his American Federation of Labor. Allegedly, Gompers did not want Labor Day to become associated with the sort of 'radical' politics of Matthew Maguire, so in an 1897 interview, Gompers' close friend Peter J. McGuire was assigned the credit for the origination of Labor Day."

Richard Timberlake and the Case for Monetary Rules

Renee Haltom interviewed Richard Timberlake, perhaps best known as a staunch supporter of fixed rules rather than government discretion for monetary policy, in Econ Focus, a publication of the Federal Reserve Bank of Richmond (First Quarter 2014, pp. 24-29). Here's a sample of Timberlake's views:

He argues that the Fed is inevitably subject to political influence.

"Until maybe 10 or 20 years ago, economists who studied money felt that they could prescribe some logical policy for the Federal Reserve, and ultimately the Fed would see the light and follow it. That proved illusory. A central bank is essentially a government agency, no matter who “owns” it. The Fed's titular owners are the member banks, but the national government has all the controls over the Fed's policies and profits. And as with all government agencies, the Fed is subject to public choice pressures and motives."

If the Federal Reserve followed a firm rule, he argues, asset bubbles would be unlikely.

"The Fed shouldn't pay any heed at all to asset bubbles. If it followed rigorously a constrained price level, or quantity-of-money rule, I don't think there would be bubbles. Markets would anticipate stability. Markets today, however, anticipate, with good reason, all the government interventions that lead to bubbles. If we had a stable price level policy and everybody understood it and believed it would continue, there wouldn't be any serious bubbles. We don't even know whether the 1929 “bubble” was even a bubble, because after the Fed's unwitting destruction of bank credit, no one could distinguish in the rubble what was sound from what might have been unsound."

If lender of last resort services are needed, he argues, the private sector could provide them.

Private institutions will always furnish lender of last resort services if markets are free to operate and if there are no government policies in place that cause destabilization. In the last half of the 19th century, the private clearinghouse system was a lender of last resort that worked perfectly. Its activities demonstrated that private markets handle the lender of last resort function better than any government-sponsored institution.

The overall impression from the interview is that Timberlake is open to a variety of monetary rules, as long as the rules are written in stone. He offers positive remarks about a gold standard, about a monetary policy focused solely on the price level, and about a monetary policy with a fixed rate of growth in the money supply. As one example, he discusses his reaction to the rule Milton Friedman proposed in the 1970s for a fixed rate of growth in the money supply.

"Friedman recommended a steadily increasing quantity of money — that is, bank checking deposits and currency — between 2 and 5 percent per year. Prices might rise or fall a little, but everybody would know that things were going to get better or be restrained simply because the Fed had to follow a quantity-of-money rule. I wrote him a letter at the time and remarked, “I agree with your idea of a stable rate of increase in the quantity of money, and I suggest a rate of 3.65 percent per year, and 3.66 percent for leap years — 1/100 of 1 percent per day.”"
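The arithmetic behind that letter is easy to verify: 1/100 of 1 percent per day, summed over 365 or 366 days, gives Timberlake's 3.65 and 3.66 percent. A minimal sketch, assuming simple daily accrual (the letter does not specify compounding):

```python
# Timberlake's suggested money-growth rule: 1/100 of 1 percent per day.
# Assumption: the daily increments accrue simply (summed, not
# compounded) -- the letter quoted above does not specify compounding.
DAILY_RATE = 0.0001  # 1/100 of 1 percent, as a decimal fraction

annual = 365 * DAILY_RATE       # ordinary year
annual_leap = 366 * DAILY_RATE  # leap year

print(f"{annual:.2%}")       # 3.65%
print(f"{annual_leap:.2%}")  # 3.66%
```

Compounded daily instead, the growth rate would come to roughly 3.7 percent per year, so at these magnitudes the distinction barely matters.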

I can feel the pull of Timberlake's view, swirling around my ankles, but I am not persuaded. When you lash yourself to the mast, as Odysseus did to resist the call of the Sirens, you are indeed constrained from giving in to temptation. But if an unforeseen problem arises while you are lashed to the mast, you are incapacitated from dealing with it. As Timberlake readily concedes, having the Federal Reserve surrender all discretion is not at all likely. Thus, the pragmatic questions concern what kinds of constraints on the Fed, including a continual process of transparency and self-explanation, are most useful.

As a coda, Timberlake has a nice story about Milton Friedman offering him some key advice when he was a graduate student.

I recall the time when I presented a potential Ph.D. thesis proposal at Chicago to the economics department. The audience included professors and many able graduate students. I could feel that my presentation was not going over very well. After the ordeal was over, Friedman said to me, “Come back up to my office.” When we were there, he said, “The committee and the department think that your thesis proposal has less than a 0.5 probability of acceptance.” I knew that was coming, and I despondently replied that I had had a very frustrating time “finding a thesis.” My words suggested that a thesis was a bauble that one found in a desert of intellect that no one else had discovered. It was then that Milton Friedman turned me around and started me on the road to being an economist. “Dick,” he said, “theses are formed, not found.” It was the single most important event in my professional life. I finally could grasp what economic research was supposed to be.

The Secular Stagnation Controversy

For economists, the word "secular" isn't about a lack of religious belief. Instead, it refers to whether a condition is expected to last for a long and indefinite period–and in particular, a period not tied to whether the economy is entering or exiting a recession. Thus, the concept of "secular stagnation" is the idea that the U.S. economy is not just suffering through the aftereffects of the Great Recession, but is for some reason entering a longer-term period of stagnant growth. Coen Teulings and Richard Baldwin have edited a useful e-book of 13 short essays with a variety of perspectives, Secular Stagnation: Facts, Causes and Cures. In the overview, they write: "Secular stagnation, we have learned, is an economist's Rorschach Test. It means different things to different people."

I've taken a couple of previous cracks at secular stagnation on this blog. I discussed the original theory of secular stagnation, as put forward in 1938, in "Secular Stagnation: Back to Alvin Hansen" (December 12, 2013). Hansen was concerned that in the depressed economy of his time, with lower birthrates and a lack of discoveries of new resources and territories, the push of new inventions would not be enough to keep investment levels high and the economy growing. I also discussed "Sluggish U.S. Investment" (June 27, 2014) in the context of a discussion of secular stagnation by Larry Summers. Here, let me give a sense of how a range of economists are looking at different aspects of the "secular stagnation" issue by quoting (without prejudice against the other essays!) a few sentences from seven of the essays.

Larry Summers: "This chapter explains why a decline in the full-employment real interest rate (FERIR) coupled with low inflation could indefinitely prevent the attainment of full employment. . . . Broadly, to the extent that secular stagnation is a problem, there are two possible strategies for addressing its pernicious impacts. … The first is to find ways to further reduce real interest rates. These might include operating with a higher inflation rate target so that a zero nominal rate corresponds to a lower real rate. Or it might include finding ways such as quantitative easing that operate to reduce credit or term premiums. These strategies have the difficulty of course that even if they increase the level of output, they are also likely to increase financial stability risks, which in turn may have output consequences. … The alternative is to raise demand by increasing investment and reducing saving. … Appropriate strategies will vary from country to country and situation to situation. But they should include increased public investment, reductions in structural barriers to private investment and measures to promote business confidence, a commitment to maintain basic social protections so as to maintain spending power, and measures to reduce inequality and so redistribute income towards those with a higher propensity to spend."

Barry Eichengreen: \”Pessimists have been predicting slowing rates of invention and innovation for centuries, and they have been consistently wrong. This chapter argues that if the US does experience secular stagnation over the next decade or two, it will be self-inflicted. The US must address its infrastructure, education, and training needs. Moreover, it must support aggregate demand to repair the damage caused by the Great Recession and bring the long-term unemployed back into the labour market.\”

Robert J Gordon: \”US real GDP has grown at a turtle-like pace of only 2.1% per year in the last four years, despite a rapid decline in the unemployment rate from 10% to 6%. This column argues that US economic growth will continue to be slow for the next 25 to 40 years – not because of a slowdown in technological growth, but rather because of four ‘headwinds’: demographics, education, inequality, and government debt.\”

Paul Krugman: "Larry Summers’ speech at the IMF’s 2013 Annual Research Conference raised the spectre of secular stagnation. This chapter outlines three reasons to take this possibility seriously: recent experience suggests the zero lower bound matters more than previously thought; there had been a secular decline in real interest rates even before the Global Crisis; and deleveraging and demographic trends will weaken future demand. Since even unconventional policies may struggle to deal with secular stagnation, a major rethinking of macroeconomic policy is required."

Edward L Glaeser: "US investment and innovation – the most standard ingredients in long-run economic growth – are not declining. The technological world that surrounds us is anything but stagnant. Yet we can have little confidence that the continuing flow of new ideas will solve the US’s most worrying social trend: the 40-year secular rise in the number and share of jobless adults. … The massive secular trend in joblessness is a terrible social problem for the US, and one that the country must try to address. I do not believe that this is a macroeconomic problem that can be solved with more investment or tax cuts alone. . . . Alongside targeted investments in education and training, radical structural reforms to America’s safety net are needed to ensure it does less to discourage employment."

Gauti B. Eggertsson and Neil Mehrotra: \”Japan’s two-decade-long malaise and the Great Recession have renewed interest in the secular stagnation hypothesis, but until recently this theory has not been explicitly formalised. This chapter explains the core logic of a new model that does just that. In  the model, an increase in inequality, a slowdown in population growth, and a tightening of borrowing limits all reduce the equilibrium real interest rate. Unlike in other recent models, a period of deleveraging puts even more downward pressure on the real interest rate so that it becomes permanently negative.\”

Richard C. Koo: \”The Great Recession is often compared to Japan’s stagnation since 1990 and the Great Depression of the 1930s. This chapter argues that the key feature of these episodes is the bursting of a debt-financed asset bubble, and that such ‘balance sheet recessions’ take a long time to recover from. There is no need to suffer secular stagnation if the government offsets private sector deleveraging with fiscal stimulus. However, until the general public understands the fallacy of composition, democracies will struggle to implement such policies during balance sheet recessions.\”

Volumes like this feel a bit like the parable of the blind men and the elephant, in which each man grabs one part of the elephant and then declares what an elephant feels like, depending on whether he has hold of a leg, tail, trunk, ear, tusk, side, or belly. It's easy to grab hold of one part of the economy, but it can be difficult to see the interactions across the parts, or to see the economy as a whole.

Outsource Corporate Boards?

Many economists have been distinctly uncomfortable with the notion of a company owned by shareholders but run by corporate executives hired by a board of directors, at least since 1932, when Adolf A. Berle, Jr., and Gardiner C. Means wrote a book called "The Modern Corporation and Private Property." The early decades of the 20th century saw a huge transformation in the ownership of large U.S. companies, away from being owned (or effectively controlled) by a family or an individual, and toward being owned by shareholders. Berle and Means wrote:

"In 1928, when the project was launched, the financial machinery was developing so rapidly as to indicate that we were in the throes of a revolution in our institution of private property, at least as applied to industrial economic uses. … The translation of perhaps two-thirds of the industrial wealth of the country from individual ownership to ownership by the large, publicly financed corporations vitally changes the lives of property owners, the lives of workers, and the methods of property tenure. The divorce of ownership from control consequent on that process almost necessarily involves a new form of economic organization of society."

The "separation of ownership and control," as it is often called, has been an ongoing concern ever since. The well-founded worry is that the board of directors, which is supposed to act on behalf of the shareholders who technically own the company, is instead effectively chosen by corporate management. There have been periodic pushes for corporate boards to have broader representation, or members from outside the circles of that industry, or greater independence from management. But ultimately, most board members are part-timers who parachute in a few times a year for board meetings. They often lack the information and incentives to oversee or to challenge corporate management effectively.

Stephen M. Bainbridge and M. Todd Henderson offer an alternative vision of how corporate boards might work in "Boards-R-Us: Reconceptualizing Corporate Boards," which appears in the May 2014 issue of the Stanford Law Review. They write (footnotes omitted):

Almost every corporate governance reform proposed over the past several decades has focused on the board of directors. . . . This battle is fought on the grounds of who board members are, whether they are independent, who appoints them, how they are elected, how they are compensated, what the standards for their conduct and liability are, whether there should be more independent directors, what the optimal board size is, and so forth. All of these reforms are an attempt to optimize the monitoring and governance role played by the board. Despite the long and zealous efforts of corporate law reformers to understand and improve the board of directors, there is a gaping hole in the corporate governance literature. No one has yet questioned a fundamental assumption of the current corporate governance model—that is, only individuals, acting as sole proprietors, should provide professional board services.

Bainbridge and Henderson propose that when a firm is choosing a board of directors, instead of hiring a group of individuals to serve on the board, the firm should be allowed to hire a "board service provider," an outside company that would provide board-of-directors services to the firm. They write:

In other words, just as companies outsource their external audit function to an accounting firm rather than multiple individuals, the board of directors function would be outsourced to a professional services company. To see our idea, imagine a firm, Boards-R-Us, Inc., serving as the board of Acme Co. Instead of Acme shareholders hiring a dozen or so individual sole proprietors to provide board functions, they instead hire one firm—a BSP—to provide those functions, whatever they may be. Boards-R-Us would still act through individual agents, but the responsibility for managing a particular firm, within the meaning of state corporate law, would be that of Boards-R-Us the entity. This means, for instance, that a suit by shareholders for breach of the board’s fiduciary duties would be against Boards-R-Us, and not against individuals or groups of individuals.

A company acting as board service provider would continue to make all the same decisions as a current board of directors: that is, hiring and firing top management, setting compensation, having final approval over major decisions like takeovers and mergers, and so on. As the authors write: "the basic version of our proposal is substantially similar to the current board model, with the one key difference that the board consists of an “it” instead of a collection of individuals." Indeed, in choosing a board of directors, it would be possible to have a slate of individuals run against a board service provider–or against several different board service providers. It would even be possible for a board of directors to be, say, half a board service provider, with the other half made up of typical individual board members chosen separately by shareholders.

What's the case for believing that, at least for some companies, a board service provider might be an improvement? One set of arguments is that current boards of directors often face problems of limited time, limited information, and a lack of specialist expertise. A board service provider would be well-positioned to employ full-time providers of board services, with access to both internal and external sources of information and the ability to draw on specialist expertise.

And what about the risk that, if we are already worried about mutual backrubs between boards of directors and top management, the problem might get even worse with a single board service provider? This concern seems legitimate, but it's worth remembering just how incestuous some of the current board situations are. Bainbridge and Henderson remind us that when the board of directors at Disney decided that Michael Eisner deserved $140 million for one year of work, the board included a number of Eisner's friends, "including actor Sidney Poitier, the principal of the elementary school Eisner’s children attended, and the architect who designed one of Eisner’s homes." More recently, the media conglomerate IAC, chaired by Barry Diller, "appointed thirty-one-year-old graduate student Chelsea Clinton to the board. … [F]ormer board members of IAC include Diller’s wife, the fashion designer Diane von Furstenberg, and General Norman Schwarzkopf, and … the current board also includes von Furstenberg’s son, Alex."

Given that the oversight of current boards of directors is often pretty low, Bainbridge and Henderson argue that board service providers "would be more accountable than the group of individuals currently providing board services; indeed, we believe that the accountability of the whole would be greater than the sum of the liabilities of the parts." They argue that a board service provider might worry more about reputation than a random individual board member, and also that a company providing board services might be more susceptible to legal oversight and liability.

Allowing companies to become board service providers is no magic potion to solve all the problems of corporate governance. But more than 80 years after Berle and Means described the problems that arise from a separation of corporate ownership and control, any new proposals for addressing it are welcome.


Does Economics Education Teach Students to Trust?

Last March, I discussed some of the studies on the question "Does Economics Make You a Bad Person?" (March 31, 2014). In the Spring 2014 issue of the American Economist, Bryan C. McCannon offers some additional evidence on the question in "Do Economists Play Well With Others? Experimental Evidence on the Relationship between Economics Education and Pro Social Behavior" (59:1, pp. 27-33). The journal is not freely available online, although many readers will have access through a library subscription.

The guts of the paper is an experiment with 147 students "conducted with undergraduate students at a small, private university in upstate New York." McCannon teaches at St. Bonaventure University, so you can draw your own conclusions about the identity of the school. Some of the students had already taken "a significant amount of coursework in economics," some were planning to study economics but had not yet taken economics courses, and some had neither taken economics classes nor were planning to take them.

The students participated in a "trust game," which has two players. The first player is given a certain amount of money–in this study, $5–and decides how much of it to give to the second player. But here's the twist: the amount given to the second player is tripled. Then the second player decides whether to give some money back to the first player. The game ends there. The students played the game five times, with a random and changing selection of opponents each time.

Clearly, if the first player fully trusts the second one, the first player will give the full $5 to the second player. The amount will be tripled in transit, and the second player will be able to return the full $5, plus more, to the first player. However, a first player who is less trusting may give less than the full $5, or nothing at all, because after all, the second player may simply hold on to all the money and not return any of it. Thus, the question is whether students who have taken a lot of economics classes tend to be more or less trusting than other groups.

A typical finding in a trust game is that the first player gives half the money to the second player. The second player then returns about 80% of the money invested, and keeps the rest. Thus, trust often does not pay off for the first player–which helps to explain why they venture to pass along only half of the original sum.
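The payoff arithmetic in these paragraphs can be sketched directly. The sketch below uses the stylized numbers quoted above – a $5 endowment, transfers tripled in transit, and the second player handing back 80% of the amount the first player sent (one plausible reading of "80% of the money invested"); the function and its parameters are illustrative, not taken from McCannon's paper:

```python
def trust_game(endowment, sent, return_fraction):
    """Payoffs in a one-shot trust game.

    The amount `sent` by the first player is tripled in transit; the
    second player then hands back `return_fraction` of the amount the
    first player originally sent (an assumed reading of "returns about
    80% of the money invested").
    """
    tripled = 3 * sent                     # transfer grows in transit
    returned = return_fraction * sent      # second player's give-back
    first = endowment - sent + returned    # first player's final payoff
    second = tripled - returned            # second player's final payoff
    return first, second

# Typical finding: send half the $5 endowment, get back 80% of it.
first, second = trust_game(endowment=5.0, sent=2.5, return_fraction=0.8)
print(first, second)  # 4.5 5.5
```

With these numbers the partially trusting first player ends up with $4.50 rather than the $5.00 he could simply have kept, which illustrates why trust often goes unrewarded and why first players typically venture only half the endowment.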

In this study, it turns out that when taking the role of the first player, "[e]ach class a student takes contributes approximately ten cents more." When taking the role of the second player, "[t]aking more economics courses is associated with escalated rates of reciprocation. Approximately fifteen more cents is given back if given all five dollars, which represents a 3.5% increase." McCannon also gave the participants an attitudinal survey before the game, and when analyzing the survey results together with the game results, he argues that those who select themselves into economics classes are more likely to practice trust and reciprocity.

This study follows several common patterns in this literature. The group being studied is a relatively small group of students at one institution, so there is a reasonable question about whether the results would generalize to a broader population. The engine of inquiry is a structured \”laboratory experiment,\” in this case the trust game, and so there is a reasonable question about whether the motivations revealed in such studies would show up in other behaviors and contexts.

But although the results of these kinds of studies shouldn't be oversold, it's not shocking to me to find that those who study economics may be more likely to look at a trust game and see it as an opportunity for a cooperative exchange that can benefit both parties. Indeed, economists may well be more prone than non-economists to seeing the world as a place full of voluntarily agreed transactions that can represent a win for both parties.