For economists, "prime-age" refers to ages 25-54, which is post-school and pre-retirement for most workers. Didem Tüzemen asks "Why Are Prime-Age Men Vanishing from the Labor Force?" in the Economic Review of the Federal Reserve Bank of Kansas City (First Quarter 2018, pp. 5-28). She begins: "The labor force participation rate for prime-age men (age 25 to 54) in the United States has declined dramatically since the 1960s, but the decline has accelerated more recently. From 1996 to 2016, the share of prime-age men either working or actively looking for work decreased from 91.8 percent to 88.6 percent. In 1996, 4.6 million prime-age men did not participate in the labor force. By 2016, this number had risen to 7.1 million."
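As a quick consistency check on those figures, here is a back-of-the-envelope calculation using only the numbers in the quotation: the nonparticipation rate rose from 8.2 percent to 11.4 percent, and dividing the counts of nonparticipants by those rates implies a prime-age male population of roughly 56 million in 1996 and 62 million in 2016.

```python
# Back out the implied prime-age male population from the quoted figures.
for year, participation_rate, nonparticipants in [(1996, 0.918, 4.6e6), (2016, 0.886, 7.1e6)]:
    nonparticipation_rate = 1 - participation_rate
    implied_population = nonparticipants / nonparticipation_rate
    print(f"{year}: nonparticipation rate {nonparticipation_rate:.1%}, "
          f"implied prime-age male population {implied_population / 1e6:.0f} million")
```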
As Tüzemen shows, this rise in nonparticipation rates of prime-age males is broad-based. If you break down prime-age male labor force participation by education levels (less than high school, only high school, some college, college or more), the nonparticipation level is higher for those with less education, but it's up in every education category. If you break down the prime-age group into decades (25-34, 35-44, 45-54), then nonparticipation is higher in the 45-54 age group, but it's been rising in every age category, too.
Perhaps more of a clue comes from the employment survey data itself. As Tüzemen reports:
Those who report their status as "not in the labor force" also respond to another question, which asks, "what best describes your situation at this time? For example, are you disabled, ill, in school, taking care of house or family, in retirement, or something else?"
The answers to this survey question suggest that between 1996 and 2016, the share of nonparticipating men who give "disability" as an answer has declined, while the shares who refer to family responsibilities (taking care of house or family) and to being in retirement have increased.
Of course, these decisions about not being in the labor market are not made in a vacuum, but are presumably also affected by the reality of labor market opportunities. That's just a long way of saying that the rise in nonparticipation can involve both decisions about labor supply and realities of labor demand. Tüzemen makes a case that evolving labor demand is probably more important for rising male nonparticipation than choices about labor supply. In particular, she focuses on the "polarization" of the labor market: the overall pattern in which low-skill workers do OK, because they provide personal services that are (at least so far) hard to replace with automation or software; high-skill workers do OK, because they are well-positioned to make gains from the use of automation and software; but those in the ranks of the middle-skilled can find themselves at risk.
You can read the article to sort through the details of this argument, but here are a couple of points that caught my eye. One is that while nonparticipation of prime-age males has risen in every education group, the biggest rise is not in the lowest-skill or highest-skill groups, but rather in the middle.
The other point is that many of those currently out of the labor force are not looking to return. When the Great Recession hit from 2007-2009, the share of nonparticipating prime-age men who said they still wanted a job rose sharply. But now, the share of that group that says they want a job has declined back to levels from the early 2000s. This pattern suggests to me that some of the labor market nonparticipants who wanted a job have now returned to the labor market, while others have given up on employment.
To set the stage, here's what primary care involves:
\”Primary care clinicians typically treat a variety of conditions, including high blood pressure, diabetes, asthma, depression and anxiety, angina, back pain, arthritis, thyroid dysfunction, and chronic obstructive pulmonary disease. They provide basic maternal and child health care services, including family planning and vaccinations. Primary care lowers health care costs, decreases emergency department visits and hospitalizations, and lowers mortality.\”
Here's evidence on the shortage of primary care physicians:
\”The Association of American Medical Colleges (AAMC) estimates that by 2030 we will have up to 49,300 fewer primary care physicians than we will need … Despite decades of effort, the graduate medical education system has not produced enough primary care physicians to meet the American population’s needs. When geographic distribution of primary care medical doctors (PCMDs) is taken into account, the problem begins to feel like a crisis. In 2018 the federal government reported 7,181 Health Professional Shortage Areas in the US and approximately 84 million people with inadequate access to primary care, with 66 percent of primary care access problems in rural areas.\”
Nurse practitioners (NPs) are already a recognized health care specialty, with additional training and autonomy beyond a registered nurse. Here's an overview:
\”In the words of the American Association of Nurse Practitioners (AANP): `All NPs must complete a master’s or doctoral degree program, and have advanced clinical training beyond their initial professional registered nurse preparation.\’ Didactic and clinical courses prepare NPs with specialized knowledge and clinical competency to practice in primary care, acute care, and long-term health care settings. NPs assess patients, order and interpret diagnostic tests, make diagnoses, and initiate and manage treatment plans. They also prescribe medications, including controlled substances, in all 50 states and DC, and 50 percent of all NPs have hospital-admitting privileges. The AANP reports that the nation’s 248,000 NPs (87 percent of whom are prepared in primary care) provide one billion patient visits yearly.
\”NPs are prepared in the major primary care specialties—family health (60.6 percent), care of adults and geriatrics (21.3 percent), pediatrics (4.6 percent), and women’s health (3.4 percent)—and provide most of the same services that physicians provide, making them a natural solution to the physician shortage. NPs can also specialize outside primary care, and one in four physician specialty practices in the US employs NPs, including psychiatry, obstetrics and gynecology, cardiology, orthopedic surgery, neurology, dermatology, and gastroenterology practices. Further, NPs are paid less than physicians for providing the same services. Medicare reimburses NPs at 85 percent the rate of physicians, and private payers pay NPs less than physicians. On average, NPs earn $105,000 annually.
\”NPs’ role in primary care dates to the mid-1960s, when a team of physicians and nurses at the University of Colorado developed the concept for a new advanced-practice nurse who would help respond to a shortage of primary care at the time. Since then, numerous studies have assessed the quality of care that NPs provide … and several policy-influencing organizations (such as the National Academy of Medicine, National Governors Association, and the Hamilton Project at the Brookings Institution) have recommended expanding the use of NPs, particularly in primary care. Even the Federal Trade Commission recognizes the role of NPs in alleviating shortages and expanding access to health care services. Most recently, the US Department of Veterans Affairs amended its regulations to permit its nearly 5,800 advanced-practice registered nurses to practice to the full extent of their education, training, and certification regardless of state-level restrictions, with some exceptions pertaining to prescribing and administering controlled substances.\”
So what's the problem? A number of states have rules limiting the services that NPs are allowed to provide. And a number of doctors support those rules, in part out of a fear that allowing NPs to do more would reduce their income or even threaten their jobs:
\”A 2012 national survey of PCMDs found that 41 percent reported working in collaborative practice with primary care nurse practitioners (PCNPs) and 77 percent agreed that NPs should practice to the full extent of their education and training. Additionally, 72.5 percent said having more NPs would improve timeliness of care, and 52 percent reported it would improve access to health services. However, about one-third of PCMDs said they believe the expanded use of PCNPs would impair the quality and effectiveness of primary care. The survey also found that 57 percent of PCMDs worried that increasing the supply of PCNPs would decrease their income, and 75 percent said they feared NPs would replace them.\”
It's a nice thing that the health care industry provides jobs for so many workers, including doctors. But the fundamental purpose of the industry is not to provide high-paying jobs: it is to provide quality care to patients in a cost-effective manner. As Buerhaus writes:
\”Drop the restrictions on PCNP scope-of-practice! These are regressive policies aimed at ensuring that doctors are not usurped by NPs, which is not a particularly worthwhile public policy concern, especially if it comes at the expense of public health. The evidence presented here suggests that scope-of-practice restrictions do not help keep patients safe. They actually decrease quality of care overall and leave many vulnerable Americans without access to primary care. It is high time these restrictions are seen for what they are: a capitulation to the interests of physicians’ associations.\”
Buerhaus also quotes a 2015 comment from the great health care economist Uwe Reinhardt, who died late last year. Reinhardt said:
\”The doctors are fighting a losing battle. The nurses are like insurgents. They are occasionally beaten back, but they’ll win in the long run. They have economics and common sense on their side.\”
In this arena, it would be nice if economics and common sense could win out a little faster.
\”Two-thirds of those released from prison in the United States will be re-arrested within three years, creating an incarceration cycle that is detrimental to individuals, families, and communities.\” So writes Jennifer L. Doleac in \”Strategies to productively reincorporate the formerly-incarcerated into communities: A review of the literature\” (posted on SSRN, July 21, 2018), Doleac\’s approach is straightforward: look at the studies. In particular, look at fairly recent studies done since 2010 that use a \”randomized controlled trial\” approach–that is, an approach where a group of participants are randomly assigned either to receive a particular program or not to receive it. When this approach is carried out effectively, comparing the \”treatment group\” and the \”control group\” provides a reasonable basis for drawing inferences about what works and what doesn\’t.
Here's a list of the interventions on which Doleac finds some fairly recent studies using randomized controlled trial approaches. Some of the studies focus on recidivism, while others look at outcomes like employment or gaining additional education.
I'll let you read Doleac's literature review for details of individual studies. But I'll just note here that this kind of list does not necessarily line up with what one expects or hopes might be true.
For example, the "bad bets" at the bottom all have their advocates. But based on the evidence, Doleac writes concerning these programs:
\”Many programs focus on increasing employment for people with criminal records, with the hope that access to a steady job will prevent reoffending. This topic has been studied more than others, and the research results are mixed. Transitional jobs programs provide temporary, subsidized jobs and soft-skills training to those trying to transition into the private sector workforce. multiple rigorous studies show that transitional jobs programs are ineffective at increasing post-program employment, and have little to no effect on recidivism. …
\”Given the array of challenges faced by people who cycle through the criminal justice system, a popular approach is to try to address many needs at once. Two evaluations of highly-respected reentry programs providing wrap-around services found little to no effect on subsequent recidivism. More recently, two large-scale evaluations of federal programs funding wrap-around services in communities across the country both found increases in recidivism for the treatment groups. … Together, these studies suggest that these multi-faceted, labor-intensive (and thus expensive) interventions may be trying to do too much and therefore do not do anything well. Since this is a popular approach in cities and counties across the country, leaders should be skeptical about the effectiveness of their current programs.\”
Conversely, here are some comments on what seems most promising, based on the actual studies. From Doleac:
\”Court-issued rehabilitation certi ficates can be presented to employers as a signal of recipients\’ rehabilitation. One study found that court-issued certi ficates increased access to employment for individuals with felony convictions. This could be because they provide valuable information to employers about work-readiness, or because employers perceive the court-issued certi ficates as protection against negligent hiring lawsuits. In either case, this strategy is promising and worth further study. The effect on recidivism is currently unknown. …
\”A large share of people who are arrested and incarcerated suffer from mental illness, and many more are hindered by emotional trauma and poor decision-making strategies. Therapy and counseling could have a meaningful impact on the successful reintegration of these individuals. Programs focused on mental health include cognitive behavioral therapy (CBT) and multisystemic therapy (MST). A growing body of evidence supports CBT as a cost-effective intervention, though the evidence on MST is more mixed and may be context-dependent. In both cases, it is unclear how much effectiveness will fall if programs are scaled up to serve more people: if they require highly-trained psychologists to conduct the sessions, the scalability will be limited. …
\”Diverting low-risk offenders to community supervision instead of incarceration appears to be highly effective. Electronic monitoring is used as an alternative to short incarceration spells in several countries, and in those contexts has reduced recidivism rates and increased economic well-being and educational attainment. Court deferrals–which allow low-risk, non-violent felony defendants to avoid a conviction if they successfully complete probation–reduce recidivism rates and increase employment. And an innovative diversion program for non-violent juvenile offenders that provides group mentoring and instruction in virtue theory was shown to reduce recidivism relative to standard diversion to community service. …
\”Many people coming out of jail or prison may benefit from government or community support, but many others might be better off if we left them alone. (This is especially likely if the programs they would be referred to are not effective.) A diverse set of high-quality studies consider the effects of reducing the intensity of community supervision. All found that reducing intensity of supervision (for example, requiring fewer meetings or check-ins with probation officers) has no impact on recidivism rates, and that it actually reduces recidivism for low-risk boys (age 15 or younger). That is, for less money, and less hassle to those who are court-supervised, we could achieve the same and even better public safety outcomes. This approach is worth exploring in a variety of contexts, and appears to be effective for high-risk as well as low-risk offenders. … At this point, there is substantial evidence, from a variety of contexts, that increasing the intensity of community supervision has no public safety benefi ts and in some cases increases recidivism. It is also more expensive. It is unclear what the optimal amount of supervision is for various types of offenders, but it\’s clearly lower than current levels. …
\”[A]nother policy that has great potential to reduce recidivism and incarceration rates is expanding DNA databases. Two studies show that those charged or convicted of felonies are dramatically less likely to reoffend when they are added to a government DNA database, due to the higher likelihood that they would get caught. Deterring recidivism in this way is extremely cost-effective, and reveals that many offenders do not need additional supports to stay out of trouble.\”
Doleac emphasizes that the evidence on many of these programs is not as strong as one might prefer, and there is certainly room for more research. But I would add that those looking to go beyond research and enact a wide-ranging alteration of policies should be considering the existing research, too.
The Bogleheads believe in Jack Bogle, who "founded Vanguard in 1974 and introduced the first index mutual fund in 1975." An index fund seeks only to mimic the average market return, and thus can do so at very low cost. In contrast, an "active" fund looks for ways to beat the market, through picking certain stocks or timing movements in the market, but also charges higher fees. A few of Bogle's comments over the years give the flavor:
\”In the field of investment management, nearly all of those experts whom we identify as stars prove to be comets. Rather than being eternal beacons of light, most managers live a transitory existence, illuminating the financial firmament for but a brief moment in time, only to flame out, their ashes drifting gently down to earth. Of course, some outstanding managers remain, but history tells us that they are the exception that proves the rule.\”
\”I don’t like the word `never\’ when it comes to the stock market.\”
\”In the fund business, you get what you don’t pay for.\”
\”Over the long run, a percentage point increase in volatility is meaningless; a percentage point increase in return is priceless.\”
\”It is investor emotions, often inexplicable for individual stocks and for the market alike, that drive the market in the short run, and sometimes for remarkably extended periods. But not forever.\”
\”We must base our asset allocation not on the probabilities of choosing the right allocation, but on the consequences of choosing the wrong allocation.\”
\”While rational expectations can tell us what will happen… they can never tell us when.:
“I built a career out of knowing what I don’t know.”
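Costs are the thread running through these quotes, and the arithmetic behind them is simple. Here is a small sketch with invented numbers (the same 7 percent return before fees for both funds, a 0.1 percent expense ratio for the index fund versus 1.0 percent for the active fund):

```python
# Invented illustration: identical pre-fee returns, different expense ratios, 30 years.
initial_investment = 10_000
gross_return = 0.07

for label, expense_ratio in [("index fund (0.1% fee)", 0.001), ("active fund (1.0% fee)", 0.010)]:
    balance = initial_investment
    for year in range(30):
        balance *= 1 + gross_return - expense_ratio  # fee drag applied every year
    print(f"{label}: ${balance:,.0f} after 30 years")
```

Under these assumptions the higher fee eats up more than a fifth of the final balance, which is the force behind "you get what you don't pay for."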
There is strong evidence that for the average investor, with no special inside knowledge, a low-cost index fund is likely to outperform attempts to beat the market, once fees are taken into account. Indeed, the legendary active investor Warren Buffett has instructions in his will that the money he is leaving to his wife should be invested in a low-cost index fund. Buffett explained a few years ago:
Most investors, of course, have not made the study of business prospects a priority in their lives. If wise, they will conclude that they do not know enough about specific businesses to predict their future earning power.
I have good news for these non-professionals: The typical investor doesn’t need this skill. In aggregate, American business has done wonderfully over time and will continue to do so (though, most assuredly, in unpredictable fits and starts). … The goal of the non-professional should not be to pick winners – neither he nor his “helpers” can do that – but should rather be to own a cross-section of businesses that in aggregate are bound to do well. A low-cost S&P 500 index fund will achieve this goal.
That’s the “what” of investing for the non-professional. The “when” is also important. The main danger is that the timid or beginning investor will enter the market at a time of extreme exuberance and then become disillusioned when paper losses occur. … The antidote to that kind of mistiming is for an investor to accumulate shares over a long period and never to sell when the news is bad and stocks are well off their highs. Following those rules, the “know-nothing” investor who both diversifies and keeps his costs minimal is virtually certain to get satisfactory results. Indeed, the unsophisticated investor who is realistic about his shortcomings is likely to obtain better long-term results than the knowledgeable professional who is blind to even a single weakness. …
My money, I should add, is where my mouth is: What I advise here is essentially identical to certain instructions I’ve laid out in my will. One bequest provides that cash will be delivered to a trustee for my wife’s benefit. … My advice to the trustee could not be more simple: Put 10% of the cash in short-term government bonds and 90% in a very low-cost S&P 500 index fund. (I suggest Vanguard’s.) I believe the trust’s long-term results from this policy will be superior to those attained by most investors – whether pension funds, institutions or individuals – who employ high-fee managers.
I think this advice boils down to: "If you aren't Warren Buffett, or at least a pale imitation of Warren Buffett, you should think seriously about being a Boglehead."
Back in the 1970s, the federal government had just recently taken on a primary role in setting and enforcing environmental laws, with a set of amendments in 1970 that greatly expanded the reach of the Clean Air Act and another set of amendments in 1972 that greatly expanded the reach of the Clean Water Act. As far back as the mid-1970s, William Nordhaus was estimating models of energy consumption that explored the lowest-cost ways of keeping CO2 concentrations low in seven different "reservoirs" of carbon: "(i) the troposphere …, (v) the short-term biosphere, (vi) the long-term biosphere, and (vii) the marine biosphere."
By the early 1990s, Nordhaus was creating what are called "Integrated Assessment Models," which have become the primary analytical tool for looking at climate change. An IAM breaks up the task of analyzing climate change into three "modules", which the Nobel committee describes in this way:
A carbon-circulation module: This describes how global CO2 emissions influence CO2 concentration in the atmosphere. It reflects basic chemistry and describes how CO2 emissions circulate between three carbon reservoirs: the atmosphere; the ocean surface and the biosphere; and the deep oceans. The module's output is a time path of atmospheric CO2 concentration.
A climate module: This describes how the atmospheric concentration of CO2 and other greenhouse gases affects the balance of energy flows to and from Earth. It reflects basic physics and describes changes in the global energy budget over time. The module's output is a time path for global temperature, the key measure of climate change.
An economic-growth module: This describes a global market economy that produces goods using capital and labour, along with energy, as inputs. One portion of this energy comes from fossil fuel, which generates CO2 emissions. This module describes how different climate policies – such as taxes or carbon credits – affect the economy and its CO2 emissions. The module's output is a time path of GDP, welfare and global CO2 emissions, as well as a time path of the damage caused by climate change.
A number of different IAMs now exist. The usefulness of the framework is that one can plug in a range of assumptions–how much energy will an economy use, how will this affect CO2 in the atmosphere, how will it affect overall climate–and develop a sense of what factors or assumptions matter most or least. These are quantitative models: that is, you can plug in a policy like a carbon tax, and then trace through its economic and environmental effects, and consider costs and benefits. Nordhaus offers a readable overview of how this work has developed here, with citations to the underlying academic references.
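For a flavor of how the pieces fit together, here is a deliberately tiny sketch of that three-module structure. It is not Nordhaus's DICE model or any published IAM; every parameter below is made up purely for illustration:

```python
import math

# Toy three-module loop: economy -> emissions -> carbon cycle -> climate -> damages.
# All parameters are invented; this is not a calibrated model.

def economy(gdp, carbon_tax):
    """Economic-growth module: GDP grows each year; a higher carbon tax lowers emissions."""
    emissions = 0.3 * gdp * (1 - min(carbon_tax / 200, 0.9))  # GtCO2 per year (illustrative)
    return gdp * 1.02, emissions

def carbon_cycle(concentration, emissions):
    """Carbon-circulation module: a share of each year's emissions stays in the atmosphere."""
    return concentration + 0.45 * emissions / 7.8  # ppm; roughly 7.8 GtCO2 per ppm

def climate(concentration):
    """Climate module: warming rises with the log of the CO2 concentration."""
    return 3.0 * math.log2(concentration / 280)  # degrees C above a 280 ppm baseline

gdp, concentration = 80.0, 410.0  # trillions of dollars, ppm (rough starting points)
for year in range(2021, 2101):
    gdp, emissions = economy(gdp, carbon_tax=50)  # change the tax path to compare policies
    concentration = carbon_cycle(concentration, emissions)
    temperature = climate(concentration)
    if year % 20 == 0:
        damages = 0.002 * temperature**2 * gdp  # made-up quadratic damage function
        print(year, round(concentration), round(temperature, 2), round(damages, 1))
```

A real IAM does the same chaining of modules, but with carefully estimated relationships in place of these placeholders.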
When I was first being indoctrinated into economics in the late 1970s, the prevailing theories of economic growth were based on the work of Robert Solow (Nobel '87). A couple of implications of Solow's model are relevant here. One is that in Solow's approach, the researcher calculated increases in inputs of labor and capital for an economy, and then figured out whether those rising inputs could plausibly explain the overall rise in economic output. In these calculations for the US economy, output was rising faster than could be explained by the growth of labor and capital, and so the additional residual amount was attributed to a change in "productivity" or "technology," which needed to be understood in the broadest sense to include not just explicit scientific inventions, but all ways of rearranging inputs to get more output.
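To make the accounting concrete, here is a minimal sketch of that growth-accounting calculation, with invented growth rates rather than actual US data, using a Cobb-Douglas production function in which capital's share of income is about one-third:

```python
# Growth accounting with Y = A * K**alpha * L**(1 - alpha). Numbers are invented
# for illustration; they are not estimates for the US economy.
alpha = 1 / 3          # capital's share of income
g_output = 0.035       # growth rate of output
g_capital = 0.040      # growth rate of the capital stock
g_labor = 0.015        # growth rate of labor input

# The residual: output growth not explained by growth in measured inputs.
g_technology = g_output - alpha * g_capital - (1 - alpha) * g_labor
print(f"residual (productivity) growth: {g_technology:.3%}")  # about 1.2% per year here
```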
This approach was clearly useful, and also clearly limited. Another economist (Moses Abramovitz) liked to say that because it measured technology as the leftover residual, unexplained by increases in labor and capital, the discussion of productivity that resulted was "a measure of our ignorance." Others sometimes referred to economic growth in this theory as "manna from heaven," falling upon the economy without much explanation. Still others said that technology in this model was a "black box," meaning that the question of how new technology was created was assumed rather than explained.
Solow and other growth theorists working with this approach did derive some predictions about rates of economic growth. For example, they argued that growth depended on rates of investment, and that economies would experience diminishing returns as their capital stock increased. Thus, a low-income country with a low level of capital stock would have higher returns from investment than a high-income country with a large existing capital stock.
But as Paul Romer noted when he began working on technology and economic growth in the 1980s, this theory of productivity growth seemed inadequate. There were many examples of low-income countries that were growing quickly, but also many examples of low-income countries growing moderately, slowly, or even negatively. Something more than capital investment seemed important here.
From the Nobel "popular science" report:
\”Romer’s biggest achievement was to open this black box and show how ideas for new goods and services – produced by new technologies – can be created in the market economy. He also demonstrated how such endogenous technological change can shape growth, and which policies are necessary for this process to work well. Romer’s contributions had a massive impact on the feld of economics. His theoretical explanation laid the foundation for research on endogenous growth and the debates generated by his country-wise growth comparisons have ignited new and vibrant empirical research. …
\”Romer believed that a market model for idea creation must allow for the fact that the production of new goods, which are based on ideas, usually has rapidly declining costs: the frst blueprint has a large fxed cost, but replication/reproduction has small marginal costs. Such a cost structure requires that frms charge a markup, i.e. setting the price above the marginal cost, so they recoup the initial fxed cost. Firms must therefore have some monopoly power, which is only possible for sufciently excludable ideas. Romer also showed that growth driven by the accumulation of ideas, unlike growth driven by the accumulation of physical capital, does not have to experience decreasing returns. In other words, ideas-driven growth can be sustained over time.\”
Romer's approach is often described as an "endogenous growth" model. The earlier Solow-style approach demonstrated the critical importance of growth in technology and productivity by showing that it was impossible to explain actual long-run macroeconomic patterns without taking them into account. A Romer-style approach then seeks to explore the determinants of growth, with an emphasis on the economic power of producing and using ideas.
Oddly enough, Nordhaus and Romer published essays on the topics that won the Nobel prize in consecutive issues of the Journal of Economic Perspectives in Fall 1993 and Winter 1994 (full disclosure: I have worked as Managing Editor of the JEP since the start of the journal in 1987). For those who want a dose of the old stuff:
The Nobel Prize in economics will be announced on Monday. Thus, it is perhaps an appropriate time to revisit this post from about four years ago.
______________________
Brian Schmidt was a co-winner of the 2011 Nobel Physics Prize "for the discovery of the accelerating expansion of the Universe through observations of distant supernovae." This is the discovery that leads physicists to infer the existence of "dark energy," which, although we have no direct way to measure or observe it, is apparently causing the expansion of the universe to speed up. At the Scientific American blog, Clara Moskowitz reports the story recently told by Schmidt about taking his Nobel medal to show his grandmother in Fargo, North Dakota, a city on the eastern edge of North Dakota, on the border with my home state of Minnesota. Fargo has a little more than 100,000 people, which makes it the largest city in North Dakota. Here's how Schmidt tells the story:
“There are a couple of bizarre things that happen. One of the things you get when you win a Nobel Prize is, well, a Nobel Prize. It’s about that big, that thick [he mimes a disk roughly the size of an Olympic medal], weighs a half a pound, and it’s made of gold.
“When I won this, my grandma, who lives in Fargo, North Dakota, wanted to see it. I was coming around so I decided I’d bring my Nobel Prize. You would think that carrying around a Nobel Prize would be uneventful, and it was uneventful, until I tried to leave Fargo with it, and went through the X-ray machine. I could see they were puzzled. It was in my laptop bag. It’s made of gold, so it absorbs all the X-rays—it’s completely black. And they had never seen anything completely black.
“They’re like, ‘Sir, there’s something in your bag.’
I said, ‘Yes, I think it’s this box.’
They said, ‘What’s in the box?’
I said, ‘a large gold medal,’ as one does.
So they opened it up and they said, ‘What’s it made out of?’
I said, ‘gold.’
And they’re like, ‘Uhhhh. Who gave this to you?’
‘The King of Sweden.’
‘Why did he give this to you?’
‘Because I helped discover the expansion rate of the universe was accelerating.’
At which point, they were beginning to lose their sense of humor. I explained to them it was a Nobel Prize, and their main question was, ‘Why were you in Fargo?’”
As the pedants among us never tire of pointing out, the so-called "Nobel Prize in economics" is not literally a "Nobel prize." It was not established by the original bequest from Alfred Nobel, but instead was first given in 1969, with the prize money provided by a grant from Sweden's central bank as part of the 300th anniversary of the founding of the bank. Thus, the award is officially "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel." (Justin Fox gives a nice brief overview of the history here.) Although I am pedantic in many matters, this doesn't happen to be one of them, so I will continue following the conventional usage in calling it the "Nobel prize in economics."
With that pedantry out of the way, here are the remarks that Friedrich Hayek delivered at the Nobel banquet in 1974, when he received the prize:
Your Majesty, Your Royal Highnesses, Ladies and Gentlemen,
Now that the Nobel Memorial Prize for economic science has been created, one can only be profoundly grateful for having been selected as one of its joint recipients, and the economists certainly have every reason for being grateful to the Swedish Riksbank for regarding their subject as worthy of this high honour.
Yet I must confess that if I had been consulted whether to establish a Nobel Prize in economics, I should have decidedly advised against it.
One reason was that I feared that such a prize, as I believe is true of the activities of some of the great scientific foundations, would tend to accentuate the swings of scientific fashion. This apprehension the selection committee has brilliantly refuted by awarding the prize to one whose views are as unfashionable as mine are.
I do not yet feel equally reassured concerning my second cause of apprehension. It is that the Nobel Prize confers on an individual an authority which in economics no man ought to possess.
This does not matter in the natural sciences. Here the influence exercised by an individual is chiefly an influence on his fellow experts; and they will soon cut him down to size if he exceeds his competence.
But the influence of the economist that mainly matters is an influence over laymen: politicians, journalists, civil servants and the public generally. There is no reason why a man who has made a distinctive contribution to economic science should be omnicompetent on all problems of society – as the press tends to treat him till in the end he may himself be persuaded to believe. One is even made to feel it a public duty to pronounce on problems to which one may not have devoted special attention. I am not sure that it is desirable to strengthen the influence of a few individual economists by such a ceremonial and eye-catching recognition of achievements, perhaps of the distant past.
I am therefore almost inclined to suggest that you require from your laureates an oath of humility, a sort of hippocratic oath, never to exceed in public pronouncements the limits of their competence.
Or you ought at least, on conferring the prize, to remind the recipient of the sage counsel of one of the great men in our subject, Alfred Marshall, who wrote: "Students of social science, must fear popular approval: Evil is with them when all men speak well of them".
Hayek is quoting a comment from Marshall that appears in "In Memoriam: Alfred Marshall," a speech given by A.C. Pigou in 1924 and published as part of the Memorials of Alfred Marshall volume in 1925 (pp. 81-90). The fuller quotation attributed to Marshall (on p. 89) is:
Students of social science, must fear popular approval: Evil is with them when all men speak well of them. If there is any set of opinions by the advocacy of which a newspaper can increase its sales, then the student who wishes to leave the world in general and his country in particular better than it would have been if he had not been born, is bound to dwell on the limitations and defects and errors, if any, in that set of opinions: and never to advocate them unconditionally even in ad hoc discussion. It is almost impossible for a student to be a true patriot and to have the reputation of being one in his own time.
The black population is not equally distributed across the United States: not equally across regions of the country, nor within metropolitan areas. This unequal distribution is in substantial part a result of historical events and policy decisions, many of them rooted in racism. As a result, policies that affect certain regions of the country more than others, or certain parts of metropolitan areas more than others, will inevitably have disparate racial effects.
Bradley L. Hardy, Trevon D. Logan, and John Parman lay out the evidence and arguments for these and related claims in "The Historical Role of Race and Policy for Regional Inequality," which appears as a chapter in Place-Based Policies for Shared Economic Growth, edited by Jay Shambaugh and Ryan Nunn (Hamilton Project at the Brookings Institution, September 2018).
Here's a map showing the share of the population that is black on a county-by-county basis across the United States. As the figure shows, most of the predominantly black counties are in the southeastern region.
If you compare this map of counties with a high share of black residents with a map showing poverty rates by county, you find considerable overlap. If you compare it with a map showing "economic mobility" rates by county, you also find high overlap.
As a measure of economic mobility, the authors cite evidence on "the mean income rank of black children growing up in a household at the 25th income percentile."
\”Under complete economic mobility, the expected income rank of the child will simply be the mean income rank for the population, the 50th percentile. If there is no mobility, the expected income rank of the child will be that of their parents, the 25th percentile in this case. … In counties with a majority black population, a black child born to parents in the 25th income percentile achieves a mean income rank of only 32, barely any movement up the income ladder, while white children from the same counties achieve a mean income rank of 43. Not only do black households tend to live in regions with low incomes, but these regions also experience lower levels of economic mobility, potentially exacerbating regional inequality from one generation to the next. … [B]lack children growing up in the 25th income percentile reach much lower rungs on the income ladder relative to white children growing up at the same income level in the same commuting zone. …\”
The black population is also distributed unequally across metro areas.
\”These black–white differences vary across regions. In the South white and black households are roughly equally likely to live in metropolitan areas: 83 percent of white individuals and 86 percent of black individuals live in metropolitan areas. However, the Northeast and Midwest regions present stark differences in the locations of white and black households. Metropolitan areas contain 96 percent of the black population in the Midwest and 99 percent of the black population in the Northeast. These shares are far lower for the white population, particularly in the Midwest where only 75 percent of the white population lives in metropolitan areas.\”
These differences in the concentration of the black population are closely linked to historical patterns. For example, the prevalence of rural counties in the South with a high share of black residents echoes the patterns set in the aftermath of slavery. The authors write:
\”As of 1880, 90 percent of the black population still lived in the South and 87 percent of the black population lived in a rural area. In contrast, only 24 percent of the white population lived in the South, and 72 percent of the white population lived in rural areas. This meant that black individuals were disproportionately affected by constraints on economic opportunity in the rural South. Over the second half of the 19th century, incomes in the South and the North diverged significantly, with average income in the South only half of the national average by 1900 ,,, The destruction caused by the Civil War and the emergence of northern manufacturing while the southern economy remained predominantly agricultural contributed to these trends. The black population therefore found itself in a region with far less economic opportunity than the rest of the nation.\”
As the authors also note, the lack of opportunity for rural southern blacks was reinforced by racism on the part of individuals, employers, social institutions, and government, which affected education, labor markets, and political participation.
This lack of opportunity stirred what is called the "Great Migration" of blacks from the American South to northern cities, a pattern that lasted into the 1960s and which helped to reduce black-white gaps in income and other areas. But when blacks arrived in northern cities, there was rising segregation by race. Some of this was the flight of white residents to suburbs outside central cities. When whites moved, the main locations of employers often moved with them.
Another factor was covenants in housing deeds that blocked houses from being sold to non-whites. For the record, such covenants were not "unenforceable until the Supreme Court's Shelley v. Kraemer decision in 1948. However, racial restrictions were often still written into deeds until it became illegal to do so in 1968 with the passage of the Fair Housing Act." It was also common to link the risk of mortgage lending to the racial composition of a neighborhood, which had long-lasting effects on property values and city housing and economic patterns.
In turn, these patterns of black-white segregation within metro areas have been reflected in other social patterns, like inner-city schools with high shares of black students that do not seem to perform very well, and inner-city areas with a high share of black residents that experience high levels of conflict with police and high rates of incarceration. As the authors write:
\”Neighborhoods with a significant share of blacks in America’s major cities have lagged white neighborhoods on key socioeconomic indicators since at least the 1970s, including earnings, poverty, educational attainment, and employment. These gaps in neighborhood amenities and neighborhood quality persist into the 2000s.\”
There are complex interactions between economic forces, social forces, individual decisions and location choices, employer decisions, intentionally racist policies, policies not necessarily rooted in racism but with disparate racial effects, and so on. The authors are suitably restrained about attempting a grand synthesis in this essay. But they are not so much building a specific model as making a broader point. As they write, "the majority of historical discriminatory policies are off the books." But the geographic location patterns of the current black population were heavily shaped over the last century-and-a-half by those policies. Moreover, those geographic patterns are closely linked to the continuing inequalities of outcomes experienced by the black population.
International tourism is counted in the official economic statistics as an export industry. We don't always think about it that way. But when, say, Chinese tourists in the US purchase goods and services, then Chinese consumers are buying goods and services produced in the United States–which is what "exports" means.
\”In 2016, 3.0 million Chinese travelers visited the U.S., an increase of 15 percent from 2015.\”
\”China was the third-largest overseas inbound travel market to the U.S. in 2016.\” Apparently, 12% of all overseas tourist visits to the US originate from the United Kingdom, 9% from Japan, and 8% from China.
\”Travel exports to China (ie: spending by Chinese visitors and students in the U.S., and on U.S. airlines) reached $33.2 billion in 2016, significantly higher than any other country. This includes $12.5 billion in education-related spending by Chinese students in the U.S.\”
\”Average spending per Chinese visitor was $6,900 in 2016, the highest of all international visitors.\” If I\’m reading the footnotes correctly, this number doesn\’t include spending on education.\”
\”Travel is the largest U.S. industry export to China, accounting for nearly 20 percent of all exports of U.S. goods and services to China.\”
As the trade conflicts between the US and China continue, what is the likelihood that China might retaliate by making it harder for Chinese tourists to reach the US? After all, China has used limitations on tourism to put pressure on South Korea, Taiwan, and others.
At least one commenter in the travel industry thinks it unlikely, for several reasons. Many Chinese firms are involved in the Chinese tourism industry, so limiting tourism would hurt them, too. China has been choosing its tariff retaliation targets with some eye to hitting states that supported the election of President Trump, but limits on Chinese tourism to the US would have the biggest effects in California, New York, Illinois, and Massachusetts–none of them Trump strongholds. Finally, cutting Chinese travel to the US would also affect a lot of Chinese firms operating in the US and world markets, as well as Chinese students at US colleges and universities, which does not appear to be a goal of China\’s government.
The papers focus on monetary and fiscal policy, and mostly don't seek to provide an even broader overview of economic evolution in these countries. But the nonspecialist reader interested in general patterns and trends in these countries will still find much of interest. For example, here's a snippet from Diego Restuccia on "The Case of Venezuela":
\”In the post-war era, Venezuela represents one of the most dramatic growth experiences in the world. Measured as real gross domestic product (GDP) per capita in international dollars, Venezuela attained levels of more than 80% of that of the US by the end of 1960. It has also experienced one of the most dramatic declines, with levels of relative real GDP per capita reaching less than 30% of that of the US nowadays. …
\”The last period, from 2006 to 2016 deserves special discussion. This is because the crisis that is unfolding is much more closely aligned with the typical crises in Latin America … that is, the link between systematic government deficits, the eventual inability to finance those deficits, and subsequent seigniorage and inflation. This is also a period in which distortions to economic activity have accumulated since the late 1990s and were drastically expanded during this period of time.
\”There are several aspects of the economic environment that are worth mentioning. First, there is extreme intervention of the public sector in economic activity through expropriation of private enterprises and government intervention of goods distribution systems. Decline in private production and the failure of expropriated enterprises have exacerbated the dependence of the economy on imports. Second, this is a period of rising debt, both internal and external, with the internal debt becoming the majority of new debt as external sources of financing have become more limited toward the end of the period. Third, there is a decline in the transparency of debt statistics, as a substantial portion of new debt is not accounted in official statistics, for instance, loans in exchange of future oil (e.g., China) and newly rising debt of the state-owned oil company (PDVSA). Fourth, there was a partial reform of the Central Bank allowing for the discretionary use of foreign reserves. Fifth, there is a changing role of PDVSA’s activities involving large transfers … for social programs; in addition, government intervention in the company’s activities has meant shrinking production capacity and cash flows. As a consequence of these characteristics, and despite one of the largest oil-price booms in recent history, the government has found it harder to obtain new loans with mounting fiscal deficits, resorting to much more substantial seigniorage. This is a period also in which real GDP per capita and labor productivity are contracting, for example, real GDP per capita is essentially the same in 2013 as in 2007, and declined between 2013 to 2016 by 30%.\”
Here's the list of papers, with abstracts:
______________________
The Case of Argentina
Francisco Buera, Sam B. Cook Professor of Economics, Department of Economics, Washington University; Juan Pablo Nicolini, Senior Research Economist, Federal Reserve Bank of Minneapolis
In this paper, we review the monetary and fiscal history of Argentina for the period 1960–2017, a time during which Argentina suffered several balance of payments crises, three hyperinflations, two defaults on government debt, and three banking crises. All told, between 1979 and 1991, after several monetary reforms, thirteen zeros had been removed from its currency. We argue that all these events are the symptom of a recurrent problem: Argentina's unsuccessful attempts to tame the fiscal deficit. An implication of our analysis is that the future economic evolution of Argentina depends greatly on its ability to develop institutions that guarantee that the government does not spend more than its genuine tax revenues over reasonable periods of time.
The Case of Bolivia
Timothy J. Kehoe, Advisor, Federal Reserve Bank of Minneapolis; Carlos Gustavo Machicado, Senior Researcher, Institute for Advanced Development Studies, Bolivia; José Peres Cajías, Professor, Economic History Department, University of Barcelona
After the economic reforms that followed the National Revolution of the 1950s, Bolivia seemed positioned for sustained growth. Indeed, it achieved unprecedented growth during 1960–1977. Mistakes in economic policies, especially the rapid accumulation of debt and a fixed exchange rate policy during the 1970s, led to a debt crisis that began in 1977. From 1977 to 1986, Bolivia lost almost all the gains in GDP per capita that it had achieved since 1960. In 1986, Bolivia started to grow again, interrupted only by the financial crisis of 1998–2002, which was the result of a drop in the availability of external financing. Bolivia has grown since 2002, but government policies since 2006 are reminiscent of the policies of the 1970s that led to the debt crisis, in particular, the accumulation of external debt and the drop in international reserves due to a fixed exchange rate.
The Case of Brazil
Márcio Garcia, Associate Professor, PUC-Rio, CNPq and FAPERJ; Coordinator, Brazil Project; João Ayres, Research Economist, Inter-American Development Bank; Diogo Guillén, Global Head Economist, Itaú Asset Management; Patrick Kehoe, Consultant, Federal Reserve Bank of Minneapolis; Frenzel Professor of International Economics, University of Minnesota
Brazil had a long period of high inflation. It peaked around 100% per year in 1964, and accelerated again in the 1970s, reaching levels above 100% on average between 1980 and 1994. This last period coincided with severe balance of payments problems and economic stagnation that followed the external debt crisis in the early 1980s. We show that the high-inflation period (1960-1994) was characterized by a combination of deficits, passive monetary policy, and constraints to debt financing. The transition to the low-inflation period (1995-2016) was characterized by improvements in all those instances, but it did not lead to significant improvements in economic growth. In addition, we document a strong correlation between inflation rates and seigniorage revenues, but observing that the underlying inflation rates are too high for the modest levels of seigniorage revenues. Finally, we discuss the role of monetary passiveness and indexation in accounting for the unique features of the inflation dynamics in Brazil in comparison to the other Latin American countries.
The Case of Chile
Rodrigo Caputo, Senior Economist, Central Bank of Chile; Diego Saravia, Manager of Economic Research, Research Department of the Central Bank of Chile
Chile has experienced deep structural changes in the last fifty years. In the 1970s a massive increase in government spending, not financed by an increase in taxes or debt, induced high and unpredictable inflation. Price stability was achieved in the early 1980s, after a fixed exchange rate regime was adopted. This regime, however, generated a sharp real exchange rate appreciation that exacerbated the external imbalances of the economy. The regime was abandoned and nominal devaluations took place. This generated the collapse of the financial system that had to be rescued by the government. There was no debt default, but in order to service the public debt, the fiscal authority had to generate surpluses. Since 1990, this was a systematic policy followed by almost all administrations and helped achieve two different, but related, goals. It contributed to reducing the fiscal debt and enabled the Central Bank to pursue an independent monetary policy aimed at reducing inflation.
The Case of Colombia
David Perez-Reyna, Assistant Professor, Department of Economics of Universidad de los Andes, Colombia; Daniel Osorio-Rodríguez, Junior Researcher, Monetary and International Investment Division, Banco de la Republica (the Central Bank of Colombia)
In this paper we characterize the joint history of monetary and fiscal policies in Colombia since 1960. We divide our analysis into three periods, which are differentiated by the finance structure of the fiscal deficit, the institutional framework of monetary and fiscal policies, and the levels of inflation: 1960-1970, when both inflation and the fiscal deficit were low on average; 1971-1990, when both inflation and the fiscal deficit increased; and 1991-2017, when despite the highest average fiscal deficit and the worst recession of the century, inflation kept a downward trend in the context of a newly independent Central Bank and increasingly flexible exchange markets. The first two periods were characterized by fiscal dominance, with larger fiscal deficits leading to increased inflation in the context of a nonindependent monetary policy. After 1991, the Constitution enshrined monetary dominance via an independent Central Bank. We observe that although large fiscal deficits, macroeconomic swings and monetary imbalances were rare in Colombia, average economic growth was comparable to other Latin American countries that experienced higher macroeconomic volatility.
The Case of Ecuador
Simón Cueva, Regional Academic Director, Laureate Latin America; Julían P. Díaz, Assistant Professor, Quinlan School of Business, Loyola University Chicago
We document the main patterns in Ecuador's fiscal and monetary policy during the 1950-2015 period, and conduct a government's budget constraint accounting exercise to quantify the sources of deficit financing. We find that, prior to the oil boom of the 1970s, the size of the government and its financing needs were small, and the economy exhibited high growth rates and low inflation. The oil boom led to a massive increase in government spending. The oil prices crash of the early 1980s was not accompanied by any substantial fiscal correction, and the government considerably relied on seigniorage as a source of revenue. This coincided with almost three decades of high inflation rates and stagnant output. The dollarization regime, implemented in 2000, removed the ability of the government to resort to seigniorage to cover its imbalances. Indeed, in spite of large deficits registered since 2007, inflation has remained at historically low levels. However, the recent policies of inflated spending and the heavy borrowing needed to finance it remind those that led to the collapse of the economy during the 1980s and 1990s, and generate concerns regarding the long-term sustainability of the dollarization regime, and of the benefits it has provided.
The Case of Mexico
Felipe Meza, Researcher, Centro de Analisis e Investigacion Economica (CAIE); Professor of Economics, Instituto Tecnologico Autonomo de Mexico (ITAM)
The objective of this paper is to analyze the monetary and fiscal history of Mexico using a model of the consolidated budget constraint of the Mexican government as the framework. I assume a small open economy in which the government exports oil. I study the period 1960-2016. I evaluate the ability of the model to explain the crises of 1982 and 1994, and while the model can explain the 1982 debt crisis, it cannot explain the 1994 crisis. A constitutional change in the relationship between the federal government and Banco de México, and policy choices made in the aftermath of the 1994 crisis, are consistent with a transition from fiscal dominance to an independent Central Bank. Inflation fell persistently after 1995, reaching values of 3% per year in mid-2016. That number is the target of the Central Bank. After a long transition following the 1982 crisis, Mexico succeeded in controlling inflation. I discuss forces that reduced inflation over time: a long sequence of primary surpluses, the constitutional change that gave independence and a goal to the Central Bank, and the current inflation targeting regime. On the fiscal side, I observe a change in the downward trend of the total debt-to-GDP ratio, as it fell from the 1980s to 2009, the year in which it started growing persistently until 2016.
The Case of Paraguay
Javier Charotti, Researcher, Central Bank of Paraguay; Carlos Fernández Valdovinos, President, Central Bank of Paraguay; Felipe Gonzalez Soley, Researcher, Central Bank of Paraguay
In this paper we analyze the monetary and fiscal history of Paraguay between 1960 and 2016. The analysis is divided into four periods: Golden years and large external shocks (1962-1980), Fiscal imbalances and nominal instability (1981-1990), Deregulation and financial crisis (1991-2003), and finally, the Period of structural reforms (2004-2016). We observe that the monetary and fiscal policy maintained a conservative stance relative to other Latin American countries with some episodes of fiscal or monetary imbalances. These were a consequence of different factors depending on the period of analysis, among which we can quote: reform of the legal framework of the Central Bank, stabilization plans, credit market, and structural reforms. Finally, compared to most countries in Latin America, Paraguay has not experienced large macroeconomic imbalances, but remains among the countries with the lowest income per capita levels.
The Case of Peru
Cesar Martinelli, Professor of Economics, George Mason University; Marco Vega, Deputy Manager of Economic Research, Economic Studies Depart., Central Reserve Bank of Peru; Professor, Pontificia Universidad Católica del Perú
We show that Peru's chronic inflation through the 1970s and 1980s was a result of the need for inflationary taxation in a regime of fiscal dominance of monetary policy. Hyperinflation occurred when further debt accumulation became unavailable, and a populist administration engaged in a counterproductive policy of price controls and loose credit. We interpret the fiscal difficulties preceding the stabilization as a process of social learning to live within the realities of fiscal budget balance. The credibility of policy regime change in the 1990s may be linked ultimately to the change in public opinion, which gave proper incentives to politicians, after the traumatic consequences of the hyper stagflation of 1987-1990.
The Case of Uruguay
Gabriel Oddone, Economic Historian, Universidad de la Republica, Uruguay; Joaquín Marandino, Researcher, Universidad Torcuato Di Tella, Argentina
This paper analyzes the monetary and fiscal history of Uruguay between 1960 and 2017. The aim is to explore the links between unfavorable fiscal and monetary policies, nominal instability, and macroeconomic performance. The 1960s is characterized by high inflation and sustained large deficits, and a large banking crisis in 1965. Since the mid-1970s, the government liberalized the economy and attempted to stop the money financing of deficits that prevailed in the previous decade. During the transition to a more open economy, Uruguay encountered two major crises in 1982 and 2002: the former was very costly in fiscal terms and brought back the monetization of deficits, while the latter had significantly lower effects on deficit and inflation. The evidence collected suggests governments have slowly understood the importance of fiscal constraints to guarantee nominal stability.
The Case of Venezuela
Diego Restuccia, Professor of Economics, University of Toronto; Research Associate, National Bureau of Economic Research (NBER)
I document the salient features of monetary and fiscal outcomes for the Venezuelan economy during the 1960 to 2016 period. Using the consolidated government budget accounting framework of Chapter 2, I assess the importance of fiscal balance, seigniorage, and growth in accounting for the evolution of debt ratios. I find that extraordinary transfers, mostly associated with unprofitable public enterprises, and not central government primary deficits, account for the increase in financing needs in recent decades. Seigniorage has been a consistent source of financing of deficits and transfers—especially in the last decade—with increases in debt ratios being important in some periods.