One tension of modern life is that we love new technology when it makes our lives easier, more fun, safer, or healthier, but we hate new technology when certain familiar skills, accustomed habits, favorite consumables, and even our jobs become outdated–and we are forced to change. So how do Americans view science overall? Here are a couple of figures from The State of U.S. Science and Engineering 2020, by Beethika Khan, Carol Robbins, and Abigail Okrent, just published by the National Science Foundation and the National Science Board (January 2020).
This figure shows responses to four questions. There is 80+% agreement that government should fund basic research, and 90% agreement that science generates opportunities for the next generation, and both sentiments seem to have risen a bit in the last 20 years. On the other side, about 50% of Americans say that “science makes life change too fast” and less than 50% “have a great deal of confidence in the scientific community.”
One might sum this up by saying that Americans recognize the importance of science in the abstract, but when it comes down to their own daily lives or to actual scientists, they are more skeptical.
Another figure breaks down responses to these questions by education level. The results are mostly predictable: that is, more education generally makes you more likely to believe science is important, to support government funding for science, and to express confidence in scientists.
But it’s interesting to note that on the question of whether “science makes life change too fast,” skepticism among those with graduate or professional degrees is higher than among those with some college or a bachelor’s degree.
I have my days and times, like everyone else, when new technology adds stress to my life. I feel a lot of sympathy for people whose employers or jobs are eliminated with the arrival of new technology. But from a broad social point of view, embracing the new opportunities of science and technology is extraordinarily important. Desirable outcomes like better-paid jobs, effective health care at moderate cost, widespread access to education and training, and environmental protection are much more likely to be achieved if science and technology keep moving forward in useful directions. Also, many other nations and workforces around the rest of the world want science and technology to make life change even faster, and they are moving in that direction.
Of course, there’s nothing sacred about looking at average levels over the past 50 years of government spending, taxes, and deficits. But it does at least offer some historical perspective. The CBO report, “The Budget and Economic Outlook: 2020 to 2030” (January 2020) gives a look at where the federal budget has been, and a defensible if probably understated look at where it’s currently headed.
To me, one of the striking facts about the federal budget over the last half-century that you wouldn’t necessarily guess from the political rhetoric is that it has been so stable. The figure shows federal taxes and spending as a share of GDP since 1970, along with projections out to 2030.
Federal taxes have averaged 17.4% of GDP during that time, and looking at the last 50 years, it’s been fairly close to that level most of the time. It surges higher in the late 1970s, when inflation was pushing people into higher tax brackets (because tax brackets weren’t yet adjusted for inflation). It surges higher in the late 1990s, with the boost from higher stock prices and capital gains in the dot-com economy. Taxes drop in the aftermath of the dot-com bust in the early 2000s, and drop when the economy slows in the Great Recession. Still, given all the economic and political changes from 1970 to 2020, and all the complaints about how federal taxes are becoming increasingly burdensome, it’s striking to me that the federal tax take tended to grow over the long term pretty much in sync with the rest of the economy.
There is one discordant note for the stability of federal tax revenues over time, and it relates to the current tax take and the predictions for the next few years. The usual pattern is that federal tax revenues rise and are above-average during good economic times when incomes are growing, while federal tax revenues fall and are below-average during recessions when the economy shrinks. But as of early 2020, the last recession ended more than a decade ago and unemployment rates are at their lowest in 50 years. The CBO projections use a default prediction of a steady-state economy for the next few years. Thus, one would expect federal taxes to be higher than the long-run average now and in the next few years, not below.
On the spending side, the continuity is again striking to me, given the complaints one hears about how federal spending has either exploded in size or been slashed into austerity. The half-century average is 20.4% of GDP. Spending is higher in big recessions, like the early 1980s or the Great Recession, and it’s lower during times of economic booms, like the 1990s. But given all the shifts in federal spending from 1970 to 2020–say, the fall in defense spending as a share of GDP, the creation of Medicare, and now the swelling retirements of the “baby boom” generation–the overall stability is remarkable.
Again, there is a mildly discordant note in the present. The current economy is healthy enough that it would seem more plausible for federal spending to be below its long-run average, now and in the next few years, but instead it is already a little above its long-run average, and headed higher.
Putting together taxes and spending leads to budget deficits and surpluses. As the figure shows, the average US budget deficit over the last 50 years has been 3.0% of GDP. The projected deficits for the next decade average 4.8% of GDP–and remember, this is based on the assumption that the economy continues to grow without a recession and with low unemployment (specifically, rising only to 4.4% later in the 2020s) continuing through the decade.
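Since averaging is linear, the half-century figures quoted here fit together as a simple accounting identity, which can be checked directly (the values are the shares of GDP quoted from the CBO report):

```python
# Deficit = spending - revenue, so the same identity holds for the
# half-century averages (averaging is linear). Values are the percent-of-GDP
# figures quoted above from the CBO report.
avg_spending = 20.4  # federal spending, percent of GDP, ~50-year average
avg_taxes = 17.4     # federal taxes, percent of GDP, ~50-year average
avg_deficit = avg_spending - avg_taxes
print(f"average deficit: {avg_deficit:.1f}% of GDP")
```

The result matches the 3.0% average deficit reported in the figure.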
The CBO is constrained by law to make its predictions under the assumption that current legislation will take effect on time and on schedule, which means that its tax revenue estimates are prone to overstatement while its spending numbers are prone to understatement. For example, a big chunk of the predicted rise in tax revenues during the 2020s comes from parts of the 2017 tax cut expiring in 2025 under current law. The law was designed that way, so that the long-term projected revenue loss would be limited. But whether those tax cuts are actually allowed to expire is a political decision, and I’m already hearing some noises about extending the tax cuts further.
On the spending side, the main drivers of higher spending from 2020 to 2030 are rising costs for Social Security and for the major federal health care programs–Medicare, Medicaid, tax credits for health insurance, and the Children’s Health Insurance Program. These programs are already large, are spending more as the boomer generation retires, and are projected to keep rising.
The other big area of rising federal spending is interest payments on past government borrowing. This graph shows the pattern of federal budget surpluses and deficits (as earlier) with the solid black line, but now the deficit/surplus is divided into the “primary” deficit, which doesn’t include interest payments, and the interest payments are shown separately.
Net interest payments were 1.3% of GDP back in 2010, 1.7% of GDP in 2020, and projected at 2.6% of GDP in 2030. To put this in some perspective, the projected spending on interest payments in 2030 will be twice as much as the projected revenue from the corporate income tax in 2030, and slightly higher than projected Medicaid spending in 2030.
Looking at the projected deficits over time leads to projections for the accumulated federal debt. This graph extends to 2050. In 10 years, the projected federal debt (under the probably conservative CBO assumptions) is 98% of GDP, roughly similar to the previous debt-to-GDP high at the end of World War II. The projected federal debt keeps rising after that, in part because higher interest payments feed into higher debt, which in turn leads to still-higher interest payments.
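The feedback loop here (debt raises interest payments, which raise debt) can be sketched with the standard debt-dynamics recursion: next year's debt-to-GDP ratio equals this year's ratio scaled by (1 + interest rate)/(1 + nominal growth rate), plus the primary deficit. The parameter values below are round illustrative numbers, not the CBO's actual assumptions:

```python
# Illustrative sketch of the debt feedback loop described above:
#   d[t+1] = d[t] * (1 + r) / (1 + g) + primary_deficit
# where d is the debt-to-GDP ratio, r the nominal interest rate on debt,
# and g the nominal GDP growth rate. Parameters are round numbers chosen
# for illustration, not the CBO's assumptions.

def debt_path(d0, r, g, primary_deficit, years):
    """Project the debt-to-GDP ratio forward `years` years."""
    path = [d0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g) + primary_deficit)
    return path

# Start at 98% of GDP, with a 3% interest rate, 4% nominal GDP growth,
# and a primary deficit of 2.2% of GDP each year.
path = debt_path(0.98, 0.03, 0.04, 0.022, 20)
print(f"debt/GDP after 20 years: {path[-1]:.0%}")
```

Even with growth outpacing the interest rate, a persistent primary deficit pushes the ratio steadily upward in this sketch, which is the qualitative pattern in the CBO projections.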
There’s no hard-and-fast rule as to when government debt becomes too heavy a burden or too severe a risk. The mighty US economy isn’t Greece or Argentina, but neither is it invulnerable to the tradeoffs of needing to commit ever-greater resources to paying interest on past debt, or the risk that government borrowing may grow large enough that private-sector borrowing is crowded out, or that at some point in the next few decades, investing in US debt may look more risky than it appears today. At a minimum, it’s undeniably true that annual US deficits as a share of GDP are projected to be well above their historical levels in the next decade, and the accumulation of US debt as a share of GDP, based on current law, is headed for uncharted territory.
Every other year, the National Science Foundation is required to publish a “Science and Engineering Indicators” report. For the January 2020 version, the NSF went with a summary report, called “The State of U.S. Science & Engineering 2020,” which is accompanied by seven more detailed reports. Here, I’ll focus on some evidence from the overview report on how the US science and engineering workforce depends on immigrants, both as workers and as students in its higher education system.
Foreign-born workers—ranging from long-term U.S. residents with strong roots in the United States to more recent immigrants—account for 30% of workers in S&E occupations. The number and proportion of the S&E workforce that are foreign born has grown. In many of the broad S&E occupational categories, the higher the degree level, the greater the proportion of the workforce who are foreign born. More than one-half of doctorate holders in engineering and in computer science and mathematics occupations are foreign born (Figure 9).
Within the US higher education system, a disproportionate number of the science and engineering degrees go to immigrants–many of whom then remain in the US economy at least for a time:
In the United States, a substantial proportion of S&E doctoral degrees are conferred to international students with temporary visas. In 2017, temporary visa holders earned one-third (34%) of S&E doctoral degrees, a relatively stable proportion over time. They account for half or more of the doctoral degrees awarded in engineering, mathematics and computer sciences, and economics. Three Asian countries—China, India, and South Korea—are the largest source countries and accounted for just over half (54%) of all international recipients of U.S. S&E research doctoral degrees since 2000. By comparison, students on temporary visas earn a smaller share (6% in 2017) of S&E bachelor’s degrees. However, the number of these students has more than doubled over the past 10 years.
A majority of the S&E doctorate recipients with temporary visas—ranging between 64% and 71% between 2003 and 2017—stayed in the United States five years after obtaining their degree. Those from China and India, however, saw a decline in their respective “stay rates” from 93% and 90%, respectively, in 2003 to 84% and 85%, respectively, in 2013; the rates remained stable from 2013 through 2017. The stay rate increased for those from South Korea (from 36% in 2003 to 57% in 2017). Stay rates also vary by field of doctoral degree. Among S&E doctorate recipients, social sciences (52%) has a lower stay rate than the average across all fields (71% in 2017).
I know that we live in times of substantial concern about how technology may be transferred from the United States to other countries. Whatever one’s concerns in that area, there is also another side to consider: in raw numerical terms, the US has a heavy dependence on imported science and engineering workers, and American universities in science and engineering fields maintain their status and preeminence thanks in substantial part to their foreign students.
The NSF report also points out that when it comes to sheer numbers of science and engineering degrees, the dominant position of the US and European economies is eroding. For example, here are the trends in the number of university degrees given in science and engineering. That rising purple line is the total for China, taking off. Concerns are sometimes raised that a number of these degrees in China may not represent especially high quality achievement in learning. But the sheer numbers are nonetheless striking–and the quality does seem to be improving over time.
A similar if less vivid trend is apparent in the granting of doctoral degrees in science and engineering fields. Here, the large EU economies have a lead compared to the US, with China and India rising rapidly.
The economic competitiveness of nations is built to a substantial extent on the talents of their workforce. It seems likely that future economic growth for the US and other advanced economies will depend heavily on the industries built on advances in science and engineering. One of the great competitive advantages for the US economy has been that its education system, economy, and society are attractive and open to so many workers and students from around the world with skills in these areas.
Back in September 2017, Amazon announced that it was planning to build two new headquarters, and solicited proposals from cities. After due consideration of several hundred proposals, Amazon announced that the two winning locations just happened to be an easy commute from two homes of Amazon CEO Jeff Bezos: in Washington, DC, and New York City. It’s easy to be cynical about Amazon’s process, and I have been. But here’s an awkward question: What if, in a pure business sense, Amazon made the right decision?
After all, we know that technology companies like to have a large available pool of local talent, and New York City and Washington, DC, rank at the top of all metro areas in terms of total number of tech workers. In fact, when you look at the patterns of the US economy in the last couple of decades, there is a growing concentration of tech companies in a few cities. Here’s an explanation from Enrico Moretti:
I have just finished a new project where I study how locating in a high-tech cluster improves the productivity and creativity of inventors. If you look at the major fields — computer science, semiconductor, biology, and chemistry — you see a concentration of inventors that is staggering. In computer science, the top 10 cities account for 70 percent of all the innovation, as measured by patents. For semiconductors, it’s 79 percent. For biology and chemistry, it’s 59 percent. This means that the top 10 cities generate the vast majority of innovation in each field. Importantly, the share of the top 10 cities has been increasing since 1971, indicating increased agglomeration. …
Companies in industries that are very advanced and very specialized find it difficult to locate in areas where they would be isolated. Nobody wants to be the first to move to a city because they’re going to have a hard time in finding the right type of specialized workers. And it’s hard for workers with specialized skills to be first because they’re going to have a hard time finding the right job. It’s an equilibrium in which areas that have a large share of innovative employers and highly specialized workers tend to attract more of both. It is difficult for areas that don’t have a large share of innovative employers and highly specialized workers to jump-start that process. Ultimately, that is what generates the divergence across cities. …
This pattern also helps to explain several of the main political and economic divisions of our time. The economies of US regions are no longer converging as they once did. In political terms, one of the most salient divisions of our time is between those who live and work in those cities where new technology companies and economic growth are concentrated, who often see the global economy as a place of opportunity, and those who live in the other cities where tech companies like Amazon and others are choosing not to locate, and who often see the global economy as a threat.
The obvious policy prescription here looks something like this: Boost national research and development spending substantially: after all, a variety of estimates suggest that the social return from more R&D spending is 60%, or that the US should be aiming over time to double its R&D spending. However, when determining the locations for that R&D spending, hold a version of Amazon’s contest for a new headquarters. The ultimate goal is to increase dramatically the number of US cities that are hubs for the development and commercialization of new technology.
The lesson from the [Amazon] HQ2 bidding experience is clear. Established big tech companies, left to their own devices, will bring a significant number of good jobs–along with congestion, high house prices, and perhaps even more inequality–to a small set of already successful cities, most of which are located on the East or West Coast. This book is for everyone else.
To govern the process so that politicians don’t just hand out the money to all their favorite constituencies, they propose a Technology Hub Index System (THIS). They focus on cities with a population of at least 100,000 workers aged 25-64, where the college-educated share of such workers is at least 25%, where the mean home price is less than $265,000, and where the average commute is less than 30 minutes. They would also include measures of patents per worker in the area, as well as whether the area already has highly ranked graduate school programs in science and tech areas. The top city in these rankings is Rochester, New York.
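As a minimal sketch, the four screening thresholds could be applied like this (the `Metro` fields and the example city values are hypothetical illustrations, not actual statistics for these metros):

```python
# A minimal sketch of a THIS-style eligibility screen. The four thresholds
# are the ones quoted in the text; the Metro fields and example values are
# hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Metro:
    name: str
    prime_age_workers: int      # workers aged 25-64
    college_share: float        # college-educated share of those workers
    mean_home_price: float      # dollars
    mean_commute_minutes: float

def eligible(m: Metro) -> bool:
    """Apply the four screening thresholds from the proposal."""
    return (m.prime_age_workers >= 100_000
            and m.college_share >= 0.25
            and m.mean_home_price < 265_000
            and m.mean_commute_minutes < 30)

# Hypothetical illustrative values, not actual statistics:
print(eligible(Metro("Rochester, NY", 400_000, 0.35, 180_000, 22)))      # True
print(eligible(Metro("San Francisco", 2_500_000, 0.55, 1_200_000, 34)))  # False
```

The point of the screen is visible in the second example: an already-successful coastal hub fails the home-price and commute tests, which is exactly how the proposal steers funds away from the existing superstar cities.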
Their idea is to combine the THIS index with an application process, where metro areas could apply to receive R&D funds and other kinds of assistance. The metro areas would need to put together a package of how they would spur business growth in their area, including not just technology support but also support for local education, land, and infrastructure. Decisions about who gets the money would be made by an Innovation Committee, structured after the Base Realignment and Closure Commission process that has led to the closure of over 350 military installations. The commission recommends a list of winners, and if the president approves the list, it goes into effect–unless the entire list is rejected by Congress. The idea here is to recognize that any plan like this needs political approval, but to limit the ability of politicians to tinker and micromanage.
Similar ideas have come up in other forums. For example, on January 29 the Brookings Institution held a conference on the subject “Boosting growth across more of America: Pushing back against the winner-take-most economy” (audio and transcript available here). In the lead-up to the conference, Mark Muro and Andre Perry wrote:
Can the United States truly prosper when 90% of its R&D- and STEM-intensive “innovation sector” employment growth takes place in just five “superstar” tech hubs? More voices are beginning to doubt it. … [T]ens of millions of Americans are seriously disadvantaged in job opportunities, income mobility, and health and happiness levels simply by virtue of living in a place other than a ‘superstar’ hub.
The political prospects for this kind of proposal may not be very robust. Although everyone talks a good game about the importance of technology and science to the future of the US economy, US R&D spending as a share of GDP hasn’t budged much for decades. Even if we could agree on how to attain a big boost in R&D spending (presumably through some combination of direct spending and indirect incentives), political representatives of the already tech-successful cities would be unlikely to support focusing that additional funding on other geographic areas. After all, they are more likely to see new tech hubs as potential competitors in a zero-sum game, rather than as ways of deepening the national pool of ideas and talent in a way that will redound to their benefit.
But for the US economy and polity as a whole, policies that set the stage for tech-successful cities to be distributed across the country–something that is just not happening now–may be worthwhile. The goal is to trigger the benefits of agglomeration economies in areas where the costs of agglomeration (congestion, high housing prices, crime) are lower.
In thinking about the state of the economy, it could be useful to know if the number of new business start-ups is trending up or down. Traditionally, the data on business start-ups was only available after a considerable time lag. But the Bureau of the Census is now publishing Business Formation Statistics, which gives quarterly estimates of the patterns of new business start-ups. For example, the BFS report for Fourth Quarter 2019 was published earlier this month on January 15.
The idea here is to get data on new business formation by using data on the number of employers who request “employer identification numbers” from the US Census Bureau–which a new business needs for government reporting on taxes and for other reasons. Thus, it provides a forward-looking view of how many firms are likely to be hiring in the near future.
One challenge is that the underlying micro administrative data on businesses are typically not sufficiently timely to generate economic indicators on the current health of the economy. The Business Formation Statistics (BFS) are an important exception; I helped develop them with collaborators from the research community, the Federal Reserve Board, and the Census Bureau. BFS are based on the real-time flow of applications for new employer identification numbers that the Census Bureau receives on an ongoing basis. The potential of the BFS, illustrating new applications that have a high propensity for becoming employer businesses, is shown in Figure 3. This data series, along with other new statistical measures, is now released within a couple of weeks of the end of the most recent quarter at national and state levels. More disaggregated series at sub-state and sector levels can also be constructed. Figure 3 shows that … [h]igh-propensity applications for new businesses in 2019:2 were still 6 percent below the pre-Great Recession levels.
As the figure illustrates, the number of applications by businesses for employer identification numbers plummeted during the Great Recession, and even after a decade without a recession has not yet bounced back to its earlier level. For some earlier posts with other evidence and discussion about this apparent decline in US entrepreneurship, two starting points are:
Concerns over how robots and automation can lead to loss of jobs are not new. For example, here’s a cartoon used to illustrate a New York Times article by Louis Stark back in 1940.
Stark’s article was called “Does Machine Displace Men in the Long Run? New Studies Cited as Old Argument is Renewed over Significance of Technological Unemployment” (February 25, 1940). Here are a few thoughts stimulated by the article.
1) It’s interesting to note that 80 years ago, the controversy about how thinking robots, as shown in the cartoon, might lead to long-term job loss was described in the headline as an “old argument.”
2) The article describes studies about jobs lost to automation in the preceding decades. Here’s one example from Stark:
The cigar industry is a striking example of mechanization on productivity and employment, for the four-operator cigar machine reduced the amount of labor by 62 per cent as compared with the hand process. In terms of production costs, the reduction in labor time represented a difference in favor of the machine process of at least $3 per thousand of cigars. In 1921 the industry employed 112,000 wage earners … By 1935 it was estimated that 44,000 hand workers had been severed from the industry due to use of the long-filler cigar machines alone and that concurrently jobs had been provided on the machine for 17,000 new workers, mostly unskilled.
3) These shifts were widespread enough that the article quotes someone named Corrington Gill, an assistant administrator at the Works Progress Administration, to the effect that “there is little likelihood that more jobs will be available in manufacturing unless there was a substantial gain in production or a further decrease in working hours.” Stark added: “Therefore, he [Gill] felt, manufacturing industries were unlikely to serve as a reservoir of jobs for the growing labor supply.”
In the short term, Gill was incorrect, because wartime production boosted the number of manufacturing jobs substantially. But in the long run, he was more correct than perhaps he could have imagined. The blue line in the figure shows the total number of US manufacturing jobs since 1940. The red line shows total US employment. In short, Corrington Gill was quite correct in his supposition back in 1940 that manufacturing jobs were not going to absorb future growth of the labor supply over time.
4) The preceding figure is perhaps unfair. The scale needed on the vertical axis to show total jobs over time tends to minimize the rise and fall of manufacturing jobs more specifically. Here’s a figure just showing total manufacturing jobs.
You can see the sharp rise in manufacturing jobs during World War II, and some additional rise in the 1960s and 1970s. But in a big picture sense, the total number of manufacturing jobs doesn\’t change much from the 1950s up through about 2000, despite all the other dramatic shifts in the US economy during those five decades.
5) At the end of Stark’s article, he discusses the short-run stresses of long-run changes: “Those who maintain that there is a real ‘technological unemployment’ may agree that there is re-employment ‘in the long run,’ but that the displaced wage-earners must eat and care for their families ‘in the short run.’” Writing in 1940, Stark describes a list of policies aimed at smoothing the transition:
Suggested remedies revolve around the problem of how to provide steady income for displaced wage-earners, rather than steady work. Among these proposed remedies are unemployment insurance, old-age pensions, public works programs, minimum-wage and maximum-hour laws, dismissal compensation and proper timing of technological improvements. A number of these are already in effect.
My sense is that no plausible safety net will eliminate the trauma of how technological change can affect workers, jobs, and labor markets. On the other side, technological change is also what leads to higher wages, new products, and a higher standard of living. We are further along in trying to spread the benefits and cushion the costs of technological change in 2020 than in 1940, but we remain bedeviled by the same basic trade-offs.
Sometimes the mystery is why something did not happen. The classic statement of this description is in the Arthur Conan Doyle story “Silver Blaze,” in which Inspector Gregory asks Sherlock Holmes:
“Is there any point to which you would wish to draw my attention?” “To the curious incident of the dog in the night-time.” “The dog did nothing in the night-time.” “That was the curious incident,” remarked Sherlock Holmes.
When it comes to the US inflation rate, the mystery is that it has moved so little for the last 25 years. The blue line shows the US inflation rate as measured by the Personal Consumption Expenditures price index, leaving out energy and food prices on the grounds that they can add short-term volatility that obscures the underlying pattern. If this seems like an odd measure of price inflation–perhaps because the Consumer Price Index is a better-known measure of inflation–I’ll just say that it’s the measure on which the Federal Reserve focuses (for reasons explained here). The red line shows the unemployment rate.
Back in the late 1980s and early 1990s, the inflation rate reached the range of 4-5%. There was a recession (shaded bar) in 1990-91, and you can see the unemployment rate rise while the inflation rate falls. This pattern is standard intro econ wisdom, going under the name of the “Phillips curve”: a tradeoff is expected, at least over the short run of a few years, because a slowed-down recessionary economy will tend to have more unemployment but less inflationary pressure, while an economy in an upswing of economic growth will tend to have lower unemployment but greater inflationary pressures.
But since about 1995, you can see that the inflation rate has moved in a narrow range, rarely venturing out of the range of 1-2% per year, and then by only small amounts. Inflation stays in this range during the late 1990s, as the unemployment rate falls during the dot-com boom; and after the recession of 2001, when the unemployment rate rises; and in the housing boom of the early 2000s, when unemployment is falling; and during the Great Recession of 2007-2009, when unemployment shoots up; and in the decade since then, as the unemployment rate has dropped. Why hasn’t the inflation dog barked?
For a readable quick overview of the issues, the Hutchins Center at Brookings has published “What’s (Not) Up With Inflation?” (January 2020). The report starts with a short essay by Janet Yellen laying out the issues, followed by an essay by Sage Belz and David Wessel listing possible explanations for why inflation has seemed stuck in place (with references and links to recent supporting literature). Here are some possibilities for why inflation is stuck:
Inflation expectations. “Professional forecasters and financial market analysts today generally believe price inflation will run at the Fed’s 2 percent target over the medium run. As a result, businesses may not respond as much as in the past to changes in economic conditions, anticipating whatever movements in inflation that might occur will dissipate quickly.”
Monetary policy. The Federal Reserve has targeted an inflation rate of 2%, so inflation staying near that range just shows that monetary policy is working.
Changes in the labor market. Workers may be less likely to receive sizable wage increases even when economic conditions are good. Possible reasons include the “weakening power of unions in the private sector” and the possibility that “increased global competition may have suppressed wage growth, reducing workers’ abilities to negotiate for higher wages.”
Trade and global value chains. “[D]omestic producers may be keeping prices low because they compete with foreign firms.”
Technology and online competition. There is discussion sometimes of an “Amazon effect,” where increased online purchasing puts pressure on all sellers to keep prices low.
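The anchored-expectations explanation can be sketched with a simple expectations-augmented Phillips curve. This is a toy illustration with made-up parameter values (`kappa`, `u_natural`), not a calibrated model: when expected inflation is pinned at the Fed's 2% target and the curve is flat, even large unemployment swings barely move inflation.

```python
# A toy expectations-augmented Phillips curve (illustrative parameters,
# not a calibrated model): inflation = expected inflation + kappa * (u* - u).
# With expectations anchored at the Fed's 2% target and a flat slope kappa,
# large swings in unemployment translate into small swings in inflation.

def phillips_inflation(u, pi_expected=2.0, u_natural=4.5, kappa=0.1):
    """Inflation rate (percent) implied by unemployment rate u (percent)."""
    return pi_expected + kappa * (u_natural - u)

# Unemployment ranging from a Great Recession-style peak to a 2019-style low:
for u in (10.0, 4.5, 3.5):
    print(f"u = {u:4.1f}%  ->  inflation = {phillips_inflation(u):.2f}%")
```

In this sketch, unemployment swinging from 10% down to 3.5% moves inflation only from about 1.5% to about 2.1%, roughly the narrow band visible in the figure.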
The inflation puzzle raises other questions, too.
Is it possible that inflation is not being well-measured in certain areas–like a rise in health care costs–so that it is actually more variable than the graph above suggests?
Is it possible that the past tradeoff between lower unemployment and higher inflation is less visible in national data, but still visible in state or metropolitan-area data?
Does the low inflation rate mean that the US economy can run large budget deficits or keep interest rates very low without fear of future inflation?
Can we count on inflation remaining low in the future, or could high inflation return with a rush?
Instead of focusing on inflation, should policymakers instead focus on the possibility of future asset-price boom-and-bust cycles, like the experiences with dot-com stocks in the late 1990s or housing markets in the early 2000s, and on how problems in financial markets might disrupt the economy?
Inflation is one of the basic outcomes of macroeconomic models, along with economic growth and labor market outcomes like unemployment rates. It is thus a little jarring to watch a recent Fed chair like Janet Yellen forthrightly accept that we don't really understand what has been driving inflation these last 25 years. But the admission is an honest one, and an invitation to consider the puzzle further.
How does the market for coffee take raw coffee beans at $3.24 per pound and turn them into coffee retailing at $2.80 for a cup? This chart produced by the Specialty Coffee Association (in November 2019) tells the bare bones of the story. I leave the commentary to you.
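One piece of quick arithmetic helps frame the chart: even without knowing the full cost breakdown, the bean cost per cup can be estimated from the two prices given. The sketch below assumes a hypothetical brew ratio of roughly 15 grams of beans per cup (a common specialty-coffee figure, not taken from the SCA chart):

```python
# Rough markup arithmetic for the coffee chart.
# Assumption: ~15 g of beans per brewed cup (hypothetical brew ratio,
# not from the SCA chart itself).
BEANS_PRICE_PER_LB = 3.24   # raw coffee beans, $/lb (from the chart)
RETAIL_PER_CUP = 2.80       # retail price of one cup, $ (from the chart)
GRAMS_PER_LB = 453.6
GRAMS_PER_CUP = 15.0        # assumed brew ratio

cups_per_lb = GRAMS_PER_LB / GRAMS_PER_CUP            # ~30 cups per pound
bean_cost_per_cup = BEANS_PRICE_PER_LB / cups_per_lb  # ~$0.11 per cup
bean_share = bean_cost_per_cup / RETAIL_PER_CUP       # ~4% of retail price

print(f"{cups_per_lb:.0f} cups/lb, ${bean_cost_per_cup:.2f} of beans per cup, "
      f"{bean_share:.1%} of the retail price")
```

Under that assumption, the raw beans account for only a few percent of the retail price of a cup; the rest is roasting, distribution, labor, rent, and retail margin.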
Discussions of early childhood interventions often work backward from kindergarten–with a focus on providing preschool programs. Whatever the merits of such programs (and I've described some weaknesses of the evidence supporting them here and here), they arrive too late for many at-risk children. For example, some evidence suggests that the cognitive gap for black children as a group opens up somewhere between ages 1 and 2–that is, well before a pre-K program starts.
The Spring 2019 issue of Future of Children is devoted to a seven-paper symposium, plus an introduction, on the theme of "Universal Approaches to Promoting Healthy Development." The overall theme is to explore programs that start with home visits for parents of new babies. The idea is that such home visits can help link new parents to other community and health care resources, in a way that helps improve child development (and reduces child mistreatment).
It's easy to hypothesize about reasons why universal visits for new parents might work really well, or on the other side might be unwanted or too costly or impractical. Thus, the issue focuses on actual programs of this type and evidence on how they have worked. Here are a few examples, as summarized in the introductory essay by Deborah Daro, Kenneth A. Dodge, and Ron Haskins.
One example involves a program called Family Connects, which is described in this issue in a paper called “Universal Reach at Birth: Family Connects,” by Kenneth A. Dodge and W. Benjamin Goodman. As summarized by Daro, Dodge, and Haskins:
In this article, Dodge and Goodman report the results of three studies using the Family Connects model that illustrate its feasibility and show the strengths it could bring to broader implementation. The first trial encompassed nearly 5,000 children born in two hospitals in Durham, NC, between July 1, 2009, and December 31, 2010. Half the babies and their families were randomly assigned to an experimental group, the other half to a control group. …
The Family Connects program consists of three pillars: home visiting, community services, and data and monitoring. During the home visiting portion, a discussion took place between a parent, usually the mother, and a program nurse. The interview was conducted in the family home during the first few weeks of the child’s life and lasted from 90 to 120 minutes. The visiting nurse assessed family risk in 12 domains, and then the mother and nurse developed a plan to promote the child’s development and wellbeing. Where necessary, and when agreed to by the mother, the nurse arranged visits to community agencies. Birth records were used to record family needs and services received.
The results of the intervention are encouraging. First, while 94 percent of the families had at least one need that merited intervention, most were minor or moderate. Only 1 percent of the families required immediate intervention because of serious need; about half had mild to moderate needs that could be resolved by home visits, brief counseling, or other nonemergency services; and 44 percent had serious needs that required connection with community resources, such as treatment for substance abuse or depression, or intensive home visiting programs and other social services. Because Family Connects reaches the full population of birthing families in a community, it can reinforce targeted home-visiting programs by becoming a primary source of referral to them. In Durham, for example, Family Connects is the single most frequent source of referrals to Early Head Start and to Healthy Families Durham … One important finding was that a month after the nurse's involvement ended, 79 percent of families said they'd followed through to make a community connection. Even more impressive, 99 percent of the families involved with Family Connects said they would recommend the program to other new mothers.
A longer-term follow-up was conducted when the children were six months old. In this study, when compared with control group mothers, those in the experimental group reported 16 percent more community connections; reported more positive parenting behaviors and higher-quality father-infant relationships; were nearly 30 percent less likely to show signs of clinical anxiety; and reported 35 percent fewer serious injuries or illnesses among their infants that required hospitalization. Throughout their first year of life, infants of experimental families had many fewer emergency medical episodes than did control babies. In addition to these positive findings, the Dodge team examined records of Child Protective Services over the children’s first five years. Their review showed that children in the program group received 39 percent fewer protective services investigations than children in the control group.
In another article, Christina Altmayer and Barbara Andrade DuBransky discuss the "Welcome Baby" program in an essay called "Strengthening Home Visiting: Partnership and Innovation in Los Angeles County." Again, as summarized by Daro, Dodge, and Haskins:
The authors discuss how a universal offer of assistance establishes a foundation on which public and private agencies can plan meaningful systemic reform—and spark incentives for greater investments in services directed to vulnerable families. The vision builds on Welcome Baby, the county’s universal home visiting program funded by First 5 LA, which provides as many as nine contacts to pregnant women and new parents until a child’s ninth month. Three contacts occur before birth, one at bedside in the birthing hospital, and five afterward in the home. Piloted in one hospital in 2009, the program is now available to new parents delivering in 14 hospitals throughout the county. These facilities deliver more than a third of all births in the county, and almost 60 percent of births occurring in the county’s highest-risk communities. As of June 2018, the program had reached more than 59,000 families. …
One evaluation of the pilot program compared Welcome Baby participants to new parents in the same communities who didn’t access the program; it found favorable impacts on parental capacity, child development, and service utilization up to three years following program enrollment. A randomized trial of the program is currently being conducted to provide a more rigorous account of its effects.
Other examples include the First Born program, "a targeted universal home visiting program that serves all first-time parents in several New Mexico communities," and discussion of the options opened up by the Family First Prevention Services Act, approved by Congress as part of the Bipartisan Budget Act of 2018.
The basic instinct behind these programs is similar to that behind pre-K programs like Head Start: that is, addressing child development issues earlier could potentially be more effective and less expensive than trying to address them later. However, these programs push that insight back to the period before and after birth, rather than waiting until pre-K arrives at ages 3 or 4. Indeed, some scholars in the field have even suggested that there could be a positive social payoff from redistributing some spending from standard pre-K programs to programs that reach parents and their even younger children.
The Gallup Poll regularly asks about what people see as the nation's most important problem. The share of people mentioning economic issues has been plummeting, from as high as 86% back in the Great Recession of 2009, down to 11-12% at present–the lowest level of concern over economic issues in the 21st century. This decline started during President Obama's second term, but has continued during the first three years of President Trump's term.