Is the Safety Margin of US Treasury Bonds Diminishing?

Traditionally, the US Treasury has been able to borrow at low interest rates, because US Treasury debt is viewed as ultra-safe and ultra-liquid. But the gap between the interest rate on US Treasury debt and on AAA-rated corporate bonds has been diminishing. Indeed, Microsoft recently issued some long-term bonds that paid lower interest rates than Treasury debt–an almost unheard-of event.

Julian Kozlowski and Nicholas Sullivan of the Federal Reserve Bank of St. Louis provide some evidence on the overall pattern in “Are U.S. Treasuries Still ‘Convenient’?” (“On the Economy Blog,” October 14, 2025). Here’s a figure showing the interest rate gap between the highest-AAA-rated corporate bonds and U.S. Treasury bonds. As the authors write: “The AAA-UST spread has been declining, reaching 30 basis points on Oct. 8, 2025. Notably, the average spread from 2012 to 2019 was 67 basis points, whereas in 2025, the average contracted by nearly half, to 36 basis points. This narrowing of the spread is significant, as AAA-rated corporate bonds have historically offered less liquidity and safety compared with Treasury bonds.”

As this interest rate gap has shrunk, a few AAA-rated corporate bonds are even beginning to pay lower interest rates than US Treasury debt–even though companies are (pretty much by definition) more risky than the US government and the market for corporate bonds is less liquid than the market for Treasury debt. For example, James Mackintosh recently wrote in the Wall Street Journal (September 28, 2025) about “Why Microsoft Has Lower Borrowing Costs Than the U.S.,” subtitled “There are several theories for why anyone would pay more for a bond from Microsoft or Johnson & Johnson than for a Treasury.”

So what’s going on here? Part of the answer, unsurprisingly, seems to be a matter of supply and demand. Back in 2007, before the Great Recession of 2008-09, total federal debt held by the public was about $5 trillion. Now, after the extraordinarily large budget deficits of the Great Recession and the pandemic recession, total federal debt held by the public has more than quintupled to about $29 trillion. The supply of Treasury debt is very high, so the government doesn’t get quite as low an interest rate when it sells additional debt.

Conversely, many investors have been making a push to buy more corporate debt. In many cases, they purchase this corporate debt through an index fund that combines AAA bonds from many companies. Given that there is much less corporate debt outstanding compared to Treasury debt, the demand for this highest quality corporate debt is up–thus bidding down the interest rate that such borrowers need to pay. In addition, Mackintosh points out that holding certain kinds of corporate debt can be useful in the market for interest rate swaps–which offer a way of hedging against higher interest rates.

But with all the reasonable explanations for why the gap between US Treasury debt and AAA-rated corporate debt has diminished–or even turned negative–I have a small nagging fear in the back of my head: Is this a warning signal that investors around the world are mulling over the possibility that US Treasury debt–in a situation of booming US borrowing–is not as safe or attractive an asset as they had previously assumed?

A Nobel for Innovation-Driven Economic Growth: Aghion, Howitt, and Mokyr

The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2025 was awarded this morning “for having explained innovation-driven economic growth.” One half of the award went to Joel Mokyr “for having identified the prerequisites for sustained growth through technological progress,” and the other half jointly to Philippe Aghion and Peter Howitt “for the theory of sustained growth through creative destruction.” The winners are fully deserving. But in this case, the Nobel citation overstates matters, in the sense that “innovation-driven economic growth” is not yet fully explained. As usual, the Nobel committee provides useful background information, including a highly readable “Popular Information” overview of the work, and then a longer and more technical “Scientific Background” essay.

Here, I’ll start with a few key points on the economics of technology, drawing in places on the Nobel committee background, and also offer a little more detail about the work itself.

Innovation and technological progress are of central importance.

From the “Popular Information,” here are two figures. The first one shows GDP per person, in inflation-adjusted dollars, for the 400 years from 1300 to 1700. There are some substantial innovations during this time: remember, the heart of the Renaissance period in Europe runs through the 1500s and 1600s. However, income per person doesn’t rise much–it less than doubles over 400 years.

The accompanying figure shows GDP per capita from 1800 to the present. The innovations become more numerous. But especially striking is that, over this period of 200+ years, per capita income rises by a factor of about 8-10.
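As a rough check on what a factor of that size implies, here is a small back-of-the-envelope calculation of my own (assuming, purely for illustration, a 200-year span), converting the cumulative factor into an average annual growth rate:

```python
# Back-of-the-envelope: what average annual growth rate produces an 8- to 10-fold
# rise in per capita income over roughly 200 years? (Illustrative arithmetic only.)
for factor in (8, 10):
    years = 200  # assumed span, roughly 1800 to the present
    annual_growth = factor ** (1 / years) - 1
    print(f"factor of {factor} over {years} years ~ {annual_growth:.2%} per year")
# Roughly 1.0% to 1.2% per year: small annual gains, compounded, do all the work.
```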

Whether you love this process of innovation and change, hate it, or have mixed feelings, it’s clearly an extraordinary event in history.

Technology needs to be broadly understood.

For economists, innovation and change need to be broadly understood. It’s not just about world-historical innovations like the steam engine, the generation of electric power, or information technology and now perhaps artificial intelligence. Innovation and technology also include every small-scale change in how a worker gets the job done, every improvement to a piece of equipment, every new design, every new material.

Technology and innovation are a never-ending treadmill of unevenly distributed change–which means winners and losers.

The Nobel citation to Aghion and Howitt uses the phrase “creative destruction,” coined by Joseph Schumpeter in his 1942 book, Capitalism, Socialism, and Democracy. Schumpeter wrote:

Capitalism, then, is by nature a form or method of economic change and not only never is but never can be stationary. … The fundamental impulse that sets and keeps the capitalist engine in motion comes from the new consumers’ goods, the new methods of production or transportation, the new markets, the new forms of industrial organization that capitalist enterprise creates. As we have seen, the contents of the laborer’s budget, say from 1760 to 1940, did not simply grow on unchanging lines but they underwent a process of qualitative change. Similarly, the history of the productive apparatus of a typical farm, from the beginning of the rationalization of crop rotation, plowing and fattening to the mechanized thing of today–linking up with elevators and railroads–is a history of revolutions. So is the history of the productive apparatus of the iron and steel industry from the charcoal furnace to our type of furnace, or the history of the apparatus of power production from the overshot water wheel to the modern power plant, or the history of transportation from the mailcoach to the airplane. The opening up of new markets, foreign or domestic, and the organizational development from the craft shop and factory to such concerns as U.S. Steel illustrate the same process of industrial mutation–if I may use that biological term–that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one. This process of Creative Destruction is the essential fact about capitalism.

Everyone wants the benefits of innovation and new technology, no one wants the costs. As Paul Romer (Nobel 2018) once memorably wrote, “Everybody wants progress; nobody wants change.”

The formula for innovation and economic growth is not yet well-understood.

The Nobel laureates have surely helped to point the way to a deeper understanding of the path from innovation and technology to economic growth. For example, Mokyr has discussed and described in comprehensive fashion the historical process of growth, the central role of knowledge, and the importance of societies that are open to change. The “Popular Information” notes:

Through his research in economic history, Joel Mokyr has demonstrated that a continual flow of useful knowledge is necessary. This useful knowledge has two parts: the first is what Mokyr refers to as propositional knowledge, a systematic description of regularities in the natural world that demonstrate why something works; the second is prescriptive knowledge, such as practical instructions, drawings or recipes that describe what is necessary for something to work. … Another factor that Mokyr claims is necessary for sustained growth is that society is open to change. Growth based upon technological change not only creates winners, it also creates losers. New inventions replace old technologies and can destroy existing structures and ways of working. He also showed that this is why new technology is often met with resistance from established interest groups who feel their privileges are threatened.

For their part, Aghion and Howitt emphasized the interaction of research and development with creative destruction (in particular, the Nobel committee emphasizes their 1992 paper: P. Aghion and P. Howitt, “A model of growth through creative destruction,” Econometrica 60(2), 323–351). They emphasize that for a firm, the payoff of research and development can come both from opening up new markets and from taking business away from existing firms whose products become obsolete. They also point out that society might underinvest in research and development, say, if the private benefits to researchers are less than the social benefits, including the benefits of additional future innovations built on this progress. However, in certain situations society could also conceivably overinvest in R&D: say, if many firms are spending in the hope of achieving a certain innovation, but only one firm wins, then the spending by the other firms will not have achieved its goal. At the end of the dot-com boom of the 1990s, there was a dot-com bust, and the same could plausibly happen with the current AI boom.

But these fundamental insights leave a wide array of practical questions not fully addressed. For example, what is the role of various fundamental factors (in no particular order: a sufficient savings rate to support investment; a well-educated and high-skilled population; protecting public health; reasonable taxes; well-functioning transportation and communications infrastructure; support for the unemployed and the poor; rules to limit pollution; avoidance of inflation; support of property rights; and others)? Does government have a role to play in paying for research directly, or should it focus on subsidizing research at universities and private firms? Does government have a role to play in subsidizing firms directly, or protecting domestic firms from international competition?

Even in the modern US economy, there is sometimes discussion of the “valley of death” that lies between scientific discoveries and translating those discoveries into new products and methods of production. For economies around the world not at or near the technological frontier, at least in certain industries, how do they move entire societies toward a functioning and applicable awareness of Mokyr’s propositional and prescriptive knowledge? What set of policies, subject to political constraints, should high-income countries undertake if they wish to increase their annual rate of growth by even 0.5% per year on average? History and economic theory offer hints about such questions of innovation and economic growth. But looking around the world, it is self-evidently true that such hints are often and in many places insufficient.

Some Economics of US Biofuels

Twenty years ago, with the Energy Policy Act of 2005, the US decided to encourage “biofuels” in a big way with the Renewable Fuel Standard (RFS) program, which started with RFS1 and later was amended to RFS2. Gabriel E. Lade and Aaron Smith provide an overview of what has happened since then in “Biofuels: Past, Present, and Future” (Annual Review of Resource Economics, 2025, 17: 105-125).

Probably the main goal of encouraging biofuels was to reduce the extent of US energy dependence on imported oil. However, another important goal was the idea that if fuel could come from crops, and the carbon dioxide emitted from burning the fuel could be absorbed into next year’s crops, biofuels could be part of reducing overall carbon emissions. The general plan back in 2005 was that biofuels might begin with using corn to make ethanol, which could be added to gasoline. However, the hope was to move away from producing biofuels from corn–which has a number of other uses–and toward producing ethanol from nonedible parts of plants that were currently being thrown away, or from nonedible plants that could grow with relatively little cultivation, like switchgrass. However, the technology for converting plants other than corn to biofuels, at a competitive price, has been slow to develop. Lade and Smith write:

Before 2005, biofuels comprised little of the country’s energy consumption; ethanol made up less than 3% of gasoline, and biomass-based diesel use was negligible. In 2024, each biofuel comprises more than 10% of the corresponding fuel pool. … Nearly every gallon of gasoline sold in the United States contains 10% ethanol. However, all ethanol is made from corn; it is not made from the nonedible portion of plants grown on marginal lands …

Thus, there are several types of biofuels currently in play: the conventional corn-based ethanol that is added to gasoline; cellulosic biofuel, which is now mostly natural gas derived from offbeat sources (as discussed in a moment); and biomass-based diesel fuel that is based on vegetable oils like soybean and canola oil, along with animal fats and recycled cooking oil.

The shift to “cellulosic” biofuel from non-corn plant sources hasn’t gone well, but the push for biofuels led to some other sources of these fuels. Lade and Smith write:

Cellulosic ethanol production never materialized because no company could produce it profitably at a large scale. A handful of commercial-scale liquid cellulosic ethanol plants opened in the United States between 2014 and 2020 (US EIA 2014). … Production peaked from 0.5 mgal/year in 2014 to between 8 and 10 mgal/year from 2017 to 2019. Production declined after 2019 as the few major operational commercial-scale cellulosic ethanol plants closed … By 2022, the RFS2 envisioned 16 bgal of cellulosic biofuel. That year, liquid cellulosic ethanol production was 1.4 mgal, less than 0.1% of the envisioned volumes.

Instead of liquid cellulosic biofuels, the cellulosic component of the RFS2 mandate has been either waived or partially met by liquid natural gas (LNG) or compressed natural gas (CNG) production. … These gases are produced from captured gas from landfills, municipal wastewater treatment facilities, and agricultural livestock manure (CRS 2023), sources not envisioned as compliance fuels when the RFS2 was originally passed.

Lade and Smith review the evidence on biofuel controversies that have continued over time. For example, increased demand for corn and soybeans as a result of the biofuel mandate has driven up their prices over time, which is good for farmers of those products but not for consumers. The biofuel mandate causes farmers to plant more land in corn and soybeans than they would otherwise do, and plowing additional land releases carbon dioxide that had been stored in plant and root systems–thus offsetting part of the environmental gain from biofuels.

The current status of biofuel subsidies and requirements is unclear. Those who favor them would like additional mandates to mix more ethanol into gasoline and to require more use of biodiesel–and even to develop a biofuel that would work for aircraft. Opponents point to the costs related to higher prices and land use. The ultimate goal would be to find a low-carbon source for biofuels: for example, there is ongoing research into types of algae.

Lade and Smith are writing an academic review, not a diatribe, but at least in my reading, the bottom line for biofuels is not at present very encouraging. They write:

The Energy Policy Act and subsequent RFS2 mandates … transformed the US agricultural sector and gasoline markets; around one-third of all corn produced in the United States is used for fuel production, and nearly every gallon of gasoline sold in the United States contains at least 10% corn ethanol. … Although biomass-based diesel markets are much smaller than ethanol markets by comparison, their growth, paired with the recent surge in renewable diesel, means these fuels now demand more than 40% of the soybean oil produced in the United States.

The past and future benefits of these policies beyond increasing crop demand are less clear. Expanding cropland generates substantial carbon emissions that can offset the gains from burning less fossil fuel. Low-carbon biofuels are limited by high cost and the low supply of non-crop feedstocks such as animal fats and used cooking oil. Scaling production technologies for liquid cellulosic ethanol proved much more challenging and costly than thought in 2007. … In summary, while the fuels and focus are different than in 2007, the industry in 2025 has largely returned to past controversies about land use change, food prices, and the high cost of low-carbon feedstocks.

My own take is that the Renewable Fuel Standard mandate was an effort to accelerate technological change–specifically, to find a way for low-carbon plants (and some other sources) to replace fossil fuels. Two decades later, that hope of a great leap forward in this technology is largely unfulfilled. I’m the sort of optimist who would still support ongoing research in this area. But the main support for biofuels now comes from producers of corn and soybeans, who have become dependent on biofuels as the destination for a sizeable share of their output.

When George Dantzig Optimized His Personal Diet Problem

The “diet problem” has a particular meaning in the academic literature. We know the nutritional ingredients and calorie counts of various foods. We know the price of these foods to the consumer. We know what combination of vitamins, minerals, and calories is needed to maintain health. With this information, what is the least-expensive diet sufficient for human health?

A number of readers will recognize this as a classic problem in linear programming, which allows one to optimize (meaning to find a maximum or minimum value) in the presence of constraints. But I should emphasize that the “diet problem” has very real practical consequences. The official US poverty line, created back in the 1960s, was based on the work of Mollie Orshansky, who had worked for some years collecting data about family budgets and the costs of preparing low- and moderate-cost meals. Orshansky suggested that since the average family in the early 1960s spent one-third of its income on food, a plausible poverty line was the cost of a low-cost but nutritionally adequate diet multiplied by three. In this way, the US poverty line was designed to vary by the number of people in a household. In looking at malnourishment and standards of living around the world, the question of what would be included in the minimum-cost but adequate human diet is of great practical relevance.
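As a minimal sketch of the Orshansky logic (with invented food-cost figures, not her actual budget data), the poverty threshold is just the minimal food budget scaled up by the assumed share of income spent on food:

```python
# Orshansky-style poverty threshold: three times the cost of a low-cost,
# nutritionally adequate diet, which varies with household size through the food budget.
# All dollar figures below are hypothetical, for illustration only.
def poverty_line(annual_food_cost: float, food_share: float = 1/3) -> float:
    """Poverty threshold implied by a minimal food budget and an assumed food share of income."""
    return annual_food_cost / food_share

for household_size, food_cost in [(1, 1200.0), (2, 2100.0), (4, 3600.0)]:
    print(f"household of {household_size}: poverty line of about ${poverty_line(food_cost):,.0f}")
```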

George Dantzig (1914-2005) is sometimes called the “father of linear programming” for his development in 1947 of “the simplex algorithm” to solve these problems. In a 1990 essay, “The Diet Problem,” Dantzig tells the story of coming to grips with the diet problem intellectually– and also what happened when he tried to apply it to his own diet (Interfaces: The Practice of Mathematical Programming, July-August 1990, 20: 4, pp. 43-47).

As Dantzig tells the story, he had been working for the Air Force during World War II. The question of how to optimize subject to constraints had potentially very wide applicability to thinking about planning, operations, and design tradeoffs. Dantzig had a theoretical answer by about 1946 or 1947, but he was looking for a real-world example on which he could test it: “Marvin Hoffenberg suggested we test it on Jerry Cornfield’s diet problem. Jerry said that he had worked on the problem several years earlier for the Army who wanted a low cost diet that would meet the nutritional needs of a GI soldier.” Cornfield had the data, but couldn’t figure out how to solve for the low-cost diet.

Dantzig began asking around for who else had worked on the problem, and learned that George Stigler had published “The Cost of Subsistence” in 1945 (Journal of Farm Economics, May 1945, 27:2, pp. 303-314). Stigler started with a list of nine necessary ingredients for a healthy diet from a 1943 National Research Council report.

Stigler also had data on the retail prices of 77 foods from the US Bureau of Labor Statistics. Thus, he set out to calculate the minimum cost of buying a nutritionally adequate diet based on these 77 foods.
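To make the structure of the problem concrete, here is a toy version of the diet linear program in Python, solved with scipy's linprog routine rather than the hand-computed simplex method Dantzig describes later, and using a handful of invented prices and nutrient values rather than Stigler's actual 77 foods and nine requirements:

```python
# A toy diet problem: minimize food cost subject to minimum nutrient requirements.
# Prices and nutrient values are rough, invented-for-illustration figures;
# Stigler's actual data covered 77 foods and 9 nutritional constraints.
import numpy as np
from scipy.optimize import linprog

foods = ["wheat flour", "cabbage", "beef liver", "navy beans"]
cost = np.array([0.05, 0.04, 0.25, 0.10])           # dollars per 100 g of each food

# Rows: calories, protein (g), vitamin A (IU), all per 100 g of each food.
nutrients = np.array([
    [360, 25, 135, 340],        # calories
    [10, 1.3, 20, 22],          # protein
    [0, 100, 16000, 0],         # vitamin A
])
requirements = np.array([2500, 70, 5000])            # daily minimums for each nutrient

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so flip the signs to express
# "nutrients delivered must be at least the requirements."
result = linprog(c=cost, A_ub=-nutrients, b_ub=-requirements,
                 bounds=[(0, None)] * len(foods), method="highs")

for food, amount in zip(foods, result.x):
    print(f"{food}: {100 * amount:.0f} g per day")
print(f"minimum daily cost: ${result.fun:.2f}")
```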

Stigler was quite open about the shortcomings of his approach. On the issue of nutritional needs, he wrote: “In addition to calories, the body requires about thirteen minerals (some in very minute quantities), and perhaps half as many vitamins. Protein contains two dozen amino acids, of which almost half are necessary to human beings. The precise determination of our needs for these–and no doubt other yet undiscovered–nutrients lies far in the future. Nevertheless standards of dietary adequacy have been established, perhaps prematurely and certainly very tentatively.” Stigler’s list of 77 foods was limited. As he points out, “The BLS list is a short one, and it excludes almost all fresh fruits, nuts, many cheap vegetables rich in nutrients, and fresh fish. It is beyond question that with a fuller list the minimum cost of meeting the National Research Council’s allowances could be reduced, possibly by a substantial amount.” Also, Stigler points out that “[m]any nutritive values have not been established quantitatively” and that “[m]ost foods are not even approximately homogeneous”; for example, 100 grams of an “Ontario” apple variety has four times as much ascorbic acid as 100 grams of a “Jonathan” apple.

But research on many questions often proceeds in a spirit that thinking through how to address a problem with the information at hand is a useful exercise, even with the knowledge that one is likely to want to revisit the problem later with better information. Thus, Stigler took a shot at a plausible answer: “Thereafter the procedure is experimental because there does not appear to be any direct method of finding the minimum of a linear function subject to linear conditions. By making linear combinations of various commodities it is possible to construct a composite commodity which is superior in all respects to some survivor, and by this process the list of eligible commodities can be reduced at least to nine …”

Here is Stigler’s minimum cost annual diet for August 1939 and August 1944. The cost of evaporated milk and dried navy beans rises during World War II, so other items are substituted for them in the 1944 diet.

Stigler points out that estimates of the lowest-cost adequate diets from professional nutritionists at this time were much higher than his estimate:

These low-cost diets of the professional dieticians thus cost about two or three times as much as a minimum cost diet. Why do these conventional diets cost so much? The answer is evident from their composition. The dieticians take account of the palatability of foods, variety of diet, prestige of various foods, and other cultural facets of consumption.

As Stigler points out, the pure minimum-cost diet is an abstraction. He is careful to note that “no one recommends these diets for anyone, let alone everyone.” He also points out that for just a slightly higher cost, it is possible to add a much greater amount of variety to the diet. However, economist Stigler also argues that, as a matter of analysis, it is useful to separate the minimum-cost diet based on physical needs from the minimum-cost diet that also includes palatability and “cultural facets.”

In George Dantzig’s 1990 essay, he remembers the Stigler essay as “quite remarkable.” The data that Stigler had used thus became a first test for Dantzig’s proposed simplex method. Dantzig writes:

In the fall of 1947, Jack Laderman of the Mathematical Tables Project of the National Bureau of Standards undertook as a test of the newly proposed simplex method the determination of a least cost adequate diet based on Stigler’s data. It was the first “large scale” computation in the field. The system consisted of 9 equations in 77 unknowns. Jack parcelled out a different 8 or 9 columns (of the 77 columns) to each of the nine clerks who were assigned to process them. Using hand-operated desk calculators (this was in the days before computers), the nine clerks took approximately 120 man days to obtain an optimal solution of $39.69. Stigler’s heuristic solution was only off from the true annual optimal cost by 24 cents: not bad!

The clerks recorded their work for each iteration on a separate work sheet. Later, after the job was complete, Jack joined the separate sheets together to form one large sheet which we dubbed the Table Cloth. I have a letter in my files from Oskar Morgenstern (who with von Neumann wrote the famous treatise on game theory) saying he would like to come down from Princeton to Washington to view it. Eventually the table cloth disappeared, never to be seen again.

But the story doesn’t end there. In the early 1950s, Dantzig and his wife Anne moved to the RAND Corporation in California. Dantzig’s doctor told him that he needed to lose some weight. Thus, Dantzig decided to set up a linear programming problem, not for a least-cost nutritionally adequate diet, but for a diet of under 1,500 calories a day that would still make you feel full. He tells the story this way:

My doctor advised me to go on a diet to lose weight. I decided I would model my diet problem as a linear program and let the computer decide my diet. Some revisions of the earlier model, of course, would be necessary in order to give me a greater variety of foods to choose from; the calorie intake had to be reduced to under 1,500 calories per day; and the objective function had to be changed (I wasn’t interested in saving money). I said to myself: “The trouble with a diet is that one’s always hungry. What I need to do is maximize the feeling of feeling full.” After giving much thought to what the coefficients in the objective form should be, I used the weight (per unit amount) of a food minus the weight of its water content. Input data for over 500 different foods was punched into cards and fed into Rand’s IBM 701 computer. My colleague Ray Fulkerson (famous for his contributions to network flow and matroid theory) was skeptical. “You crazy or something? We solve models to obtain optimal schedules of activities for others to follow, not for ourselves.” Nevertheless I was determined to do just that.

As I’m sure Dantzig expected, the linear programming as he had posed it to the computer did not include all the issues that mattered, and so the computer pumped out some peculiar dietary recommendations. Again, Dantzig tells the story:

One day I said to Anne, my wife, ‘Today is Der Tag, whatever the 701 says that’s what I want you to feed me each day starting with supper tonight.” Around 5:00 PM, Anne called, “Nu, it’s five and you haven’t called. What should I be cooking?” I replied that she didn’t really want to know. I then read off the amounts of foods in the optimal diet. Her reaction: “The diet is a bit weird but conceivable. Is that it?” “Not exactly,” I replied, “AND 500 gallons of vinegar.” She thought it funny and laughed.

I figured there had to be a mistake somewhere. It turned out that our data source listed vinegar as a very weak acid with water content = zero. Therefore, according to the way the model was formulated the more vinegar you drank the greater would be your feeling of feeling full. I decided that vinegar wasn’t a food.

The next day the above scene was repeated except this time I called Anne in time to prepare supper. Again the diet seemed to be plausible except for calling for the consumption of 200 bouillon cubes per day. Anne made one of the great puns of all time: “What are you trying to do, corner the bullion market?” The next day started with a test to see how many Bovril bouillon cubes I could consume for breakfast. I decided to begin by mixing four in a cup of hot water. I had to spit it out: it was pure brine! I called my doctor and asked him how come the nutritional requirements didn’t show a limit on the amount of salt? “Isn’t too much salt dangerous?” He replied that it wasn’t necessary; most people had enough sense not to consume too much. I placed an upper bound of three on the number of bouillon cubes consumed per day. That was how upper bounds on variables in linear programming first began.

The next day the above scene was repeated, except this time the diet called, among other things, for two pounds of bran per day. Anne said, “If you consume that much bran, I doubt you’ll make it to the hospital. I’ll tell you what I will do (she was beginning to take charge): I’ll buy some finely milled bran and limit you to no more than a couple of cupfuls per day.” The model was revised with an upper bound put on the amount of bran. The next day the proposed menu was almost exactly the same except this time it was two pounds of blackstrap molasses which substituted for the bran; apparently their nutritional contents were quite similar.

At this point Anne got tired of the whole game. Speaking firmly so that I would know who was boss, she said, “I have been studying the various menus the computer has been generating. There are some good ideas there that I can use. I’ll put you on MY diet.” She did and I lost 22 pounds.
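Dantzig's “feel full” model can be sketched in the same framework as the Stigler example above. The block below is my own reconstruction with invented numbers, not his actual 500-food data set: the objective is total weight net of water content, calories are capped at 1,500 per day, and explicit upper bounds on individual foods play the role he describes for the vinegar and bouillon fixes.

```python
# A miniature reconstruction of Dantzig's "feel full" diet (all data hypothetical).
import numpy as np
from scipy.optimize import linprog

foods = ["bran", "bouillon cube", "vinegar", "lettuce", "chicken"]
dry_weight = np.array([85.0, 4.0, 0.5, 4.0, 70.0])    # grams of non-water weight per unit
calories = np.array([240.0, 5.0, 3.0, 5.0, 230.0])    # calories per unit

# Upper bounds on individual foods play the role Dantzig describes: without them
# the solver happily recommends 500 gallons of vinegar or 200 bouillon cubes a day.
bounds = [(0, 2.0),    # bran: at most a couple of cupfuls
          (0, 3.0),    # bouillon cubes: Dantzig's limit of three per day
          (0, 1.0),    # vinegar: effectively excluded
          (0, None),
          (0, None)]

# Maximize total dry weight (the "fullness" proxy) = minimize its negative,
# subject to a 1,500-calorie daily cap.
result = linprog(c=-dry_weight,
                 A_ub=calories.reshape(1, -1), b_ub=[1500.0],
                 bounds=bounds, method="highs")

for food, amount in zip(foods, result.x):
    print(f"{food}: {amount:.2f} units per day")
print(f"total 'fullness' (grams of dry weight): {-result.fun:.0f}")
```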

The deeper lesson, of course, is that computers (and now, artificial intelligence) will solve the problem as you have stated it to them. But if the problem is not stated fully–including for potential issues that just seemed so obvious that there was no need to state them–you may get ridiculous results. Even if the incorrect results are not as vivid as in Dantzig’s example, they can be deeply flawed in non-obvious ways. Thus, you need to keep judging and evaluating the feedback, perhaps by “adding upper bounds on variables.” But with such concerns duly noted, the results of such calculations can still be quite helpful in expanding one’s sense of the possible and suggesting alternatives that otherwise would not have even been considered.

Financial Inclusion

Imagine for a moment what it would be like not to have a bank account, or a credit card–much less, say, the ability to make online payments, get interest on savings, take out a loan, have a credit score, have direct deposit of a paycheck, or have a retirement account or a mortgage. Being part of the financial system is a basic step toward being connected to the broader economy. A team of Federal Reserve economists–Matteo Crosignani, Jonathan Kivell, Daniel Mangrum, Donald Morgan, Ambika Nair, Joelle Scally, and Wilbert van der Klaauw–provide an overview of the extent to which this situation arises in the US economy in “Financial Inclusion in the United States: Measurement, Determinants, and Recent Developments” (Economic Policy Review: Federal Reserve Bank of New York, September 2025, 31:3). They write:

The FDIC defines an individual as “unbanked” if no one in the household has a checking or savings account with a bank or credit union. … Among those with a bank account, the FDIC further defines as “underbanked” those individuals who are banked but underserved by existing saving, credit, and financial products. The latter is measured by use in the past twelve months of at least one alternative high-cost nonbank transaction or a credit product or service disproportionally used by the unbanked to meet their transaction and credit needs, such as money orders, check cashing, international remittances, rent-to-own services, pawnshop or payday loans, tax refund anticipation loans, and auto title loans. Underbanked individuals usually pay high fees for accessing their money and for transactions while having few opportunities to build savings and assets.

According to the latest 2021 FDIC National Survey of Unbanked and Underbanked Households, there are about 5.9 million households (15.6 million adults) that are unbanked, while 18.7 million households (51.1 million adults) are underbanked. … [T]hose who are unbanked accounted for 4.5 percent of U.S. households in 2021, with a further 14.1 percent of households being underbanked. Similar rates for the unbanked are found in three other surveys: 5.5 percent of households in the Board of Governors’ 2019 SCF, 4.7 percent of respondents in FINRA’s 2021 NFCS, and 6.5 percent of adults in the 2020 Atlanta Fed’s SCPC.

In survey data, what do the “unbanked” or “underbanked” say about the reasons why? The blue lines show the reasons given (multiple choices are possible); the yellow lines show the main reason (a respondent can choose only one).

Whatever the reasons given, being unbanked or underbanked imposes substantial costs, typically on households already under financial stress from factors like low income or the recent loss of a job or health insurance. What might be done about it?

There have been several significant efforts to increase access to affordable bank accounts to unbanked and underbanked populations. Those efforts include allowing opening deposits under $25, providing access to free online and mobile services, and limiting overdraft fees. Access to a basic and affordable transaction account at an insured institution is seen as a key first step to financial inclusion, providing a safe place to save, conduct basic financial transactions, build a credit history, access credit on favorable terms, and achieve financial security. For example, to benefit from instant payment services through FedNow requires access to a bank account.

An important ongoing effort is the Cities for Financial Empowerment Fund’s Bank On accounts. Since 2008, the Bank On National Account Standards (2023-24) have helped financial institutions connect consumers to safe and affordable bank accounts, centered on safety, cost, and transactional ability. … Those accounts are offered by banks and credit unions representing over 60 percent of the U.S. consumer deposit market, according to the FDIC. More than half (53 percent) of all U.S. bank branches offer a Bank On certified account. The latest Bank On National Data Hub report by the Federal Reserve Bank of St. Louis finds that, to date, more than 14.1 million Bank On–certified accounts have been opened by consumers in 85 percent of U.S. zip codes. Further, in 2021 almost 80 percent of the accounts were opened by customers who were new to the financial institution, indicating that Bank On accounts are bringing new customers into financial institutions. Working through coalitions, Bank On continues to push more banks to adopt its 2023-24 national account standards and grant programs. In addition, Bank On pilot programs are focusing on helping high schoolers connect to the financial mainstream and helping unbanked individuals open bank accounts when they start a new job.

A more recent development, both hopeful and worrisome, is the ability to access banking-like services through online financial technology companies. On one side, it would seem churlish not to welcome some additional options for the unbanked. On the other side, there is an ongoing concern that some of these options may involve high fees and raise a risk of falling deeply into debt. One can imagine a role for an industry standard or a government certification in which fintech companies, if they wished to do so, could offer accounts guaranteed to have low costs in the style of Bank On.

I’ve been reading and writing about the unbanked for a long time now. I have no great solutions. But this feels to me like a problem where a combination of relatively low-cost outreach and pointing out reliable low-cost options could have a substantial payoff in the daily lives of millions of lower-income Americans.

Save the Whales by Pricing the Ship Noise Externality

Ships play a central role in the global trade of goods, but they also make noise–which might affect undersea creatures like whales that depend on sound for communication and navigation. M. Scott Taylor investigates this issue in “Saving Killer Whales Without Sinking Trade: A market solution to noise pollution” (Property and Environment Research Center (PERC), September 25, 2025). Taylor (no relation) begins:

Maritime shipping is key to global trade, and international trade is key to prosperity worldwide. Today, approximately 80 percent of world trade by volume and over 60 percent by value is transported by ships. This trade brings new goods, technologies, and ideas from around the world and is critical to maintaining our standard of living and growing it into the future. Despite these benefits, shipping—like all economic activity—has environmental impacts.

Maritime shipping is responsible for perhaps 3 percent of global carbon emissions and a significant share of the world’s particulate and sulfur dioxide emissions. International organizations like the International Maritime Organization and regional governments like the E.U. have set new regulations and plans to reduce these impacts over the coming decades. There is, however, an impact of shipping on the environment that researchers have only recently recognized—underwater noise pollution—that may have deleterious effects on marine mammals. …

There is now widespread recognition that over the post-World War II period, maritime shipping has raised ambient noise levels in the world’s oceans. While estimates of the long-term change in ambient noise are somewhat speculative, one commonly cited source suggests ambient ocean noise has risen by three to four decibels (dB) per decade since the 1950s. This increase, tied to a simultaneous increase in maritime shipping, raised the ambient noise level in the low-frequency zone most relevant to marine mammals from approximately 52 dB in 1950 to more than 90 dB by 2007 …

Not surprisingly, marine biologists and ecologists have started to study the impact of rising ambient noise levels on marine mammals. The reason is simple: Sound to marine mammals is much like sight is to humans—it is their primary sense for moving through the world around them. Low-frequency sounds (less than several hundred Hz) may be interfering with whale communication and social calls, whereas higher-frequency sounds (greater than several hundred Hz) potentially interfere with the echolocation employed to track prey. Therefore, the sounds emitted by maritime shipping may affect almost all aspects of whale life, making it more difficult to communicate, socialize, and hunt.

Thus, two questions arise. First, is there some evidence that underwater sound affects whale populations in a negative way (and what would that evidence look like)? Second, if so, what might be done about it?

For evidence, Taylor offers an analysis of the “Southern Resident killer whales,” who spend most of their year living in what is called the Salish Sea, off the coast of Seattle in Washington state and Vancouver in British Columbia. This is “perhaps the most studied whale population in the world, with detailed health and genealogical information on births and deaths accurately measured since the late 1970s.” Taylor points out that as shipping into this area has increased, the whales living closer to the main ports have seen a decrease in population, while those living further north and away from the ports have seen an increase–suggestive, but one can imagine a variety of explanations.

Thus, Taylor zooms in on the variations in shipping across years, often caused by factors like the 2001 dot-com recession or the 2008-09 Great Recession. As it turns out: “For example, at age 40, a Southern Resident killer whale is over 30 percent more likely to die in a noisy year.” Also, “[i]n years of peak fertility, noisy years lower the probability of a subsequent successful birth by over 25 percent.” The broader question of how shipping noise affects whale populations around the world is clearly worthy of additional study, but for present purposes, Taylor advances to the second question of what might be done.

Of course, what economists call a “command-and-control” approach might just require every ship to install equipment to reduce noise. But as economists also like to point out, there might be an array of ways to reduce noise: along with quieter engines, perhaps a quieter propeller (or even just polishing the existing propellers), redesigning ducts that direct water to the propeller, perhaps travelling at different speeds (recognizing that a faster ship for less time might be preferable to a slower ship for a longer time), perhaps ships of different hull design or different sizes. Indeed, the best answer for minimizing the noise involved in shipping a certain amount of cargo may not be knowable in advance, because it would require research and experimentation over time.

Taylor proposes a way of measuring the underwater noise from ships, and suggests that marketable permits could be used. The idea of such permits is that a shipping company must have a permit for the underwater noise it emits. The original distribution of permits can be done by handing them out to existing shipowners, or by requiring that existing shipowners buy them (perhaps through an auction). Shipping companies that find ways to move cargo more quietly will have extra permits, which they can sell to other firms. At a minimum, such permits could prevent the amount of underwater noise from rising; in addition, the amount of noise allowed by a given permit can be preset to diminish over time.

The basic idea here is that, in this case, public policy should focus not on dictating a command-and-control solution, but instead on setting a goal for how much underwater noise from shipping should be reduced, and then giving shipowning firms a financial incentive to innovate and experiment to meet and exceed that goal.
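As a stylized illustration of why a tradable permit can beat a one-size-fits-all mandate, here is a toy calculation of my own, with invented abatement costs: under trading, the ships that can get quieter cheaply do most of the abating, and the rest buy permits.

```python
# Toy cap-and-trade arithmetic for ship noise (all numbers invented for illustration).
# Each ship currently emits 10 "noise units" and has a different cost per unit abated.
ships = {"A": 5.0, "B": 20.0, "C": 50.0}    # dollars per noise unit abated
baseline_per_ship = 10.0
total_cap = 18.0                            # permits issued; the baseline total is 30

required_abatement = len(ships) * baseline_per_ship - total_cap   # 12 units in total

# Uniform mandate: every ship abates the same share (4 units each).
uniform_cost = sum(cost * required_abatement / len(ships) for cost in ships.values())

# Tradable permits: the cheapest abaters cut first until the cap is met;
# everyone else covers their remaining noise with purchased permits.
remaining, trading_cost = required_abatement, 0.0
for name, cost in sorted(ships.items(), key=lambda kv: kv[1]):
    cut = min(remaining, baseline_per_ship)   # a ship can abate at most its baseline
    trading_cost += cost * cut
    remaining -= cut
    if remaining <= 0:
        break

print(f"uniform mandate cost:  ${uniform_cost:.0f}")
print(f"tradable permit cost:  ${trading_cost:.0f}")
# Same total quieting, lower total cost: the permit price rewards whoever finds the
# cheapest way (propellers, speed, hull design) to make less noise.
```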

Mortgage Lock-In

The problem of “mortgage lock-in” arises when a homeowner has a mortgage that was obtained a few years back at a lower interest rate. If that homeowner wants to move to another place (often by using the equity in their current home for the down payment), it would be necessary to obtain a new mortgage at a substantially higher interest rate. In this way, a move could lead to a substantial rise in monthly mortgage payments. Of course, when potential sellers are discouraged from selling, the supply of homes available in the market drops–and even the potential buyers who are willing to take out a mortgage at the higher interest rate will have to search harder or wait longer to find the home they desire.
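To get a feel for the size of that jump, here is a back-of-the-envelope calculation of my own, using the standard amortization formula for a 30-year fixed-rate mortgage and hypothetical numbers: a $300,000 loan, with 3% standing in for a rate locked in around 2020-21 and 7% for the going rate on a new mortgage.

```python
# Monthly payment on a fixed-rate mortgage: principal * r / (1 - (1 + r)**-n),
# where r is the monthly interest rate and n the number of monthly payments.
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

loan = 300_000                         # hypothetical loan balance
old = monthly_payment(loan, 0.03)      # mortgage locked in near the 2020-21 lows
new = monthly_payment(loan, 0.07)      # roughly the going rate for a new mortgage
print(f"at 3%: ${old:,.0f}/month   at 7%: ${new:,.0f}/month   increase: {new / old - 1:.0%}")
# Roughly $1,265 versus $1,996 per month: an increase of more than half.
```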

Kyle Mangum of the Philadelphia Federal Reserve provides some background in an overview article, “How Mortgage Lock-In Affects the Price of Housing,” which is subtitled, “There has never been such a huge gap between the rate homeowners pay and the rate for new mortgages” (Economic Insights, 2025: Q3).

This figure shows interest rates for a 30-year fixed rate mortgage going back to the year 2000, along with the policy interest rate controlled by the Federal Reserve–the “federal funds rate.” As you can see, 30-year mortgage rates were above 8% back in 2000, and then gradually dropped through the pandemic of 2020-21.

At least to me, it’s interesting to note that the first two times the Fed raised interest rates since 2000, the 30-year fixed interest rate went up only a little. But the third and most recent time the Fed raised interest rates has been accompanied by a substantial rise in interest rates for a 30-year fixed rate mortgage. One obvious possible explanation for this greater movement is that when the Fed raised interest rates earlier, inflation had barely budged over time–so lenders were willing to lend at, say, 4% interest with an expectation that inflation would be close to 2%. But after the burst of inflation in 2022, lenders looking at a 30-year time horizon are not willing to assume that inflation will remain at around 2%, and the interest rate has risen accordingly.

The result is that lots of people who took out or refinanced a mortgage from, say, 2010 to 2021 are paying an interest rate well below the going rate for a new mortgage. Mangum offers this interesting figure. From about 2008 up through 2022, relatively few of those who had a mortgage had an interest rate that was more than 1 percentage point below the rate for a new mortgage. They were not “locked in.” But at present, something like 90% of those with mortgages are paying an interest rate that is at least 1 percentage point below the market rate for a new mortgage, so if they sold their current house and took out a new mortgage as part of buying a different house, their monthly payments could be substantially higher. As Mangum points out, the constricted housing supply from mortgage-holders who feel locked in helps to explain why US housing markets have in recent years experienced both low sales volumes and high price growth.

To put this another way, mortgage-holders have in recent decades been able to take advantage of lower interest rates to refinance their fixed-rate mortgages. But with the recent rise in interest rates, most mortgage-holders are now paying an interest rate below the market interest rate for a new mortgage, and so refinancing has dried up. Here’s a figure from Freddie Mac, showing the extent of mortgage refinancing since 1980. In particular, the lower mortgage interest rates in the early 2000s led to a surge of refinancing. But over time, mortgages were being taken out at lower interest rates, and although refinancing tended to surge when mortgage interest rates fell, the total amounts dropped lower than in the early 2000s. Since 2022, when interest rates went up, refinancing of mortgages has fallen to extremely low levels.

Is there an answer to the locked-in problem? As Mangum points out, the options within the current framework of US mortgages are limited. If the Federal Reserve reduced its federal funds target rate substantially, it would bring down mortgage interest rates as well, but a return to the days of a federal funds interest rate at nearly zero percent seems unlikely. Also, my guess is that the fear of future inflation will tend to keep 30-year fixed mortgage rates high. If Americans liked adjustable-rate mortgages, then lock-in (and refinancing) would not be such big issues–but Americans as a group have shown a preference for fixed mortgage payments.

What about if US mortgage institutions could change? For example, what if mortgages could be “portable,” so that you keep your earlier mortgage when you move to another home? However, a “portable” mortgage would have a different interest rate than a “nonportable” mortgage–because it might end up applying to different houses and on average might be held for different periods of time–and so trying to just pass a law to make all mortgages portable may not be practical.

Another approach comes from Denmark. The idea is that some of the investors who in the past purchased mortgage-backed securities that involved low mortgage rates would like to ditch those investments when interest rates go up. In Denmark, it’s possible for a mortgage-holder in this situation to buy back their own mortgage at a significant discount–thus letting the mortgage-holder benefit when interest rates rise. If someone who took out a mortgage at a low interest rate could pay off the mortgage at a discount, they might become more willing to sell–and in this way make the housing market more flexible.

But setting aside these types of more fundamental reforms, the locked-in problem is more likely just to resolve slowly over time. As Mangum writes:

[T]he unwinding of lock-in will likely come about through normal housing market turnover—that is, through changes in family status, jobs, health, and so on. Thanks to this turnover, most new and existing mortgages will eventually converge to the market rate. But this unwinding will take time to run its course—and extra time because the rate at which people move is dampened by lock-in …

Do Rules to Limit High Government Debt Work?

Every government faces a temptation to make two popular choices at the same time: hold taxes lower and raise spending higher. But of course, the result is higher debt. Thus, a number of countries have been attempting to set up rules that would prevent governments from giving in to temptation. One immediate challenge for such rules is that they need to contain some flexibility: after all, tying the hands of government during a pandemic or a deep recession doesn’t seem sensible. A second challenge is that today’s government would prefer not to follow the rule passed by yesterday’s government.

A group of IMF economists describes these issues after studying a database of fiscal rules in more than 120 countries in “Fiscal Guardrails against High Debt and Looming Spending Pressures” (IMF Staff Discussion Note SDN/2025/004, September 2025, by Julien Acalin, Virginia Alonso-Albarran, Clara Arroyo, Waikei Raphael Lam, Leonardo Martinez, Anh D. M. Nguyen, Francisco Roch, Galen Sher, and Alexandra Solovyeva).

Here’s the rise in countries with a fiscal rule of some kind since 1990. The upward trend started in advanced economies, but has since been spreading.

The IMF authors describe how the fiscal rules are working along these lines:

Although earlier fiscal rules were often too rigid, efforts to introduce greater flexibility have not translated into stronger compliance. … [F]ewer than two-thirds of countries adhere to their deficit rules on average, with lower share for emerging market and developing countries and debt rules. … Fiscal deficits four years after the pandemic continue to exceed fiscal rule limits by a median of 2.0–2.5 percentage points of GDP for about 40 percent of advanced economies and 60 percent of EMDEs (Alonso and others, 2025b). In most countries, public debt has surpassed the ceilings in the debt rule by an average of 25 percentage points of GDP. Such large deviations from fiscal rule limits in many countries are driven by both severe shocks and limitations in the design of fiscal rules (Davoodi and others 2022a). During the severe shocks, the magnitudes and the share of countries that deviate from fiscal rule limits increased as expenditures or deficits tend to rise. But even in normal times, some countries have deficits and debt persistently exceeding their fiscal rule limits, partly because of multiple exclusions from the rules, limited fiscal oversight, or lack of fiscal adjustments to reduce debt and deficits. In recent years, fiscal adjustments have been limited, complicating the return to fiscal rule limits (Caselli and others 2022).

Choosing a fiscal rule is easy enough: for example, a government can specify that it will balance its budget annually, or that total government debt will not surpass a certain debt/GDP ratio, along with other approaches. The rule can also offer some flexibility for pandemics and recessions. The hard part is what to do when the government is either thinking about blowing right through the rule, or has already done so. To address the harder part, the IMF team describes two useful elements of a fiscal rule.

First, any agreement on a fiscal rule should also be an agreement on what corrective action will be taken if the rule is bent or broken. For example:

Ecuador and Spain mandate corrective actions when fiscal outcomes are close to the fiscal rule limits. Some countries implement progressive triggers with corresponding tighter measures. For example, Czech Republic sets thresholds on the debt-to-GDP ratios, each involving larger fiscal adjustments if triggered. … In the event of deviations, many fiscal rules require corrective actions to be implemented within one and a half or two years (Finland, Spain) after the breach, and sometimes within three years (Grenada). More stringent correction mechanisms may require remedial action to be included in the next budget. … Some fiscal rules … call for fully unwinding past cumulative deviations in the corrective mechanism. For example, Switzerland’s mechanism accumulates any deviation from the budgeted expenditures in a notional account, requiring the government to take sufficient measures to bring the expenditures within the limit in next three annual budgets if the negative balance in the account exceeds 6 percent of expenditure. Mechanisms in Germany, Grenada, and Jamaica require corrective actions for cumulative deviations. … Some countries specify particular measures; for instance, the fiscal rule in Slovak Republic mandates a freeze on public sector wages if debt exceeds 53 percent of GDP, with further spending cuts if debt surpasses 55 percent of GDP.
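As a stylized sketch of the notional-account idea (my own simplification, with invented numbers, loosely patterned on the Swiss mechanism described above): deviations of actual spending from budgeted spending accumulate in an account, and corrective action is triggered once the balance crosses a threshold share of expenditure.

```python
# Toy version of a notional-account correction rule (illustrative only).
budgeted = [100.0, 102.0, 104.0, 106.0]   # budgeted expenditure by year
actual   = [101.0, 105.0, 107.0, 106.0]   # actual expenditure by year
threshold_share = 0.06                     # correction trigger: 6% of expenditure

notional_account = 0.0
for year, (plan, spent) in enumerate(zip(budgeted, actual), start=1):
    notional_account += spent - plan       # overruns accumulate, underruns offset them
    if notional_account > threshold_share * plan:
        print(f"year {year}: balance {notional_account:.1f} exceeds "
              f"{threshold_share:.0%} of expenditure -> corrective measures required")
        # In the Swiss rule the excess must be unwound over the next budgets;
        # here we simply assume the correction happens and reset the account.
        notional_account = 0.0
    else:
        print(f"year {year}: balance {notional_account:.1f}, within the limit")
```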

In short, if your proposed fiscal rule doesn’t specify what consequences will result from breaking it, and the timing for those consequences, it isn’t much of a rule. The other main element of a successful fiscal rule is that, given that the current government is either breaking the rule or on the verge of doing so, it’s important to have some institutions that can create pressure from outside the government.

For example, many countries publish a “medium-term fiscal framework,” or MTFF, which seeks to reach broad agreement on budgetary decisions before getting down into the details. The IMF economists write:

The MTFF sets top-down limits on government expenditure and fiscal balance, guiding the annual budget process. The MTFF report should include the fiscal strategy, medium-term macro fiscal projections, measures for achieving fiscal targets, and fiscal risks assessment (Curristine and others 2024). The MTFF should be prepared and published before the budget, incorporating multiyear ceilings for fiscal aggregates, which can also be disaggregated into sector-specific or programmatic frameworks (for example, France, Rwanda, South Africa, Sweden) to facilitate the translation of targets into annual budgets and spending priorities.

I confess that in the context of the US federal budget, which seems to be run by a combination of continuing resolutions punctuated by an occasional omnibus bill that loads everything together, the idea of agreement on an MTFF seems very hard. But a number of countries do manage it. Another form of outside pressure is to have a “fiscal council” with a degree of operational independence, which has the power to point out and publicize whether the fiscal rules are under threat.

Fiscal oversight can take different institutional forms, ranging from parliamentary budget committees and auditor offices to independent fiscal councils. Fiscal councils can provide technical assessments of compliance with fiscal rules and can alert in year deviations. Their expertise is critical for evaluating risks to public finances and the realism of macro-fiscal forecasts in the budget and MTFF. Fiscal councils should have direct communication with the media. To secure their operational independence, they should have a well-defined mandate aligned with their resources, budget safeguards, and timely access to information …

In a US context, the idea of a fiscal council also doesn’t seem very realistic. The US government decided back in the 1930s that the Federal Reserve would have its goals set by laws duly passed by Congress and signed by the President, but would then be operationally independent from politics. But that arrangement is now under political challenge, and an independent fiscal council responsible for fiscal strategy and targets seems even less politically plausible.

Again, fiscal rules are easy. The hard part is specifying what corrective actions will happen if the rules aren’t followed, and what credible institutions will advocate for the rules and the corrective actions when needed.

Snapshots of the Global Robot Population

The International Federation of Robotics is a nonprofit but industry-based trade group. Each year the IFR issues a World Robotics Report, which costs way too much for me to get a copy. However, the report is accompanied by a useful press release with slides showing big-picture trends in the spread of robots around the world. Here are a few points that jumped out at me.

Here’s a figure showing the tripling in the total stock of industrial robots in the last decade, now reaching 4.6 million.

About half of those industrial robots are in China, with Japan and Korea also in the top five. The number of manufacturing jobs in China has been declining for several years now.

What are some of the shifting patterns in global robotics?

1) For a number of years now, the use of industrial robots has been primarily in two industries: electronics and cars. While those are still the two biggest users of industrial robots, they now account for less than half of the market and the “other industries” category is on the rise.

2) The IFR divides robots into two categories: the industrial robots just mentioned, but also a rising category of “service, mobile, and medical” robots. This includes, for example, robots that can autonomously drive around in warehouses and even pick items off shelves, robots for professional cleaning, search and rescue robots, and robots that can conduct laboratory tests or even assist with surgery.

3) Humanoid robots are not really a thing yet. Such robots are still in the R&D and prototype stage. As far as I can tell, the underlying issue is that robots are usually designed to carry out particular tasks, and when you do that, the best design for a specific task is usually not shaped like the human body.

Measuring Benefits of High-Skilled Immigration

How can economists measure the benefits of high-skilled immigration? The challenge is to use real-world data to separate this immigration from other factors, recognizing that anecdotes about particular high-skill immigrants don’t offer real evidence, and that correlation is not causation. Economists often tackle questions like this by looking for a “natural experiment”–that is, some kind of event or policy that created a shock of more (or less) high-skill immigration. Michael A. Clemens describes some of this evidence in his useful short essay, “New US curb on high-skill immigrant workers ignores evidence of its likely harms” (Peterson Institute for International Economics, September 22, 2025).

For example, consider the H-1B visa, which allows a US employer to hire a foreign professional–defined as someone who has at least a bachelor’s degree in a “specialty occupation” that typically involves advanced technology. The visa is typically for three years, extendable to six years. In 1998, Congress tripled the number of these H-1B visas. Then in 2004, Congress cut the number by more than half. Set aside for the moment the issue of whether these policy choices made sense, and just look at it as a research opportunity.

When Congress tripled and then halved the number of H-1B visas, the effects were not evenly distributed across US cities. Some cities saw much bigger increases and declines in H-1B visa-holders than others. Thus, one can compare urban areas that were similar in these technology industries before 1998, and then see what happened when some of these cities received an influx of talent while others did not.

In addition, more companies would like to hire through the H-1B visa program than there are visas available, so the visas are actually allocated across firms by lottery. Again, think of this as a research opportunity. A researcher can compare the companies that by random chance won the lottery and were allowed to hire additional skilled labor with the companies that were not.

In short, the results of such studies are not theoretical claims, but instead are real-world results based on fairly recent US experience. Clemens describes what the studies show:

That’s how we know that workers on H-1B visas cause dynamism and opportunity for natives. They cause more patenting of new inventions, ideas that create new products and even new industries. They cause entrepreneurs to found more (and more successful) high-growth startup firms. The resulting productivity growth causes more higher-paying jobs for native workers, both with and without a college education, across all sectors. American firms able to hire more H-1B workers grow more, generating far more jobs inside and outside the firm than the foreign workers take.

An important, rigorous new study found the firms that win a government lottery allowing them to hire H-1B workers produce 27 percent more than otherwise-identical firms that don’t win, employing more immigrants but no fewer US natives—thus expanding the economy outside their own walls. So, when an influx of H-1B workers raised a US city’s share of foreign tech workers by 1 percentage point during 1990–2010, that caused 7 percent to 8 percent higher wages for college-educated workers and 3 percent to 4 percent higher wages for workers without any college education.

The key point is that in high-tech growth industries, the number and size of firms and the number of jobs is not static. An increase in the number of high-skilled immigrant workers raises the number of jobs and wages for native-born workers across a range of skill levels. Openness to innovators and innovation is a key driver for a rising US standard of living.

I’ll just add that the H-1B visa program is undoubtedly imperfect, like most real-world policies. The receiver of the visa is effectively tied to the employer for a period of time, which creates a potential for abuse. There are sure to be some native-born high-skill workers who look at the influx of immigrant high-skilled workers and worry that it will negatively affect their job prospects or wages. Economic growth is disruptive. Economic stagnation will often appear less disruptive–until people all over the economy recognize that in a zero-growth or low-growth economy, the only way to get ahead is for someone else to have less. As Paul Romer has said: “Everyone wants progress. Nobody wants change.”

Hat tip: I was directed to the Clemens article by Tyler Cowen in a post at the ever-useful “Marginal Revolution” website.