Medicaid: What It Has Become

As Craig Garthwaite and Timothy Layton point out: “Originally a small, inexpensive safety-net program, Medicaid has grown into a major national health-insurance provider, covering nearly one in four Americans and more people than the public health insurance programs of the United Kingdom, Germany, or France.” They review the program and offer some recommendations in “Coverage Isn’t Care: An Abundance Agenda for Medicaid” (forthcoming in Advancing America’s Prosperity, edited by Melissa S. Kearney and Luke Pardue, published by the Aspen Economic Strategy Group).

I would add that whether you favor government-run national health insurance or oppose it, Medicaid is a major example of such a program in actual operation, and thus worthy of your attention. A few facts:

  • Total Medicaid spending by federal and state governments was $880 billion in 2024. “Medicaid is jointly financed by state and federal tax dollars while being designed and administered by each state. This setup leads to remarkable variation in the program’s structure across the country. … The program’s growth in size and scale means that it now comprises a substantial fraction of state budgets, with the average state spending almost one-third of its budget on Medicaid …” Indeed, some proposed changes to Medicaid from federal-level politicians focus on reducing federal spending by shifting a greater share of Medicaid spending to the states.
  • Medicaid “has expanded gradually from a program of categorical eligibility, restricted to specific low-income groups (such as pregnant women or the disabled), to—with the passing of the Affordable Care Act (ACA)—a broad-based entitlement for nearly all low-income adults.” Medicaid covered about 20 million people during its first two decades, up through the 1980s, but a series of expansions since the 1990s has roughly quadrupled Medicaid enrollment over the last three decades, reaching 78.5 million by December 2024.
  • “This growth has been coupled with a structural shift, with roughly 75 percent of beneficiaries now receiving care through private managed-care organizations rather than government-operated insurance programs. These firms include familiar names from other health insurance markets such as United, Aetna, Humana, and Centene, making the modern version of Medicaid quite different from the classic perception of a safety-net healthcare program run and operated by legions of government bureaucrats.”
  • “Medicaid both pays for 41 percent of births in the US and is the largest single payer for long-term care services in the US. It is the nation’s only true cradle-to-grave insurer. The medical requirements of these many different types of beneficiaries are meaningfully different, and it is therefore likely that the optimal insurance design differs, perhaps greatly, across these groups. Despite this fact, the program largely takes a one-size-fits-all approach and attempts to provide a single comprehensive set of benefits to all enrollees.”
  • “Medicaid involves relatively little expenditure per enrollee. Medicaid accomplishes this feat by paying very low rates to all medical providers. This frugality does not come without meaningful consequences for enrollees. Many providers simply refuse to accept Medicaid enrollees. Others consider treating these patients as a form of charity care. For example, many hospitals declare ‘underpayments’ from Medicaid as part of their contribution to the public good. … Beyond payment rates, state Medicaid programs also often make it fairly difficult for providers to actually get paid. Data suggests that fee-for-service (FFS) Medicaid is the biggest denier of bills from providers, with a “denial rate 17.8 percentage points higher than fee-for-service Medicare” (Gottlieb et al. 2018). Medicaid managed care is the second-most likely to deny, denying just under 10 percent of bills and challenging around 13 percent. Both FFS and managed-care Medicaid also have much longer times to payment, making working with Medicaid a much bigger hassle for providers than working with Medicare or commercial insurers.”
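Putting the first two bullets together, a rough back-of-envelope calculation gives average spending per enrollee. The figures come straight from the numbers above; keep in mind that this crude average masks enormous variation across groups (a healthy child and a long-term care patient are very different enrollees):

```python
# Back-of-envelope: average Medicaid spending per enrollee,
# using the totals quoted above. This is a crude average; actual
# spending varies enormously across beneficiary groups.
total_spending = 880e9   # total federal + state spending, 2024
enrollment = 78.5e6      # enrollees as of December 2024

per_enrollee = total_spending / enrollment
print(f"Average spending per enrollee: ${per_enrollee:,.0f}")  # about $11,200
```

That is well below what commercial insurance typically spends per covered life, which is consistent with the low provider payment rates described in the last bullet.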

This last point is a central focus of the proposals offered by Garthwaite and Layton. As they say in their title, being covered by Medicaid is not the same as receiving actual health care through that coverage. On the subject of Medicaid reform, they write:

The current [Medicaid] program is defined by a stark economic tension—it promises access to the mainstream medical system while only providing the funding that can support a two-tiered one. This contradiction was manageable when Medicaid was a small program, but now that it covers a quarter of Americans, there is potential for an access crisis. Policymakers must therefore confront a fundamental choice: Continue to chase the mirage of equal access, or build a system that delivers abundant care to all Medicaid beneficiaries within its budget. We argue for the latter. An honest assessment reveals that an implicit—and dysfunctional—two-tiered system is already the reality. …

This effort should begin by explicitly acknowledging the existence of an implicit two-tiered system whereby Medicaid beneficiaries have coverage but lack access to high-quality medical care. Productive reforms should focus on a redesigned program that fosters an abundant supply of providers of basic care for the Medicaid tier. Our proposal focuses on targeted regulatory relief and the integration of new artificial-intelligence technologies (AI) to create lower-cost, sustainable business models for providers who primarily serve Medicaid patients, with the goal of ensuring abundant access to basic care. While some might argue that these types of reforms provide a lower standard of care for low-income Americans and confine them to lower-quality healthcare services, we emphasize that the goal is not to diminish the quality of care received by Medicaid enrollees. Instead, our proposals aim to help the large number of Medicaid patients who currently have access to no care (or very limited care) under the current system to have easy and abundant access to (at least) basic healthcare services.

In that spirit, Garthwaite and Layton argue for allowing the immigration of additional internationally-trained health care providers to serve Medicaid patients, for giving intermediate-level health care practitioners like nurse practitioners and physician assistants greater autonomy in providing certain kinds of care, and for developing methods of AI-augmented care. They write: “For a beneficiary whose alternative is no access to care, the use of a new, well-designed technology is a clear improvement.” Frankly, I’d be happy to see these kinds of reforms implemented across the entire US health care system. But using them in Medicaid would at least be a start.

The US Financial Services Industry

It’s a standard pattern that as an economy develops, its financial sector becomes a larger share of GDP. After all, banks and bond and stock markets have less of a presence in low-income countries. But in addition, the shape of the financial services industry changes over time. Robin Greenwood, Robert Ialenti, and David Scharfstein explore these shifts and others in “The Evolution of Financial Services in the United States” (Annual Review of Financial Economics, 2025, 17: 189-206). The abstract says:

This article surveys the literature on the historical growth and transformation of the US financial sector. The sector expanded rapidly between 1980 and 2006, during which its contribution to GDP rose from 4.8% to 7.6%. After the global financial crisis, the size of the sector stabilized at approximately 7% of GDP. After reviewing this literature, we examine recent developments, including the continued growth of high-fee alternative asset management and the shift away from banks to lending by nonbank financial intermediaries. We interpret both the growth and recent evolution of the sector as reflecting a continued transition to a more market-based financial system, with risk migrating away from banks and into markets.

To illustrate, here’s a figure showing the US financial sector as a share of GDP going back to 1950, using a bunch of different measures. NIPA stands for “national income and product accounts,” which is the framework used to calculate GDP. The references to “compensation” appear because, for an economist, the output of a sector can be measured by the compensation paid to that sector. You can see the long rise leading up to the Great Depression, another long rise leading up to the 1990s, and then a levelling out since the Great Recession.

Much of the paper focuses on how the internal structure of the US financial sector has shifted since about 2006, although the total sector has remained much the same. There are two interrelated changes here. The authors write:

First, the growth of alternative investing (hedge funds, private equity) has continued to drive income in the securities and asset management subsector, with the distribution of fees becoming even more of a barbell—high fees for alternative investing and very low fees for traditional asset management. Second, there has been a notable shift in credit intermediation away from commercial banks. We argue that these two developments are likely connected. On the demand side, continued growth of pension funds has fueled demand for high-fee alternative investments, including private credit funds, as well as securities. On the supply side, post-GFC [global financial crisis] financial regulation has supported the development of the nonbank lending sector.

In short, it’s become easier and cheaper to invest using basic tools like a stock market fund. However, more specialized vehicles like private equity funds and hedge funds have expanded and are still able to charge high fees. Moreover, banks have faced increasing regulatory restrictions since the Great Recession. Thus, many banks no longer make money by lending for home mortgages and business purposes and using the interest received to pay expenses and some return to savers. Instead, more banks make money by charging fees, and when they do lend money, they often resell the loan to another financial-sector player for collection. But as banks take on less risk, those who want to borrow are turning to alternative nonbank sources of finance. Businesses that want to borrow are more likely to do so through the bond market, private credit funds, or business development companies. Nonbanks owned 40% of home mortgages in 2007, but 65% by 2023.

A big player behind these shifts is large pension funds. The authors explain: “The growth of securities markets and nonbank credit comes alongside the growth of pension funds. And as banks have increased the allocation of their balance sheets to safer assets, pension funds and other investors like insurance companies have stepped in to hold risky credit assets. Thus, both a reduction in the demand for risky credit from banks and an increase in the demand for risky credit by institutional investors can help explain the growth of alternatives, securities, and nonbank credit.”

The broad pattern of the changes is that the investments most people hold in banks or stock market funds are getting easier, cheaper, and probably safer, while investments with a higher degree of risk are being sectioned off into hedge funds, private equity funds, corporate bonds, private credit funds, and other vehicles. Ultimately, it’s not clear to the authors (or to me) whether the evolving shape of the US financial system is better for supporting US economic growth, or less likely to melt down during a crisis.

How Much are US Firms Using AI Tools?

A question I’m getting asked a lot recently is whether all this fuss about AI is just a bubble. My standard answer is that two things are happening here, and it’s wise not to confuse them. One is the sky-high stock prices for companies closely involved in AI. The other is whether AI itself will actually end up mattering a great deal to the US economy.

I have no particular insight into the short- or medium-term dynamics of the stock market, much less individual stocks. But I wouldn’t be much surprised if stock prices for some of the currently leading AI companies drop substantially at some point in the next few years, nor would I be surprised if some AI companies I’ve never heard of see their stock price boom. It’s also perfectly possible that the stock price of some leading AI companies falls, but the reality of AI in the production of goods and services rises.

The stock market isn’t actually part of the real economy of goods and services. This is literally true: the stock market isn’t part of the gross domestic product, because when stock is bought and sold, there is just an exchange of an asset, but nothing is actually produced. For the same reason, building a new house is counted as part of GDP (that is, something is produced), but selling an existing house is not part of GDP (it’s just an exchange of an asset). There’s a saying that helps to clarify the separation between stock market prices and the real economy: “The dot-com stock market boom of the 1990s was a bubble, but the Internet was not a bubble.”

Of course, although the stock market and the real economy are not the same, there are feedbacks between them. If the stock market drops, I’ll see it in my retirement account, and the signal of a declining stock market will make it more expensive for firms to raise capital. Conversely, the stock price for AI companies will depend on the extent to which their products create value for firms and individuals. Here, I’ll focus on some recent survey evidence on how broadly AI tools are being adopted in corporate America–and for all the talk about AI, the adoption rate so far is lower than you might expect.

As one example, the US Census Bureau runs a Business Trends and Outlook Survey based on responses from 1.2 million US businesses. Here are some results from the BTOS as updated on November 20, 2025. Here’s a figure showing businesses that are using AI tools: the blue line shows those that have used such tools in the last two weeks, which has risen from 5% of firms at the start of 2024 to about 10% near the end of 2025. The orange line shows firms that expect to use AI tools in the next six months. The lines are trending up, but not skyrocketing. A more detailed breakdown by industry shows higher levels of use in information industries, finance, and professional/scientific/technical services, and near-zero levels in manufacturing and retail.

Here’s a breakdown of the Census data by size of firm. As you can see, the use of AI by the biggest firms (top line) seems to have levelled off or even dropped a bit in the last six months or so. The category of firms with the biggest ongoing rise in AI use are the small firms (lightest blue line) with 1-4 employees.

Here are some other survey results, these from the National Opinion Research Center at the University of Chicago. Malihe Alikhani, Ben Harris, and Sanjay Patnaik report the results in “How are Americans using AI? Evidence from a nationwide survey” (Brookings Institution, November 25, 2025). The sample here is about 1,100 people, selected to be nationally representative. They emphasize that those with higher education levels are more likely to be using AI tools.

This figure shows how AI tools are being used in the workplace, broken down by education level. The most common use, reported by those with a BA degree, is writing and editing documents. But many of these groups report 10% or fewer people who use AI tools, “supported by your institution,” in the workplace. Again, the current use of these tools is lower than one might expect.

Indeed, the NORC survey asks some follow-up questions on whether workers feel that the AI tools are improving their productivity, and the results are decidedly mixed:

The impact of generative AI on worker productivity is often unclear, even to the workers themselves. Only 19% of all respondents report that AI increased their productivity in their daily tasks, and only 4% say it increased their productivity significantly. Even among respondents with a bachelor’s degree or more, just 28% say that AI increased their productivity in daily tasks. More than one in five respondents report that their daily productivity remained the same (22%) and over half of all respondents say they are either not sure about the effect of AI on their productivity or say it does not apply to them (53%).

Putting these kinds of results together, it’s perhaps no surprise to see a story in the Economist magazine (November 26 issue) headlined “Investors expect AI use to soar. That’s not happening” and subtitled “Recent surveys point to flatlining business adoption.” The article whimsically suggests that the GPT in ChatGPT might stand for “Generally Paused Technology.” Using the Census data above on the size of businesses using AI tools, as well as other survey and research results, the article estimates that the share of Americans using AI at work has recently declined slightly.

Of course, the uncertainties around how to design AI tools for business and personal use are very large at this stage. Sometimes when a remarkable new technology is being adopted, there is a period of several years during which many people are learning about the technology and multiple applications are being developed, but this doesn’t show up in the big-picture statistics. Then at some point, some of those applications gain traction with a broad group of users and the technology takes off. But at least so far, with the current abilities of AI and the AI applications currently available to US firms, that launching point hasn’t happened yet.

Origins of Thanksgiving

Thanksgiving is a day for a traditional menu, and part of my holiday is to reprint this annual column on the origins of the day.

The first presidential proclamation of Thanksgiving as a national holiday was issued by George Washington on October 3, 1789. But it was a one-time event. Individual states (especially those in New England) continued to issue Thanksgiving proclamations on various days in the decades to come. It wasn’t until 1863 that a magazine editor named Sarah Josepha Hale, after 15 years of letter-writing, prompted Abraham Lincoln to designate the last Thursday in November as a national holiday–a pattern which then continued into the future.

An original and thus hard-to-read version of George Washington’s Thanksgiving proclamation can be viewed through the Library of Congress website. The economist in me was intrigued to notice that some of the causes for giving of thanks included “the means we have of acquiring and diffusing useful knowledge … the encrease of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.”

Also, the original Thanksgiving proclamation was not without some controversy and dissent in the House of Representatives, as an example of unwanted and inappropriate federal government interventionism. As reported by the Papers of George Washington website at the University of Virginia:

The House was not unanimous in its determination to give thanks. Aedanus Burke of South Carolina objected that he “did not like this mimicking of European customs, where they made a mere mockery of thanksgivings.” Thomas Tudor Tucker “thought the House had no business to interfere in a matter which did not concern them. Why should the President direct the people to do what, perhaps, they have no mind to do? They may not be inclined to return thanks for a Constitution until they have experienced that it promotes their safety and happiness. We do not yet know but they may have reason to be dissatisfied with the effects it has already produced; but whether this be so or not, it is a business with which Congress have nothing to do; it is a religious matter, and, as such, is proscribed to us. If a day of thanksgiving must take place, let it be done by the authority of the several States.”

Here’s the transcript of George Washington’s Thanksgiving proclamation from the National Archives.

Thanksgiving Proclamation

By the President of the United States of America. a Proclamation.

Whereas it is the duty of all Nations to acknowledge the providence of Almighty God, to obey his will, to be grateful for his benefits, and humbly to implore his protection and favor—and whereas both Houses of Congress have by their joint Committee requested me “to recommend to the People of the United States a day of public thanksgiving and prayer to be observed by acknowledging with grateful hearts the many signal favors of Almighty God especially by affording them an opportunity peaceably to establish a form of government for their safety and happiness.”

Now therefore I do recommend and assign Thursday the 26th day of November next to be devoted by the People of these States to the service of that great and glorious Being, who is the beneficent Author of all the good that was, that is, or that will be—That we may then all unite in rendering unto him our sincere and humble thanks—for his kind care and protection of the People of this Country previous to their becoming a Nation—for the signal and manifold mercies, and the favorable interpositions of his Providence which we experienced in the course and conclusion of the late war—for the great degree of tranquillity, union, and plenty, which we have since enjoyed—for the peaceable and rational manner, in which we have been enabled to establish constitutions of government for our safety and happiness, and particularly the national One now lately instituted—for the civil and religious liberty with which we are blessed; and the means we have of acquiring and diffusing useful knowledge; and in general for all the great and various favors which he hath been pleased to confer upon us.

and also that we may then unite in most humbly offering our prayers and supplications to the great Lord and Ruler of Nations and beseech him to pardon our national and other transgressions—to enable us all, whether in public or private stations, to perform our several and relative duties properly and punctually—to render our national government a blessing to all the people, by constantly being a Government of wise, just, and constitutional laws, discreetly and faithfully executed and obeyed—to protect and guide all Sovereigns and Nations (especially such as have shewn kindness unto us) and to bless them with good government, peace, and concord—To promote the knowledge and practice of true religion and virtue, and the encrease of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.

Given under my hand at the City of New-York the third day of October in the year of our Lord 1789.

Go: Washington

Sarah Josepha Hale was the editor of a magazine first called Ladies’ Magazine and later Godey’s Lady’s Book from 1828 to 1877. It was among the most widely-known and influential magazines for women of its time. Hale wrote to Abraham Lincoln on September 28, 1863, suggesting that he set a national date for a Thanksgiving holiday. From the Library of Congress, here’s a PDF file of Hale’s actual letter to Lincoln, along with a typed transcript for 21st-century eyes. Here are a few sentences from Hale’s letter to Lincoln:

“You may have observed that, for some years past, there has been an increasing interest felt in our land to have the Thanksgiving held on the same day, in all the States; it now needs National recognition and authoritive fixation, only, to become permanently, an American custom and institution. … For the last fifteen years I have set forth this idea in the “Lady’s Book”, and placed the papers before the Governors of all the States and Territories — also I have sent these to our Ministers abroad, and our Missionaries to the heathen — and commanders in the Navy. From the recipients I have received, uniformly the most kind approval. … But I find there are obstacles not possible to be overcome without legislative aid — that each State should, by statute, make it obligatory on the Governor to appoint the last Thursday of November, annually, as Thanksgiving Day; — or, as this way would require years to be realized, it has ocurred to me that a proclamation from the President of the United States would be the best, surest and most fitting method of National appointment. I have written to my friend, Hon. Wm. H. Seward, and requested him to confer with President Lincoln on this subject …”

William Seward was Lincoln’s Secretary of State. In a remarkable example of rapid government decision-making, Lincoln responded to Hale’s September 28 letter by issuing a proclamation on October 3. It seems likely that Seward actually wrote the proclamation, and then Lincoln signed off. Here’s the text of Lincoln’s Thanksgiving proclamation, which characteristically mixed themes of thankfulness, mercy, and penitence:

Washington, D.C.
October 3, 1863
By the President of the United States of America.
A Proclamation.

The year that is drawing towards its close, has been filled with the blessings of fruitful fields and healthful skies. To these bounties, which are so constantly enjoyed that we are prone to forget the source from which they come, others have been added, which are of so extraordinary a nature, that they cannot fail to penetrate and soften even the heart which is habitually insensible to the ever watchful providence of Almighty God. In the midst of a civil war of unequaled magnitude and severity, which has sometimes seemed to foreign States to invite and to provoke their aggression, peace has been preserved with all nations, order has been maintained, the laws have been respected and obeyed, and harmony has prevailed everywhere except in the theatre of military conflict; while that theatre has been greatly contracted by the advancing armies and navies of the Union. Needful diversions of wealth and of strength from the fields of peaceful industry to the national defence, have not arrested the plough, the shuttle or the ship; the axe has enlarged the borders of our settlements, and the mines, as well of iron and coal as of the precious metals, have yielded even more abundantly than heretofore. Population has steadily increased, notwithstanding the waste that has been made in the camp, the siege and the battle-field; and the country, rejoicing in the consciousness of augmented strength and vigor, is permitted to expect continuance of years with large increase of freedom. No human counsel hath devised nor hath any mortal hand worked out these great things. They are the gracious gifts of the Most High God, who, while dealing with us in anger for our sins, hath nevertheless remembered mercy. It has seemed to me fit and proper that they should be solemnly, reverently and gratefully acknowledged as with one heart and one voice by the whole American People. 
I do therefore invite my fellow citizens in every part of the United States, and also those who are at sea and those who are sojourning in foreign lands, to set apart and observe the last Thursday of November next, as a day of Thanksgiving and Praise to our beneficent Father who dwelleth in the Heavens. And I recommend to them that while offering up the ascriptions justly due to Him for such singular deliverances and blessings, they do also, with humble penitence for our national perverseness and disobedience, commend to His tender care all those who have become widows, orphans, mourners or sufferers in the lamentable civil strife in which we are unavoidably engaged, and fervently implore the interposition of the Almighty Hand to heal the wounds of the nation and to restore it as soon as may be consistent with the Divine purposes to the full enjoyment of peace, harmony, tranquillity and Union.

In testimony whereof, I have hereunto set my hand and caused the Seal of the United States to be affixed.

Done at the City of Washington, this Third day of October, in the year of our Lord one thousand eight hundred and sixty-three, and of the Independence of the United States the Eighty-eighth.

By the President: Abraham Lincoln
William H. Seward,
Secretary of State

An Economist Chews over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there’s anything wrong with that. [This is an updated, amended, rearranged, and cobbled-together version of a post that was first published for Thanksgiving Day 2011, during the year this blog began.]

Maybe the biggest news about Thanksgiving dinner this year is that the overall cost of a traditional meal is down 5% from last year–although higher than back in 2019. For the economy as a whole, the starting point for measuring inflation is to define a relevant “basket” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical US household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

Over time, the total cost of the Thanksgiving dinner has more-or-less tracked the overall rate of inflation. As in most years, the prices of some goods in the overall Thanksgiving dinner basket rise, while others fall. One change in the last few years is that the price of the turkey itself has become a smaller share of the total. The American Farm Bureau notes that the wholesale price of turkeys has actually risen, but enough stores are offering special deals on turkey that the retail price is lower.
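The mechanics of a fixed-basket index like the Farm Bureau’s are simple: price the same list of items each year and compare the totals. Here is a minimal sketch with hypothetical items and prices, not the actual survey values:

```python
# Fixed-basket (Laspeyres-style) price index: price the same basket
# of goods in two years and compare the totals. Items and prices are
# hypothetical, not the Farm Bureau's actual survey values.
last_year = {"turkey": 25.00, "stuffing": 4.00, "pumpkin pie mix": 4.50}
this_year = {"turkey": 22.00, "stuffing": 4.40, "pumpkin pie mix": 4.60}

cost_last = sum(last_year.values())   # 33.50
cost_this = sum(this_year.values())   # 31.00

index = 100 * cost_this / cost_last   # last year's basket = 100
print(f"Index: {index:.1f} (basket cost {index - 100:+.1f}% vs. last year)")
```

Notice how, with a fixed basket, a drop in the turkey price can pull the whole index down even while the side-dish prices rise, which is exactly the pattern the Farm Bureau describes.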

Of course, for economists the price is only the beginning of the discussion of the turkey industry supply chain. This is just one small illustration of the old wisdom that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. The last time the U.S. Department of Agriculture did a detailed “Overview of the U.S. Turkey Industry” appears to be back in 2007, although an update was published in April 2014. Some themes about the turkey market waddle out from those reports on both the demand and supply sides.

On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1960s up to the early 1990s: for example, from consumption of 6.5 pounds of turkey per person per year in 1960 to 17.8 pounds per person per year in 1991. But since the early 2000s, turkey consumption has declined somewhat, falling to 16 pounds per person in 2019, 13.8 pounds per person in 2024, and projected to fall further in 2025. This ongoing fall in demand for turkey is surely part of the reason why turkey prices have sagged as well.

On the supply side, turkey companies are what economists call “vertically integrated,” which means that they either carry out all the steps of production directly, or control these steps with contractual agreements. Over time, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

U.S. agriculture is full of examples of remarkable increases in yields over periods of a few decades, but such examples always drop my jaw. I tend to think of a “turkey” as a product that doesn’t have a lot of opportunity for technological development, but clearly I’m wrong. Here’s a graph showing the rise in size of turkeys over time from the 2007 report.

A more recent update from a news article shows this trend has continued. Indeed, most commercial turkeys are now bred through artificial insemination, because the males are too heavy to do otherwise.

If this post whets your appetite for additional discussion, here’s a post on the processed pumpkin industry and another on some economics of mushroom production. Good times! Anyway, Thanksgiving is my favorite holiday. Good food, good company, an always useful reminder to me to count my blessings, with no gifts, relatively little ceremony–and all these good topics for conversation. What’s not to like?

US Poverty and Policy

The US economy is the largest in the world, and at least among the large-population countries of the world (setting aside smaller economies strongly influenced by international capital flows, like Monaco, the Cayman Islands, and Ireland, or by oil resources), it also has the highest per capita GDP. But at the same time, according to the US Census Bureau, 10.6% of the US population–that is, 35.9 million people–had incomes below the official US poverty line in 2024. Melissa S. Kearney and James Sullivan tackle the big questions of US anti-poverty policy in “Beyond the Myths: A Clearer Path to Poverty Alleviation in America” (forthcoming as a chapter in the book Advancing America’s Prosperity, edited by Melissa S. Kearney and Luke Pardue, for the Aspen Economic Strategy Group).

Kearney and Sullivan start by reviewing well-known shortcomings of the standard poverty line: for example, it does not include non-cash benefits like Medicaid and food stamps, and it focuses on income rather than consumption. When you use alternative measures of poverty that address these issues, the US poverty rate has diminished substantially.

Thus, the top gray dashed line in the figure shows the official poverty line. The red line shows the alternative Supplemental Poverty Measure calculated by the Census Bureau, which includes non-cash benefits, and the bottom line measures consumption levels rather than income levels. The last two measures are “anchored” to produce the same poverty rate as the official measure in 1980, but then diverge substantially from the official rate over time. It seems fair to say that genuine progress has been made in reducing poverty, much more than is shown by measurements that use the official poverty line. They also provide data that the poverty rates for children, and for families with an unmarried parent and children, have fallen more dramatically than the overall figures here.

This figure shows measurements of specific indicators of well-being: how many rooms, having a dishwasher, air conditioning, a car, and so on. The red lines show the measures for those in the middle-income quintile; the blue lines show the bottom income quintile. Most of these measures have risen substantially in the last four decades.

Kearney and Sullivan also review the evidence on how much existing antipoverty programs reduce poverty: Medicaid, the Children’s Health Insurance Program, the Earned Income Tax Credit, the Child Tax Credit, Temporary Assistance for Needy Families, the Supplemental Nutrition Assistance Program, the Housing Choice Voucher program, and others. There are of course ongoing controversies over how to design these programs. But the changes in these programs over time are also clearly connected–by the timing of poverty reductions and the groups most affected at each time–to the lower poverty rates reported above.

The question of how to reduce poverty further is a tricky one. The authors emphasize that for some people, poverty is a sort of trampoline: you fall into it, you need some one-time help, but then you bounce out. But for other people, poverty is an ongoing state of life, or even an intergenerational condition. Kearney and Sullivan are skeptical of proposals that a guaranteed income boost will help such people. They write:

[T]he best evidence to date on guaranteed-income programs suggests that the provision of monthly income assistance does not result in recipients using the provided cash to make investments in their education or personal situation in ways that catapult them to economic self-sufficiency. The most compelling causal evidence to date on the likely effects of a guaranteed basic income in the US come from a pair of studies written in 2024 by a research team consisting of academic economists from four different universities and researchers from OpenResearch (Vivalt et al. 2024; Miller et al. 2024). These studies analyze the results of a large-scale randomized controlled trial (RCT)—the OpenResearch Unconditional Income Study (ORUS)—that ran between 2020 and 2023. As part of this RCT, three thousand low-income adults ages 21 to 40 in Illinois and Texas were randomly assigned into a treatment group of a thousand adults who received $1,000 in unconditional cash per month for three years and a control group of two thousand participants who received a much more modest $50 per month for three years.

As these studies report, a common pattern was that recipients of additional cash (and their partners) worked fewer hours, and did not show an “increase in time spent taking care of family members, volunteering, or pursuing education or training.” The additional income did not improve physical or mental health, either. Another basic income program aimed at low-income mothers didn’t show positive effects on child development. There is again room to think about how to redesign these kinds of basic income programs in the hope of better results, but as Kearney and Sullivan conclude: “Just giving people income is not a sufficient response to the challenge of persistent poverty.”

As an alternative, Kearney and Sullivan argue: “We propose four specific areas of investment: advancing skills and educational attainment; building strong families; addressing individual barriers to flourishing; and boosting upward mobility among poor children through investments in their nutrition, health, early childhood education, and housing.” Unlike the results of studies on guaranteed basic income, they present an array of evidence on programs in these areas that have demonstrated a substantial payoff. Moving forward, the strategy for fighting persistent poverty should build on the success of the existing programs, but focus more closely on targeted programs with measurable outcomes.

Semiconductors: Moore’s Law and Rock’s Law

The semiconductor industry is a model of ongoing and rapid technological change, with an ongoing race between increased technological capabilities and higher costs of building production facilities. Brian Albrecht, Geoffrey A. Manne, David Teece, and Mario A. Zúñiga provide a readable overview in “From Moore’s Law to Market Rivalry: The Economic Forces That Shape the Semiconductor Manufacturing Industry” (International Center for Law & Economics, November 12, 2025).

Back in 1965, Gordon Moore, one of the founders of Intel, put forth the proposition that the number of transistors on a computer chip would double every two years. Moore’s law, as it became known, has turned out to be almost eerily accurate since then, which in turn has driven computing power that is vastly cheaper over time. (For earlier posts about Moore’s law around the time of its 50th anniversary, see here and here.) For semiconductor firms themselves, Moore’s law is also a threat: keep up or be left behind.

But at about the same time, Moore also enunciated a second and lesser-known law, which he attributed to an Intel board member named Arthur Rock. Rock’s law was that the cost of building a semiconductor fabrication plant would double every four years–that is, not nearly as fast as the technical capacities of the underlying semiconductor chips, but still quite a meaningful increase. A single advanced semiconductor fab now costs north of $10 billion. The authors note that for the chipmaker TSMC, capital expenditures are typically 30–50% of revenue, and if you include R&D, total firm investment often exceeds 40–60% of revenue. Moreover, every cutting-edge new computer chip is actually a bundle of new technologies: “a multi‑front physics problem involving (among other things) lithography, new transistor structures, novel materials, power delivery, metrology [methods of measurement], and advanced packaging.”
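The gap between the two doubling rates is worth pausing over. A quick compounding sketch (my own illustration with normalized starting values, not figures from the article) shows how a two-year doubling period pulls away from a four-year one:

```python
# Illustrative compounding of the two "laws" from normalized starting
# values of 1 (my own example, not data from the article).

def doublings(start_value, years, doubling_period):
    """Value after `years` of doubling every `doubling_period` years."""
    return start_value * 2 ** (years / doubling_period)

years = 20
# Moore's law: transistor counts double roughly every 2 years.
transistor_growth = doublings(1, years, 2)
# Rock's law: fab construction costs double roughly every 4 years.
fab_cost_growth = doublings(1, years, 4)

print(f"Over {years} years: transistors x{transistor_growth:.0f}, "
      f"fab cost x{fab_cost_growth:.0f}")
```

Over two decades, transistor counts rise by a factor of about 1,000 while fab costs rise "only" about 32-fold, which is why computing power gets cheaper per transistor even as each new fab gets staggeringly more expensive in absolute terms.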

Thus, the competitive dynamics of the semiconductor industry are a race between expectations of ever-rising technological gains and ever-rising costs of building a plant. This is clearly not an industry for the faint of heart or the lightly capitalized.

The resulting industry structure has involved an evolution from chip firms that both designed and made chips to separate firms for chip design and manufacturing: that is, some of the best-known “semiconductor” companies design chips, but do not actually make them. The authors write:

As the cost and technological demands of manufacturing at the bleeding edge became unsustainable for non-specialized firms, the development of standardized design tools and collaboration routines lowered coordination costs. These enabled a separation of design and manufacturing which, in turn, facilitated valuable specialization and risk-sharing: Independent foundries aggregate demand across multiple customers, achieving economies of scale difficult for IDMs [integrated design manufacturers] to reach, while also keeping their expensive fabrication equipment fully utilized.

This specialization created concrete economic benefits. Independent foundries achieve higher equipment utilization by serving diverse customers with different demand cycles. They spread enormous R&D costs across multiple clients rather than bearing them alone. The fabless model allowed companies like Nvidia to focus entirely on GPU architecture without operating fabs, while Qualcomm could specialize in wireless chips, and Broadcom in networking semiconductors. Meanwhile, each could access the same cutting-edge manufacturing technology that previously only giants like Intel could afford.

These sophisticated relationships are governed by relational contracts—contractual relationships whose precise terms depend on cooperative adjustments by the parties over time—and complex governance structures that solve extraordinary coordination challenges more efficiently than vertical integration or spot-market transactions. When capital investments reach tens of billions of dollars per facility and must be committed years before demand is certain, both foundries and their customers make relationship-specific investments that create bilateral dependence.

You can read the article by Albrecht, Manne, Teece, and Zúñiga for details on how these forces have shaped the evolution of the semiconductor industry. Here, I will just mention a few broader thoughts. For additional details on the chip industry, I’d also recommend the article by Chad P. Bown and Dan Wang, “Semiconductors and Modern Industrial Policy,” in the Fall 2024 Journal of Economic Perspectives (where I work as Managing Editor).

The “semiconductor industry” is not well-described as a single industry in which firms make reasonably similar products–unlike, say, the car industry. Instead, it’s an interwoven and overlapping mixture of equipment suppliers, chip designers, chip manufacturers, and end-users. The main focus of US industrial policy has been the manufacturing portion of the chip industry, but again, that can only be understood as part of a broader ecosystem.

In short, it’s nice that TSMC is building advanced semiconductor manufacturing plants in the Arizona desert. But competition in the semiconductor industry moves fast and along these multiple dimensions. Half or more of the revenue from the current factory needs to be immediately reinvested, so you are rebuilding new waves of technology on a continual basis. Competition in the semiconductor industry is about making enormous financial investments, funded across ever-shifting groups of intertwined and collaborating firms, and hoping you get it right more often than not. Meanwhile, in the competitive race of the semiconductor industry, the finish line keeps retreating ahead of you.

Are the Entry-Level Jobs Drying Up for Young Adults?

Several recent reports have noted that unemployment rates for recent college graduates have been on the rise, which in turn has led to speculation that perhaps the new AI tools are already leading to fewer opportunities for such workers. Alexander Cline and Barış Kaymak of the Cleveland Fed provide some useful nuance to the discussion of employment among college graduates as compared to high school graduates in “Are Young College Graduates Losing Their Edge in the Job Market?” (Economic Commentary, November 24, 2025), although they tread lightly (as is appropriate) about assigning causes.

The left-hand figure shows that unemployment rates for those with only a high school degree are consistently higher than for those with a college degree. These data include only young adults in the 22-27 age group. However, the size of the unemployment gap varies. Since the aftermath of the Great Recession back around 2010, the two rates have been generally converging (except during the pandemic). At present, the gap is the lowest it has been for a half-century.

A glass-half-full type would also point out that although the pattern over the last 15 years is striking, most of the convergence between these unemployment rates has happened because the unemployment rate for those with only a high school degree spiked so violently during the Great Recession. Also, although the gap between unemployment rates for the two groups is small, it was also quite low in the late 1970s, late 1980s, and late 1990s. Also, comparing unemployment rates for these two groups doesn’t take into account how the share of those attending college has risen in the last half-century.

We can dig into these labor market patterns more deeply. There is something called the “unemployment exit rate,” which shows in a given month how many of the unemployed depart from that category. Again, the patterns shown here are for the 22-27 age group. The figures use two different methods to calculate unemployment exit rates (details on method in the article). Here, the key point is that the historical pattern was that young-adult college graduates were more likely to exit unemployment than high school graduates, but around 2019, this pattern flip-flopped. Since then, unemployed young adult high school graduates have become more likely to leave unemployment than the college graduates.

In theory, one could exit unemployment either by finding a job, or by just not looking for a job any more (in which case you are counted as “out of the labor force,” rather than unemployed). But the authors show that most of the shift is that high school graduates have become more likely to find jobs.

The authors argue that labor demand in the US economy has in fact shifted: labor demand used to favor those with higher education levels, but now it has shifted to education-neutral growth in labor demand. They write:

Although the narrowing unemployment gap was noticed recently, the underlying factors that contribute to this trend have been operating for much longer. The prolonged jobless recovery after 2008 particularly affected high school graduates, obscuring the secular convergence of job-finding rates between college-educated and high-school-educated workers. It does not appear that recent developments are attributable to postpandemic factors alone. … Developments related to AI, which may be affecting job-finding prospects in some cases, cannot explain the decades-long decline in the college job-finding rate.

It remains true that when a young adult with a college degree does find a job, it pays better than when a young adult with only a high school degree does so. But one might say that young adult college graduates used to benefit from both a lower unemployment rate and higher wages, but the advantage of a lower unemployment rate has been declining, so now the main labor advantage is higher wages after finding a job.

Do Exclusionary Preferences Explain Why Some Americans Are Happy about Paying for Tariffs?

The evidence that the Trump administration tariffs are raising prices paid by consumers continues to accumulate, most notably in the recent decision by the Trump administration to exempt several hundred food products from tariffs to make them more affordable for US households. (After all, if the tariffs were all paid by producers in foreign countries as originally promised, reducing the tariffs would only benefit those same foreign producers, not US consumers.)

A majority of the American public believe the Trump tariffs are pushing up prices, too. Here are results from one recent poll by ABC News/Washington Post/Ipsos:

Another recent poll by Economist/YouGov found similar results.

  • Americans are more likely to favor the U.S. increasing than decreasing trade with foreign countries (42% vs. 11%); 27% think the amount of foreign trade should be kept the same
  • A majority (55%) of Americans think that foreign trade makes the average American a lot or somewhat better off; 15% think it makes the average American worse off and 11% think it has no effect
  • Only 13% of Americans want tariffs on foreign goods to be increased; 47% would prefer for them to be decreased and 24% want them to be kept the same
  • Nearly three-quarters (73%) of Americans say that Trump’s tariffs have increased the prices they’ve paid either a lot (40%) or slightly (33%)
  • Majorities of Democrats (92%), Independents (72%), and Republicans (56%) say they’ve paid higher prices as a result of Trump’s tariffs

But these survey results still leave open the question of why many Americans continue to support the tariffs, even though at least some of them recognize that they are paying higher prices for doing so. Alex Imas, Kristóf Madarász, and Heather Sarsons propose an answer in the form of what they call “exclusionary preferences.” A Research Brief with a readable overview and the full working paper are both available.

The basic idea of “exclusionary preferences” is that, for a plurality of people, “We propose that a person’s desire to consume an object or possess an attribute increases in how much others want but cannot have it.” The first step of their research strategy is to see how much people in a laboratory research setting are willing to pay for a good if others in the study are excluded from purchasing the good. For a number of people, the more that others are excluded, the more they are willing to pay.

A second step is then to compare the beliefs of those with strong exclusionary preferences to those who do not have such preferences. Here are some of the findings (as summarized in the Research Brief):

  • Exclusionary preferences strongly predict tariff support, but only when tariffs harm trading partners. Those with exclusionary preferences are 12.3 percentage points more likely to support a 15% tariff that would raise prices domestically. When respondents are told the tariff would not harm the foreign country, support between those with and without exclusionary preferences is statistically indistinguishable. …
  • Exclusionary preferences predict support for a broad range of protectionist policies that harm domestic consumers. Beyond tariffs, those with exclusionary preferences are significantly more likely to support policies explicitly designed to maintain consumption gaps between nations, even when informed these policies would raise prices for Americans. They also show higher support for restricting foreign investment, emphasizing that the US should “come out on top” in trade relations, and limiting purchases from foreign countries. These patterns held across different trading partners (China, Mexico, and Canada), suggesting the effects are not driven by hostility toward specific nations.
  • The relationship between exclusionary preferences and policy support is not explained by political ideology or cognitive biases. While political preferences partially mediate the relationship (Democrats are less likely to hold exclusionary preferences), the core association remains strong and statistically significant after controlling for party affiliation and zero-sum thinking (a cognitive bias where people believe gains for some come at others’ expense). 

These results remind me of an intro-level teaching tool when studying economic growth. The lecturer can ask the class for their preferences between two options. In option 1, the US economy grows 3% while other economies around the world grow 4%. In option 2, the US economy grows 2% while other economies around the world grow 1%. For most economists, option 1 is the obvious choice for the US, because the US rate of growth is faster than in option 2. For many students, option 2 is the obvious choice, because the US rate of growth is faster compared to other countries, unlike option 1 where the US rate of growth is slower compared to other countries. The idea that some people have exclusionary preferences based on others receiving less has widespread application.
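Compounding the two hypothetical options makes the economists' case concrete. This quick calculation (my own illustration, starting each economy at an index of 100, not from the paper) shows that after 30 years Americans end up with materially higher incomes under option 1, even though the US falls behind the rest of the world in relative terms:

```python
# Compounding the two classroom options over 30 years,
# starting each economy at an index of 100 (illustrative numbers only).
years = 30

# Option 1: US grows 3% per year, rest of world grows 4%
us_1 = 100 * 1.03 ** years
world_1 = 100 * 1.04 ** years

# Option 2: US grows 2% per year, rest of world grows 1%
us_2 = 100 * 1.02 ** years
world_2 = 100 * 1.01 ** years

print(f"Option 1: US index {us_1:.0f}, world index {world_1:.0f}")
print(f"Option 2: US index {us_2:.0f}, world index {world_2:.0f}")

assert us_1 > us_2                       # US is absolutely richer under option 1
assert us_1 < world_1 and us_2 > world_2  # but the US "wins" only under option 2
```

Under option 1 the US index reaches about 243 (versus 324 for the world); under option 2 it reaches only about 181 (versus 135 for the world). Choosing option 2 means accepting a lower US standard of living in exchange for staying ahead of everyone else, which is exactly the exclusionary-preferences trade-off.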

US Tariff Revenues in Perspective

How much additional money is the US government raising with its tariffs? Jay Shambaugh offers some useful perspective in “Tariffs are a particularly bad way to raise revenue” (Brookings Institution, November 5, 2025).

This figure shows tariff revenues as a share of GDP in the US and the OECD countries (that is, mainly high-income countries) since 2005. As you can see, average pre-Trump tariff revenues for these countries were around 0.2% of GDP. The US level bumped up a bit in the first Trump administration, but has since skyrocketed: the final US data point is monthly data for August 2025.

Total US federal revenue in the last 50-60 years has been about 16-18% of GDP. Against that base, an increase of 1% of GDP is notable.
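To put the two figures in the same units, here is a back-of-the-envelope calculation of my own, using the round numbers above (the 17% midpoint for federal revenue is my assumption):

```python
# Back-of-the-envelope: how large is tariff revenue relative to total
# federal revenue? Round numbers from the text; 17% is my assumed
# midpoint of the historical 16-18% range for federal revenue.
tariff_rev_share_gdp = 0.012    # ~1.2% of GDP (August 2025 figure)
federal_rev_share_gdp = 0.17    # total federal revenue as a share of GDP

share_of_federal_revenue = tariff_rev_share_gdp / federal_rev_share_gdp
print(f"Tariffs as a share of federal revenue: {share_of_federal_revenue:.1%}")
```

On these assumptions, tariffs would be supplying roughly 7% of total federal revenue, which is why an increase of about 1% of GDP is notable against a 16-18% base.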

How does the new US level of tariffs rank internationally? Shambaugh offers this useful figure. The US levels in 2022 and in August 2025 are shown in yellow. The green bars show higher-income developed economies, while the blue bars show lower-income developing countries. Low-income countries tend to have higher tariff revenues as a share of GDP for several reasons. One is that trade is often a higher share of their total economy, so a given level of tariffs brings in more money: this can be especially true for small island economies. Also, developing countries often have a larger part of their economy in the “informal” sector, which is administratively difficult to tax. By contrast, taxing what comes over the border is comparatively easy. Shambaugh writes: “At 1.2 percent of GDP, the U.S. would be joining countries like Zambia and Tunisia, and closing in on Sierra Leone (though well below the small island states like Vanuatu).”

Much of Shambaugh’s essay explains why tariffs are bad for consumers, bad for multinationals, and bad for economic growth, and for those who are not already bored and exhausted by my own attempts to explain these reasons, I commend his explanation to you. I’ll just add that if this particular federal tax increase is the only politically acceptable one for the current configuration of US politics, then it would probably be prudent to commit the additional revenue to lower budget deficits over time (and to offset part of the reduction in federal revenues from the “One Big Beautiful Bill Act” signed into law in July 2025), rather than treating it as “free” money for spending.