India and Its Service Sector

Each year, India’s Ministry of Finance publishes a top-to-bottom overview of the country’s economic situation. As I paged through the Economic Survey 2025-26, I was especially struck by some of the comments about the role of the service sector in India’s economy, and the role of India’s service sector in the world economy.

As background, it’s perhaps useful to know that economic development around the world has broadly followed a similar pattern. In a low-income country, most of the workers are in subsistence agriculture. Next comes a push-pull effect: higher productivity in agriculture means that not as many workers are needed on the farm, and available jobs in low-wage manufacturing (textiles is a common example) offer an option for those workers. Over time, however, workers and products become more sophisticated. High-wage skilled manufacturing develops, often using high levels of capital investment, and workers shift over to service industries. There is some argument that a next step may be “information” industries, although this category often overlaps with services.

Yet in some areas of India, a high-skill service sector has unexpectedly sprinted ahead–it has arrived early in the process of economic development. Can economic development be based on the service sector? Can the service sector provide widespread jobs across the economy? Or is it a useful and positive but ultimately niche industry–such that India should be focusing on the conventional path of building up low-wage manufacturing and agricultural productivity? The Economic Survey offers some background and thoughts on these issues.

(In the shade of this parenthesis, I was amused that the report includes near the start an 18-page list of acronyms: everything from AAGR Average Annual Growth Rate, AAI Airports Authority of India, and AAM Advanced Air Mobility, to XV-FC Fifteenth Finance Commission, YES-TECH Yield Estimation System based on Technology, and YoY Year-on-Year. And yes, before you ask, I do amuse easily.)

Here’s how V. Anantha Nageswaran, the Chief Economic Advisor for the Government of India, expressed concerns about the role of India’s service industry in development in his “Preface” to the report:

India’s export performance since the start of the millennium tells its own story. In general, services exports have outpaced goods exports. In particular, over the five years since 2020, the compounded annual growth rate of total exports has been 9.4%, while that of merchandise exports has been only 6.4%. Services have done much of the heavy lifting, creditable and macro-stabilising, but not a substitute for the goods-based export ecosystems that ultimately underpin durable external and currency stability.

The Information Technology-Enabled Services Sector has been India’s mainstay for growth and exports since the dawn of the millennium. International experience indicates that while service exports are economically valuable, they do not systematically compel broad upgrades in state capacity, as successful firms can bypass weak institutions, relocate easily, and generate limited economy-wide pressure on governments to reform. Unlike manufacturing exports, they do not impose hard fiscal, employment, or logistical constraints on the State, allowing institutional weakness to persist even alongside globally competitive firms. So, manufacturing matters.

In this view, a key role of manufacturing is that it imposes constraints on the government, pressuring the government to overcome its institutional weaknesses. The later chapters of the report on India’s services and manufacturing industries provide more detail. The report notes:

Notably, while global goods trade has stagnated, services trade has continued to expand, reinforcing its role as a critical buffer against external shocks and volatility. The rapid expansion of digitally and remotely deliverable services has allowed the services sector to scale swiftly by transcending traditional geographical and operational constraints. This has enabled new forms of value creation, particularly in knowledge-intensive and technology-enabled activities, helping even smaller economies leave a global imprint. In contrast to the typical manufacturing-led growth paths followed by other countries at a comparable stage of development, India has experienced service-led growth at a significantly lower level of per capita income.

Currently, India’s services sector contributes more than half of the Gross Value Added and serves as a major driver of exports and employment in the country. Not only does it continue to underpin domestic growth, but services have also emerged as the most stable and resilient component of GDP, acting as a high-growth, low-volatility anchor, as is the case across the globe. The sector has recorded average annual growth of around 7-8 per cent year after year, in sharp contrast to the more pronounced cyclical fluctuations observed in agriculture and industry. India is the world’s seventh-largest exporter of services, with its share in global services trade more than doubling from 2 per cent in 2005 to 4.3 per cent in 2024, and the sector continues to be the largest recipient of foreign direct investment inflows.

Like the US economy, India consistently runs a trade deficit in goods, but a trade surplus in services. Here’s an illustration of India’s trade surplus in services–which is heavily focused on business and software services.

Finally, it’s useful to remember–whether in India or the United States–that the services economy is closely intertwined with the manufacturing of higher-value goods, to the benefit of both. In the case of India’s economy, the report introduced me to my ugly word of the day, “servicification”:

[S]ervices are increasingly integrated into manufacturing through activities such as design, R&D, logistics, software development, and professional services, reflecting the growing “servicification” of production systems. This is evident in products such as smart devices, whose value is driven by software ecosystems; medical equipment/wearables bundled with diagnostic and remote-monitoring services; and automobiles, which are increasingly described as “software on wheels”. As manufacturing becomes increasingly technology and data-intensive, services such as ICT, finance, compliance, and after-sales support account for a growing share of value creation. International experience suggests that this integration is a crucial channel for enhancing value addition, export competitiveness, and employment.

The Flaws of Overconfident AI

The new AI tools are remarkably good at many tasks, and getting better, but one area where they fall badly short is in expressing their level of confidence in the answers they provide. The great mathematician Terence Tao made this point recently in an interview with Matteo Wong about the use of AI tools to suggest answers for unsolved problems in mathematics (“The Edge of Mathematics,” Atlantic Online, February 24, 2026). Tao said:

One very basic thing that would help the math community: When an AI gives you an answer to a question, usually it does not give you any good indication of how confident it is in this answer, or it will always say, I’m completely certain that this is true. Humans do this. Whether they are confident in something or whether they are not is very important information, and it’s okay to tentatively propose something which you’re not sure about, but it’s important to flag that you’re uncertain about it. But AI tools do not rate their own confidence accurately. And this lowers their usefulness. We would appreciate more honest AIs.

Additionally, a lot of AI companies have this obsession with push-of-a-button, completely autonomous workflows where you give your task to the AI, and then you just go have a coffee, and you come back and the problem is solved. That’s actually not ideal. With difficult problems, you really want a conversation between humans and AI. And the AI companies are not really facilitating that. If we can work with at least some tech companies that are willing to develop more interactive platforms, that will be much more readily embraced by the people.

Of course, Tao’s points are interrelated. You can hand off a job entirely to an AI tool only if you are 100% confident that the result will be correct and appropriate. Handing off arithmetic to a spreadsheet program is fine. Handing off the strategic priorities of a company to an AI program is something else.

Moreover, it has seemed to me that the focus of AI companies on completely autonomous workflows is not only unrealistic (because “completely autonomous” means assuming perfect trustworthiness for the AI), but also a public relations disaster. Instead of emphasizing how workers can use AI tools to improve productivity, the companies tend to emphasize how AI can replace workers. As one example, a company called Genspark ran an ad during the Super Bowl about how workers could all take the Monday off after the big game, and just let the AI tool do their jobs that day. Cute! But of course, the obvious question is why any company would want to hire workers to return to their easily-replaced jobs on Tuesday, Wednesday, Thursday, or Friday.

David Autor had a pleasantly acerbic comment on this “replace all the workers” mindset in an interview published earlier this year. Sara Frueh was interviewing Autor on the subject: “How Is AI Shaping the Future of Work?” (Issues in Science and Technology, January 6, 2026). Frueh says: “I was looking at OpenAI’s mission statement, which is, `To ensure that artificial general intelligence, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.’ Is that mission statement made up of two mutually exclusive goals? … Can you build something that takes away the jobs of most humans—if that’s possible—and still benefit all humans?” Autor reacted:

Or even should that be your goal? Actually, I don’t even like that definition of artificial general intelligence. The goal of machines should not be to just do what people do slightly better. Our tools are valuable to us because they allow us to do things we can’t do. So many technologies enable capabilities that we simply don’t possess. Powered flight didn’t automate the way we used to fly. We just didn’t fly.

And that’s true for most of modern technologies. They’re important and consequential, not because they do the same old thing better, cheaper, faster, but because they enable us to do things we couldn’t do, telecommunication, flight, fighting disease with penicillin, seeing the insides of the interiors of subatomic particles, designing, computing things that we could never in a lifetime compute.

So I actually, I find their mission statement an amazing bait-and-switch. Artificial intelligence, by which we mean a machine that outcompetes humans in every domain. I’m reminded of this tweet I once saw that said, `We’re a modest company with modest goals. One, sell a quality product at a fair price. Two, drain the world’s ocean so we can find and kill God.’ And that’s what I feel like when I read the OpenAI mission statement.

An honest AI can be wrong, but it would not be declaratively and definitively wrong. It would convey its level of uncertainty, and try to point out where additional input and feedback would be useful in moving toward a stronger conclusion. It would replace some of the tasks that workers currently do, but it would also expand the tasks that workers are able to do.

Bending the Curve of Health Care Costs (At Last?)

Health care spending was a rising share of US GDP for decades, but since about 2010, the rate of increase has seemed to level out. David M. Cutler and Lev Klarnet address “Has the United States bent the health care cost curve?” (Brookings Papers on Economic Activity, Spring 2026; a readable overview of the paper is at the link, including a follow-up link to the paper itself).

Here’s an image to summarize the issue. Back in 1960, health care spending was about 5% of US GDP. By 2010, it was more like 17% of GDP (and remember, US GDP was a lot higher in 2010 than in 1960, so this is a rising share of a rising amount). But if you project forward from 2010, the rate of increase looks much slower.
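To make the “rising share of a rising amount” point concrete, here is a back-of-the-envelope sketch. The roughly fivefold real GDP growth over 1960-2010 is my own rough assumption for illustration, not a figure from the Cutler-Klarnet paper; only the 5% and 17% shares come from the discussion above.

```python
# Back-of-the-envelope: a rising share of a rising amount.
# The fivefold real GDP growth multiple is an illustrative assumption,
# not a number from the paper under discussion.
share_1960 = 0.05        # health care share of GDP in 1960 (from the text)
share_2010 = 0.17        # health care share of GDP in 2010 (from the text)
gdp_growth_factor = 5.0  # assumed real GDP growth multiple, 1960-2010

# Health spending grew by the GDP multiple times the change in its share.
health_growth_factor = gdp_growth_factor * (share_2010 / share_1960)
print(f"Implied real growth in health spending: {health_growth_factor:.1f}x")
# Health spending grew several times faster than GDP itself.
```

Under that assumption, real health care spending would have risen about 17-fold over the half-century, which is why even a modest-looking change in the GDP share represents a large change in resources.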

Here’s a summary of the Cutler and Klarnet findings:

Aggregating across a number of data sets and analyses, we highlight the role of five central factors in the [health care] spending slowdown. First, technological innovation has become more likely to save money over time. This shows up in medications that prevent acute events and in surgeries that can be performed cheaper and with fewer complications. We estimate the development of cost saving technology accounts for about 21% of the spending slowdown. Second, demand for some types of care has fallen. Demand changes might be due to changes in reimbursement, higher cost sharing, and tighter insurer restrictions on utilization. Together, demand changes accounts for 10-26% of the spending slowdown, with the range reflecting economic uncertainty about effects. Third, long-run supply elasticities tend to be greater than short-run supply elasticities, leading to price reductions over time. Examples of this include pharmaceuticals going off patent and imaging prices declining. These account for 6% of the spending slowdown. Fourth, health status has improved in other ways that we do not understand, but that may be due to reduced smoking and other preventive care. A healthier population needs less care than does a less healthy one. These trends account for 7% of the spending slowdown. Fifth, there was a reduction in the rate of price growth, from 1-2% above general inflation to about general inflation. This results in about 24% of the spending slowdown. …

Considering our primary question, we conclude that the US has bent the health care cost curve. The role of technology in particular is fundamentally different from what it was in the past, and that means that cost growth has slowed relative to the past. That said, the cost curve has not bent as much as it could, or as much as it needs to.

The Cutler and Klarnet analysis of a slowdown in health care spending growth per person up through 2024 does not address the question of how an aging population is likely to affect US health care spending moving forward. Economists at the Centers for Medicare and Medicaid Services do an annual forecast of health care expenditures looking ahead a decade, and I reviewed their most recent predictions last summer in “US Health Care Expenditures: An Ominous Trend Returns?” (July 1, 2025).

The CMS economists forecast that the aging of the population and the growth of Medicare enrollment will cause a rise in health care spending from 17.6% of GDP in 2023 to 20.3% of GDP over the next decade. This rate of increase (that is, a rise of 2.7 percentage points of GDP in a decade) is similar to what happened from 1960 to 2010. Thus, one can think of the future of US health care spending as a tug-of-war between the Cutler-Klarnet list of factors that have been tending to hold down the rise in health care spending and the government forecasts of how population aging will tend to drive up health care spending.

Neither the rise in health care spending as a share of GDP nor the flattening of the curve has been solely a US phenomenon. Sheila Diane Smith and Joseph P. Newhouse take an international view on the topic in “Health-Care Spending Growth Has Slowed: Will the Bend in the Curve Continue?” (American Journal of Health Economics, Winter 2026, pp. 1-34). They offer this figure as a starting point. The top line shows US health care spending as a share of GDP. (The US line doesn’t match the line above because the OECD data shown below is defined differently from the measure used in the US GDP, but defined in a way that is identical across countries–thus offering a better basis for comparison.) The lower line shows an average for 19 other high-income countries. As they point out, the rise in health care spending as a share of GDP flattened out in these countries as well.

Smith and Newhouse summarize their findings based on the international data from 1970 up through 2019 (thus sidestepping effects of the pandemic on health care spending across countries starting in 2020). They write:

We find the 2009–19 slowdown [in the rise of health care costs] can be explained by lagged effects of the worldwide Great Recession, a decline in the contribution of exogenous technological change of 1.1 percentage points that began around 2004, and, in the US, a fall in medical prices relative to economy-wide prices. As in our earlier work, income and medical technology remain the main drivers of health-care cost in the entire 1970–2019 period, accounting for 43 percent and 35 percent of the growth in the US, respectively, and 57 percent and 21 percent of the growth in the OECD ex-US. …

Putting it all together, we project a post-COVID trend in real per capita growth in annual health-care spending for the United States of 2.6–2.7 percent from 2028 to 2038 versus 4.3 percent for 1970–2009, and 2.1–2.3 percent for the OECD ex-US from 2028 to 2038 versus 3.8 percent for 1970–2009. … This means the share of GDP in health care will increase on average by around 0.2 percentage points per year in the US and 0.1 percentage points in the OECD ex-US, with variation by country.

Thus, Smith and Newhouse argue the curve for health care spending as a share of GDP has been bent, but has not flattened out. Indeed, they project that US health care costs as a share of GDP will continue to rise faster than the average of the other 19 countries in their sample–rising another 2 percentage points per decade moving forward.
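The arithmetic linking the quoted growth rates to the rising GDP share can be sketched quickly: the share rises roughly in proportion to the gap between health-spending growth and GDP growth. The 1.5% real GDP per capita growth rate below is my own illustrative assumption, not a number from Smith and Newhouse; the health-spending growth rate is the midpoint of their quoted 2.6-2.7 percent projection.

```python
# Sketch: how a gap between health-spending growth and GDP growth
# translates into a rising GDP share. The health-spending growth rate
# comes from the Smith-Newhouse projection quoted above; the GDP per
# capita growth rate is an illustrative assumption of mine.
start_share = 17.6   # US health share of GDP, per cent (2023, from the text)
g_health = 0.0265    # midpoint of the projected 2.6-2.7% annual growth
g_gdp = 0.015        # assumed real GDP per capita growth (my assumption)

# Compound the share forward: each year it grows by the ratio of the
# two growth factors.
share = start_share
for year in range(10):
    share *= (1 + g_health) / (1 + g_gdp)

rise_per_year = (share - start_share) / 10
print(f"Share after a decade: {share:.1f}% (about {rise_per_year:.2f} pp/year)")
```

Under these assumptions, the share rises by roughly 0.2 percentage points per year, consistent with the figure Smith and Newhouse report, and about 2 percentage points per decade as noted above.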

It’s worth emphasizing how much rides on the question of whether this shift toward flattening out the curve of health care spending as share of GDP turns out to be lasting, and perhaps even grows stronger. Government spending on health care at both the federal and state level (think Medicare and Medicaid) is a primary driver of future fiscal stress. An end to continuous increases in government health care spending would reduce that stress. Many US workers don’t see rising health care costs very clearly, because they are receiving employer-provided health insurance. But from the employers’ view, the amount they spend on health insurance for employees is part of the overall compensation package, and when employers need to pay more for health insurance, they have less to spend on take-home wages. From my own perspective, looming over all of these projections are the uncertainties about how many elderly people will–a decade or two from now–be needing some form of assisted care and the potential costs of those services.

When Keynes Did Not Win the Nobel Peace Prize

On rare occasions, the Nobel Peace Prize has been given for work with direct and strong economic implications. For example, I think of the 2006 Prize given to Muhammad Yunus and Grameen Bank for their work on microfinance loans to the poor, or the 1970 prize to Norman Borlaug for his research leading to the Green Revolution in agricultural productivity. However, I am unaware of any example of the Peace Prize being given to an economist for writing a book or article, so I am unsurprised by the decision not to award the prize to John Maynard Keynes for his 1919 book The Economic Consequences of the Peace, which forecast that the harsh terms and punitive reparations placed on Germany after the end of what was then known as the Great War could easily lead to a second World War.

However, Lars Jonung has gone spelunking in the Nobel archives and discovered that “Keynes was nominated for the Peace Prize three years in a row, formally evaluated, and placed on the Committee’s shortlist.” Apparently the nomination originated with a group of German economics professors in Munich, who put Keynes forward three years in a row, in 1922, 1923, and 1924. Jonung tells the story in “Why Keynes did not win the Nobel Peace Prize for `The Economic Consequences of the Peace'” (CEPR.org, March 5, 2026).

Apparently, Keynes’ candidacy for the Peace Prize was taken seriously enough that for two years in a row “the Nobel Committee commissioned an advisory report. The task to serve as appraiser (konsulent) fell in September 1923 to Wilhelm Keilhau, a young economist at the University of Kristiania (now Oslo).”

Apparently, the 1923 report from Keilhau focused on The Economic Consequences of the Peace, and praised it heavily. But then Keynes published A Tract on Monetary Reform in 1923, and so the 1924 report from Keilhau devoted space to arguing that Keynes was wrong to reject the gold standard. In addition, a dispute involving Keynes exploded out of the supposed secrecy of the Nobel Committee deliberations and into the newspapers.

Following Woodrow Wilson’s death in 1924, historian Jacob Worm Müller – a fellow adviser to the Nobel Committee – published an article in the Oslo daily newspaper Dagbladet accusing Keynes of distorting Wilson’s role at Versailles in The Economic Consequences of the Peace. He called Keynes’s depiction “a lie”, charging that Keynes had never attended the meetings he claimed to describe. The attack was harsh and personal.

Keilhau responded in the same newspaper, defending Keynes point by point. Worm Müller shot back. Over the following weeks, the two advisers engaged in an acrimonious fight in the pages of Dagbladet, a spectacle involving two men who were supposed to be discreet, impartial advisers to the Nobel Committee. To settle the matter for his 1924 report, Keilhau did something unusual: he sought evidence directly from two of the participants at Versailles, one who had served as a secretary and one as an interpreter, as well as from a meeting with Keynes in September 1924 in London. Keilhau concluded that while Keynes had perhaps overstated the regularity of his attendance, he had not fabricated his presence.

Ultimately, no Nobel Peace Prize was awarded to anyone in 1923 or 1924. Looking back, whatever the prescience and brilliance of Keynes’ 1919 book, it was a prediction of future conflict and war. If the Peace Prize were given for the most brilliant predictions of future conflict, rather than for (often imperfect) efforts to reduce and resolve conflict, the historical list of prize-winners would look rather different.

The Growth of Big Business: 1900-2020

Big firms are playing a larger role in the US economy–but also in the economies of high-income countries around the world. Yueran Ma, Mengdi Zhang, and Kaspar Zimmermann compile the evidence in “Business Concentration Around the World: 1900-2020” (University of Chicago Becker-Friedman Institute for Economics, February 27, 2026; the link offers a readable overview and a follow-up link to the full working paper). The authors write:

In this paper, we document two sets of facts about the evolution of the organization of production over the past century. These facts hold broadly, across a variety of market-based economies where we can find comprehensive long-run data on the firm size distribution. First, sales, net income, and equity capital have become increasingly concentrated in the largest firms. In many countries, the largest 1% firms by sales now account for around 80% of economy-wide sales, up from around 50% in the early 20th century. The long-run increases of concentration also hold at the industry level. Second, employment concentration has been relatively stable. The largest 1% firms by employees account for roughly 50% of economy wide employment throughout the 20th century. One exception is retail/wholesale trade, where employment concentration has risen almost as much as sales concentration. These pervasive patterns … show that the rising dominance of large firms is a widespread phenomenon, not limited to the recent decades or the United States. Moreover, large firms scale not so much with labor, and possibly more with capital (except in industries like retail where expanding automation has been more challenging thus far).

So the top 1% of firms represent a rising share of total sales over time (now up around 80%), but a fairly stable share of total employment (around 50% over the last century). This is not just a US pattern, but a common pattern across high-income countries, which in turn suggests that it does not have a US-specific cause. The authors suggest that the most likely explanation for this pattern is that large firms tend to be those that are able to scale up by using capital investment, rather than additional hiring.

An obvious follow-up question is whether this overall pattern is good for the economy, or for consumers, or for workers. This paper doesn’t seek to tackle these big-picture questions, but it does pointedly note that greater firm size alone is insufficient to prove that consumers or workers are worse off (or better off, for that matter).

Here is a standard conundrum about the extent of actual competition. Say that in a certain market there are 3,143 firms across the country, but they are geographically distributed so that there is one per county across the United States. In another market there are only five firms, but all five firms compete in every county in the United States. If this market is one where buyers typically buy within their own county, then consumers in the market with fewer total firms for the United States as a whole might be experiencing a higher level of actual competition. The extent of competition will depend on the size of the relevant market. It may be that as transportation and communications links have improved, along with the logistics chains that allow near-immediate shipping to both consumers and businesses, competition can be tougher even with fewer and larger firms.
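The county-versus-national point can be made precise with the Herfindahl-Hirschman Index (HHI), the standard antitrust measure of concentration. The post does not compute a concentration index itself; this sketch is my own illustration of the conundrum using its two hypothetical markets.

```python
# Herfindahl-Hirschman Index: the sum of squared market shares measured
# in per cent, so 10,000 indicates a pure monopoly. The HHI framing is
# my own illustration; the text itself does not compute an index.
def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

# Market A: 3,143 firms nationally, but each is the sole seller
# in its own county.
national_a = hhi([100 / 3143] * 3143)   # national HHI looks tiny
county_a = hhi([100.0])                 # but each county is a monopoly

# Market B: only five firms, but all five compete in every county.
national_b = hhi([20.0] * 5)
county_b = hhi([20.0] * 5)              # same HHI everywhere: 2,000

print(f"Market A: national HHI {national_a:.1f}, county HHI {county_a:.0f}")
print(f"Market B: national HHI {national_b:.0f}, county HHI {county_b:.0f}")
```

Measured nationally, Market A looks almost perfectly competitive while Market B looks moderately concentrated; measured at the county level, where buyers actually shop, the ranking reverses completely. Which number matters depends entirely on the definition of the relevant market.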

It’s also possible to consider two broad paths for an economy. In one path, larger firms have the opportunity to grow and expand when they produce more efficiently, and if these firms face a sufficient degree of competition, the cost savings will be passed along to consumers over time. In an alternative path, the economy consists of many small and local firms that on average produce less efficiently. These small and local firms also need to be sheltered from competition, both from bigger firms and also from growth by other small and local firms that are more popular or efficient, so consumers will pay higher prices over time. The first scenario also involves rising productivity for workers in the big firms, while the second involves flat productivity for the workers in the small and local firms. The first scenario involves disruptive economic change, with some firms expanding and others being bought out or driven into bankruptcy. The second scenario has less disruptive change, but is also a scenario of stasis and low growth.

Personally, I prefer the first scenario, with its accompanying tradeoffs. I can understand and to an extent sympathize with those who prefer the second scenario–as long as they are also willing to acknowledge the tradeoffs of their choice.

For a US illustration of some of these forces at work, consider this table of the top 10 US companies by sales in 2025, 35 years earlier in 1990, and 35 years before that in 1955. Notice that from 1955 to 1990, five of the top ten companies appear in both years. However, from 1990 to 2025, only two of the same companies appear–because Exxon and Mobil from the 1990 list had merged by 2025. I tend to worry more about anticompetitive effects of big business when more of the same firms are staying at the top for decades, rather than when there is at least some slow-motion reshuffling.

(For the 2025 list, I’ll add that I had no idea what line of business Cencora was in, although it is apparently the 10th-biggest US company by sales. I was mildly relieved to learn that the company name didn’t exist until 2023, when it was renamed from AmerisourceBergen, a name that was in turn the result of a 2001 merger of AmeriSource Health and Bergen Brunswig. Cencora does distribution and wholesale for pharmaceuticals, along with some contract research.)

Interview with Erika McEntarfer: Firings and Federal Statistics

Erika McEntarfer was Commissioner of the U.S. Bureau of Labor Statistics until August 1, 2025, when she was abruptly fired by President Trump. Neale Mahoney of the Stanford Institute for Economic Policy Research discusses the experience with her and what comes next for federal statistics in “The hidden backbone: The data behind the economy” (SIEPR, “Econ to Go,” March 26, 2026). Here are a few of the comments that caught my eye:

On her earlier interaction with the Department of Government Efficiency, widely known as DOGE:

I actually had a whole basket of AI related projects when DOGE arrived early in the administration. The early word was they were gonna help us with AI. And I was like, “Great, we could use some more resources here.” So, you know, I had this whole list of projects for them and instead I wound up sitting across the table from a member of DOGE and they were like, “So we want to fire all of these statisticians and replace them with the AI.” And I was like, “I don’t think that’s actually possible, but if you can explain to me how it is possible, I am all ears.” And then they would just stare at me blankly and tell me that I was not cooperating with their vision. I was like, “No, I don’t actually understand how you replace a time series statistician with an AI model, but if you can explain it to me, I’m all ears.”

How the actual firing happened:

So it was jobs day for the July release. We had been you know, we’d spent the morning at the Labor Department briefing the Labor Secretary the day before. We had briefed the White House on the data and, you know, by afternoon on a jobs release, things are starting to wind down a bit. And we were having a social gathering in our office for the staff. And I looked down at my phone and I saw that I had missed an email from a reporter, who wanted to ask me about something Trump was tweeting, and I was like, “Oh, let me go check this in my office.” And so I walked down the hall and I opened up this email, and by then I had a few from, I think it was an NBC reporter, and he was like, “Do you wanna comment on this tweet that the president, that he is going to fire you for the jobs numbers?” And I have to say, my first thought was, I thought he was just threatening. I assumed I hadn’t actually been fired, so I started putting together a comms strategy. I’m like, “Oh, it’s Friday afternoon.” I gotta assemble a team here to, like, deal with this, and I’m already, like, 10 minutes into this thought process. When I look and I realize, oh, I have missed some other emails during this gathering, and one of them was from the presidential personnel office, and it was a termination letter. And I was like, “Oh, I am actually fired.” Okay, that’s a whole different crisis than the one I thought we were entering. It was an unbelievably crazy moment because all, like, while what I just described to you was happening, my phone is completely blowing up, because this has hit the media, it’s on television, people are texting me, people are emailing me, my family is calling me, like everyone is just reaching out all at once and my phone is just, all my phones are just buzzing and buzzing and buzzing and buzzing. And it was just wild.

What’s next for federal statistics?

So the interesting thing about the US statistical system and particularly economic data is you have to keep two thoughts in your head simultaneously. And one is that US economic data is actually very, very good. … It is the envy of the world. The richness of the data, the timeliness of the data, it’s really hard to match. If you do international comparisons, you’ll discover very quickly how advantaged we are. The other thing that you have to keep in your head is that the system is in a certain amount of danger in terms of its sustainability. And those dangers are fiscal. So the costs of fielding surveys are increasing, but the budgets are not keeping up with those costs. The other is declining response rates. It’s harder to reach respondents, that’s true for households and businesses. …

I worked in [data] modernization for at least 15 years, and there’s a lot of things that we can do to shore up the survey-based collection system that we have from the 20th century. The most promising avenue for modernizing statistics is, like, what we often refer to as a blended data approach, where you ask the respondents the things that are otherwise really hard to collect. So unemployment is probably a key one here. So unemployment, you really have to go to a household and find out what they were doing, like were they working? If they weren’t working, were they seeking work? There’s no administrative solution to this problem because lots of people who are sitting at home not working are doing other things. They’re taking care of small children. They’re taking care of elderly relatives, they’re in school. And so you don’t really know why people are out of the labor market unless you ask them, and if they’re trying to get in. On the other hand, there are domains, like wage and salary income, where we have a lot of rich administrative data, and we know that this is something respondents really don’t like providing themselves. And so you can use, like, IRS data, unemployment insurance, wage record data to help fill in and take response burden away from individuals. So you just, you have to go sort of item by item in terms of the potential for this other, like, alternative data where it can fill in.

If you were designing a job search for an economist to lead the actual nuts-and-bolts job of updating the federal statistical apparatus, it would be hard to do better than McEntarfer. As she points out, firing the statisticians is a lasting blow to the credibility of government statistics:

I should explain one reason I assumed this was just a threat [to fire me] and not an actual execution of a firing is because firing your chief statisticians is a shock to trust in your economic data that has real economic consequences. So it’s not something you really want to do as a rule. So I assume somebody was gonna, you know, tell him actually you don’t want to do that. … [T]he economic community immediately realized the consequences. Many, many people spoke out in the aftermath of my firing, both defending my work, but also just saying, “You do not want to do this.” Like, this is countries where they have fired their chief statisticians, Argentina, Greece, it’s not, it’s just not a good list.

Snapshots of “Cross-Border Financial Centers”

I heard a story some years back about a court case that involved following a blizzard of cross-border financial payments. Apparently, even the lawyers were having a hard time keeping track, and the jury was completely at sea. But the lawyers and the judge all knew that only one thing actually mattered in the trial. Would the prosecution at some point be able to say the words “Cayman Islands” in front of the jury? If so, the jury would very likely draw an inference that “Cayman Islands finance” meant “guilt,” and thus find the defendant guilty of something-or-other. But if the defense could prevent the words “Cayman Islands” from being spoken in the courtroom, the blizzard of confusion meant that the defendant would be found “not guilty.” Much legal maneuvering resulted.

I was reminded of the story by a few figures in a paper called “International finance through the lens of BIS statistics: offshore activity,” by Iñaki Aldasoro, Bryan Hardy, Goetz von Peter and Philip Wooldridge (BIS Quarterly Review, March 2026, pp. 81-96). These figures are from Appendix A to that paper, credited to von Peter and Wooldridge.

For some financial instruments, the source of the funding is within a country and the funding is used within that country. For other financial instruments, at least some of the funding comes from sources external to the country, or the funding is used outside the country. This is called “external [financial] intermediation.” In the graph below, KY in the upper right of the figure is the country code for the Cayman Islands.

The amount of external financial intermediation in the Cayman Islands is, as shown on the horizontal axis, about 1,000 times as large as the country’s economy. The total size of external intermediation for the Cayman Islands–a country with a national GDP roughly equal to the size of the metropolitan area of Flagstaff, Arizona, or Bangor, Maine–is about $10 trillion–similar in size to the external financial intermediation of the entire US economy. (The alert reader will notice that the horizontal axis rises by multiples of ten, while the vertical axis rises by multiples of 1,000.)
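As a rough consistency check on those rounded numbers, the ratio and the total together pin down the implied size of the Cayman economy. This is just back-of-the-envelope arithmetic on the figures quoted above, not a calculation from the paper:

```python
# Back-of-the-envelope check of the rounded Cayman Islands figures above.
intermediation = 10e12   # roughly $10 trillion of external intermediation
ratio = 1000             # roughly 1,000 times the size of the economy
implied_gdp = intermediation / ratio
print(f"implied GDP: ${implied_gdp / 1e9:.0f} billion")
```

An implied GDP on the order of $10 billion is indeed about the size of the economy of a metro area like Flagstaff or Bangor.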

Thus, the Cayman Islands, along with the British Virgin Islands (VG on the figure), are prototypical examples of what are bloodlessly labeled “cross-border financial centers.” Others on the figure include Luxembourg (LU), Bermuda (BM), Jersey (JE), Guernsey (GG), and the Marshall Islands (MH).

The red dots are known as “cross-border financial centers,” roughly defined as those areas where external financial intermediation is more than 10 times the size of the economy. Some of these countries close to the top of the figure can be viewed as financial gateways to a larger region: for example, Ireland (IE) and Netherlands (NL) are gateways to economies of the European Union, and Hong Kong (HK) and Singapore (SG) are gateways to China. This red-dot group of countries represented less than 20% of global external finance back in 2000, but is now up around 30% of the total.

There is, of course, nothing fundamentally wrong with financial capital flowing across national borders. In the upper middle of the figure, you can see the “global financial center” countries labeled with hollow circles, like the US, Germany, UK, Japan, France, Switzerland, Canada, and China. But in a spirit of gentle inquiry, it seems fair to ask what causes certain economic actors to do so very much business in the cross-border financial centers. In their article, Aldasoro, Hardy, von Peter and Wooldridge make a start at tracing where the external money comes from and goes to in these countries. Often the financial transactions involve what are called “non-bank financial institutions”–that is, companies that receive funds and loan or invest those funds, but are not subject to national or international banking regulations. The ultimate ownership of such companies (and ownership of the companies that own those companies, and so on) is often very hard to determine, which is almost certainly the point.

The US Postal Service Hits Its Debt Ceiling

The US Postal Service has been losing money every year for about two decades, and borrowing money to keep the mail running. Now it has hit the debt limit imposed by Congress. Elena Patel of the Brookings Institution tells the US side of the story in “What’s next? The US Postal Service’s fiscal crisis: When universal service outlives its financing model” (March 13, 2026) and provides some international perspective in “Postal systems worldwide confront the same financial pressures” (March 10, 2026).

Here’s an overview of the situation. The US Postal Service has a legal monopoly on the delivery of first-class mail. The idea was that the profits from first-class mail could then provide a cross-subsidy to support universal, six-day-a-week mail delivery. But as electronic communication has soared (email and text, in particular), first-class mail has dropped by more than half in the last two decades.

Luckily for the US Postal Service, shipping and packages are up, and also pay a lot more than delivering letters. As a result, total revenue for the US Post Office has been roughly flat. However, because the US Postal Service does not have a monopoly on package delivery, these revenues are less likely to create a profit stream that can cross-subsidize other Post Office operations.

However, about two-thirds of total USPS spending is on labor compensation and benefits, and while revenues have been flat, total costs have edged up over time.

So what’s to be done? The simplest step is probably for Congress to let the US Postal Service borrow more money, although that of course doesn’t actually address the problem.

Congress could admit that the old model of relying on first-class mail to generate funds for universal six-day service doesn’t work any more. Thus, Congress could let the Post Office shift to, say, delivering first-class mail to everyone, but only three days per week: for example, half the country would get Monday, Wednesday, Friday delivery, while the other half would get Tuesday, Thursday, Saturday delivery. Perhaps package delivery could continue to be daily, everywhere. Or if Congress wants to keep the universal six-day service, it could pay for it with a direct appropriation of perhaps $6 billion per year.

It’s also common to point out that if you take out the cost of obligations to retirees, the US Postal Service would actually be running at break-even, or a little better. But of course, if one could just remove the cost of obligations to retirees, federal, state, and local budgets all over the country would also be a lot closer to break-even. But at least in theory, Congress could take over these retiree costs.

The same decline in first-class mail is happening everywhere. What are other high-income countries doing about it? Patel notes:

In March 2025, Denmark’s state-owned postal operator PostNord announced it would end traditional nationwide letter delivery, citing a roughly 90% decline in letter volumes since 2000. … 

In July 2025, the United Kingdom’s regulator approved reforms to the universal service affecting Royal Mail, a privately owned operator, in response to declining letter volumes and sustained financial pressure. The changes preserve six-day First Class delivery but allow Second Class letters to be delivered on alternate weekdays rather than six days a week …

In September 2025, persistent losses and falling letter volumes in Canada led the federal government to instruct Canada Post to begin a structural transformation, authorizing the conversion of four million door-to-door delivery addresses to community mailboxes. …

I have no easy answer for the US Postal Service. But it’s been clear for some years now that its longstanding business model isn’t workable.

The Wealth of Nations: What’s It all About?

The US semiquincentennial (that is, half of 500 years) will be July 4 of this year, but economists celebrated a 250th anniversary of their own on March 9, marking the original publication date of Adam Smith’s An Inquiry into the Nature and Causes of the Wealth of Nations. It’s of course fundamentally impossible to sum up a truly great work that runs more than 1,000 pages (in the edition on my bookshelf) in a quick sentence or a few hundred words. Below, I have collected some of my posts over the years about aspects of Adam Smith’s work: just looking at the titles gives a sense of his breadth and insight. But here’s my own radical thought about Smith’s main insight: he was reconceptualizing what should be meant by, yes, the “wealth of nations.”

Up until Smith’s time, the wealth of a country referred, explicitly or implicitly, to the wealth of its rulers: their stores of gold, the property they owned, the land over which they ruled, the number of soldiers, and so on. Smith offered a radically different view: the wealth of a country was embodied in the abilities and efforts of its ordinary workers and in the consumption levels of average people. Maybe this seems obvious to you? But you are, after all, living in a world shaped by Smith’s great book.

From this point of departure, Smith then dug down into what made average citizens well-off. Yes, Smith pointed out that the operation of decentralized market forces was part of a higher standard of living. Look at the real world, then and now, and it’s impossible to deny the truth of that claim. But Smith also delved into taxes, spending, education, trade, the role of money, and many other issues. Anyone who claims that Smith was an advocate for unfettered market forces is, to put it bluntly, ignorant and wrong.

It should be possible both to acknowledge that market forces can be extraordinarily powerful and productive, and to seek a deeper understanding of why and how this might be so, and also to acknowledge that market forces have both benefits and costs. The Wealth of Nations is, like the title says, an “inquiry” into these issues. Actual readers of the Wealth of Nations have long recognized the nuance, wide-ranging nature, and openness of spirit in Smith’s discussion. To illustrate the point, here’s the closing paragraph (chopped into smaller paragraphs for readability) of an essay by Jacob Viner based on a speech given on the 150th anniversary of The Wealth of Nations (“Adam Smith and Laissez Faire,” Journal of Political Economy, April 1927, 35:2 pp. 198-232).

Adam Smith was not a doctrinaire advocate of laissez faire. He saw a wide and elastic range of activity for government, and he was prepared to extend it even farther if government, by improving its standards of competence, honesty, and public spirit, showed itself entitled to wider responsibilities. He attributed great capacity to serve the general welfare to individual initiative applied in competitive ways to promote individual ends. … He helped greatly to free England from the bonds of a set of regulatory measures which had always been ill advised and based on fallacious economic notions, but he did not foresee that England would soon need a new set of regulations to protect her laboring masses against new, and to them dangerous, methods of industrial organization and industrial technique. Smith was endowed with more than the ordinary allotment of common sense, but he was not a prophet. But even in his own day, when it was not so easy to see, Smith saw that self-interest and competition were sometimes treacherous to the public interest they were supposed to serve, and he was prepared to have government exercise some measure of control over them where the need could be shown and the competence of government for the task demonstrated.

His sympathy with the humble and the lowly, with the farmer and the laborer, was made plain for all to see. He had not succeeded in completely freeing himself from mercantilistic delusions, and he had his own peculiar doctrinal and class prejudices. But his prejudices, such as they were, were against the powerful and the grasping, and it was the interests of the general masses that he wished above all to promote, in an age when even philosophers rarely condescended to deal sympathetically with their needs. He had little trust in the competence or good faith of government. He knew who controlled it, and whose purposes they tried to serve, though against the local magistrate his indictment was probably unduly harsh. He saw, nevertheless, that it was necessary, in the absence of a better instrument, to rely upon government for the performance of many tasks which individuals as such would not do, or could not do, or could do only badly.

He did not believe that laissez faire was always good, or always bad. It depended on circumstances; and as best he could, Adam Smith took into account all of the circumstances he could find. In these days of contending schools, each of them with the deep, though momentary, conviction that it, and it alone, knows the one and only path to economic truth, how refreshing it is to return to the Wealth of Nations with its eclecticism, its good temper, its common sense, and its willingness to grant that those who saw things differently from itself were only partly wrong.

Here are some of my previous posts over the years about aspects of Adam Smith’s work, looking at both The Wealth of Nations as well as his 1759 book which established his reputation at the time, The Theory of Moral Sentiments. The highest compliment I can pay is not that a work is correct, but that it is endlessly interesting, and Smith’s work reaches that level.

Want more? Here are links to two articles from the Journal of Economic Perspectives, where I work as Managing Editor, on Smithian topics:

StatGPT: The Dangers of Asking AI about Statistics

Asking a question of the generative AI tools often produces a reasonable first draft of the desired output. Sure, it may have some inaccuracies and even hallucinations, but first drafts are always imperfect. It’s up to the author to fix them up. (You may say that your personal first drafts don’t hallucinate. Really? You’ve never produced a first draft where you are sure that you remembered a certain article or quotation or statistic, but when you checked it out while revising the draft, you found your memory was just flatly incorrect?) As someone who has worked for many years as an editor, I like to say that the new AI tools have devalued the ability to produce an OK first draft–because that can now be done so easily in many contexts–but upvalued the ability to add value by editing.

But if you ask AI about specific statistics, this “reasonable first draft” standard no longer applies, because you don’t want a “reasonable first draft” of statistics–you want the actual data from the relevant official source. If you don’t ask in a careful way, the AI tool will not give you what you want.

James Tebrake, Bachir Boukherouaa, Jeff Danforth, and Niva Harikrishnan describe the problem and offer some solutions in “StatGPT: AI for Official Statistics” (International Monetary Fund, March 9, 2026). The authors carry out an experiment with ChatGPT and other AI tools, asking about basic data on annual economic growth rates in recent years for seven leading economies. They describe their process:

The prompt `Can you generate a table of economic growth rates for the G7 countries taking the data from the latest issue of the IMF’s World Economic Outlook. Can you provide data for 2018 to 2025. Can you provide the output in a CSV file.’ was entered 10 times in the same conversation—10 times in 10 different conversations, and 5 times in the same conversation with a copy of the latest World Economic Outlook loaded in memory (total of 25 prompts).

How accurate are the results? For what seems like a fairly basic query, they find:

Overall, ChatGPT provided a correct response 34 percent of the time when the prompts were entered into the same conversation. The level of accuracy declined to 17 percent when the request was made using unique conversations. When the latest publication of the World Economic Outlook was loaded into ChatGPT, the level of accuracy fell to 14 percent.
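The excerpt above doesn’t spell out exactly how the authors scored a response, but the spirit of the exercise can be sketched as a cell-by-cell comparison of each AI-returned table against the official values. Here is a minimal sketch in Python, with entirely made-up numbers; the function name, the tolerance, and the dictionary layout are my own assumptions, not anything from the paper:

```python
# Hypothetical sketch: scoring one AI-returned table of growth rates
# against the official reference values. All numbers are made up.

def score_response(returned: dict, official: dict, tol: float = 0.05) -> float:
    """Share of (country, year) cells matching the official value within tol."""
    cells = [(c, y) for c in official for y in official[c]]
    hits = sum(
        1 for c, y in cells
        if y in returned.get(c, {}) and abs(returned[c][y] - official[c][y]) <= tol
    )
    return hits / len(cells)

# Made-up "official" values for two countries and two years.
official = {"US": {2024: 2.8, 2025: 1.9}, "JP": {2024: 0.1, 2025: 1.1}}
# A made-up AI response that gets three of the four cells right.
returned = {"US": {2024: 2.8, 2025: 2.4}, "JP": {2024: 0.1, 2025: 1.1}}

print(score_response(returned, official))  # 0.75
```

Averaging a score like this over repeated prompts (the authors used 25) would yield the kind of accuracy percentages they report.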

The authors offer two solutions. For the short term, they describe how to use a series of prompts so that the AI tool will start with a broader perspective, focus in on a specific dataset, and then on specific data from that dataset, and in that way retrieve the specific data you want.

For the longer term, they also dream of building “a true Global Trusted Data Commons—a comprehensive, AI-ready index of official statistics data …” Unsurprisingly to those who know me, I love this idea. Like it or not, lots of people are going to seek an understanding of economic statistics through AI tools. Creating an environment in which these tools will actually work is a public good I can support.