In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”
Economic growth and productivity growth across the nations of the European Union have been lagging behind the US economy. What are the reasons and what might be done? A group of essays in the June 2025 issue of Finance & Development offers some insights. A common theme is that EU economic integration has not proceeded as planned. As a result, EU firms are selling into smaller national markets, rather than a continent-wide market, and their incentives to attract finance and to invest in economies of scale and new technologies are accordingly reduced.
The EU has made significant progress freeing up trade between its member states, but plenty of obstacles remain. High trade barriers within Europe are equivalent to an ad valorem cost of 44 percent for manufactured goods and 110 percent for services, according to 2024 IMF research. These costs are borne by EU consumers and companies in the form of less competition, higher prices, and lower productivity.
The EU is also a long way from capital market integration, with cross-border flows frustrated by persistent fragmentation along national lines. The total market capitalization of the bloc’s stock exchanges was about $12 trillion in 2024, or 60 percent of the GDP of the participating countries. By comparison, the two largest stock exchanges in the US had a combined market capitalization of $60 trillion, or over 200 percent of domestic GDP. Limited EU-level harmonization in important areas, such as securities law, hampers growth by preventing capital from flowing to where it’s most productive.
This is one reason Europe has fallen behind in adopting productivity-enhancing technologies and why its productivity levels are low. Today, the EU’s total factor productivity is about 20 percent below the US level. Lower productivity means lower incomes. Even in the EU’s largest advanced economies, per capita income is about 30 percent lower than the US average (see Chart 1).
Kammer points out:
Not only do Europe’s leading companies lag their US competitors, but they are falling further behind over time. This is true across all sectors, but especially for tech. While the productivity of US-listed tech firms has increased by about 40 percent over the past two decades, European tech firms have seen almost no improvement. One reason could be that US firms are simply trying harder: They have tripled their research and development spending to 12 percent of sales revenue, three times European companies’ ratio, which has languished at an average of 4 percent in recent decades.
The future would look brighter if Europe could hope for young high-growth firms to reduce the innovation and productivity deficit. Alas, the EU has few such companies. And they have a substantially smaller economic footprint than those in the US, where younger firms account for a far larger share of employment. In other words, the EU has too many small, old, and low-growth companies. About a fifth of European employees work in microfirms with 10 people or fewer, about double the US figure. And while the average European firm that has been in business 25 years or more employs about 10 workers, comparable US companies employ 70 (Chart 2).
Issues for the EU may be especially acute for young tech companies. As Kammer points out, banks are typically the primary source of capital for EU companies, and banks typically want to lend to companies with collateral–not a company based on a few patents and an idea: “[T]here is a troubling trend of innovative European firms taking their talents to more dynamic markets elsewhere, with future “unicorn” companies valued at more than $1 billion leaving the EU for the US at a rate that is 120 times faster than the other way around, according to research by Ricardo Reis, of the London School of Economics.”
“The US innovates, China replicates, Europe regulates” is how critics summarize the continent’s approach to innovation. Exhibit A of the European Union’s regulatory overreach is the now infamous Artificial Intelligence Act, which governs AI—even though the region has not yet produced a single major player. Productivity in US technology firms has surged nearly 40 percent since 2005 while stagnating among European companies, according to IMF research. US research and development spending as a share of sales is more than double what it is in Europe. No European company ranks among the 10 largest tech companies by market share.
Back in November 2022, Dean Karlan took the job as the first “chief economist” for USAID. In that position, he had a staff of about 30 whose task was to figure out the benefits and costs of different aid programs, with the goal over time of refocusing aid on problems and situations where the payoff was highest. In February 2025, Karlan resigned his position at USAID when he felt that political opposition made it impossible to do the job for which he had signed on.
There had never been an Office of the Chief Economist before. In a sense, I was running a startup, within a 13,000-employee agency that had fairly baked-in, decentralized processes for doing things. … [T]he reality is, we were running a consulting unit within USAID, trying to advise others on how to use evidence more effectively in order to maximize impact for every dollar spent. We were able to make some institutional changes, focused on basically a two-pronged strategy. One, what are the institutional enablers — the rules and the processes for how things get done — that are changeable? And two, let’s get our hands dirty working with the budget holders who say, `I would love to use the evidence that’s out there, please help guide us to be more effective with what we’re doing.’ There were a lot of willing and eager people within USAID.
On the challenge of Congressional earmarking
[T]he number that I heard is that something in the ballpark of 150-170% of USAID funds were earmarked. … Congress double-dips, in a sense: we have two different demands. You must spend money on these two things. If the same dollar can satisfy both, that was completely legitimate. There was no hiding of that fact. It’s all public record, and it all comes from congressional acts that create these earmarks. … There’s an earmark for Development Innovation Ventures (DIV) to do research, and an earmark for education. If DIV is going to fund an evaluation of something in the education space, there’s a possibility that that can satisfy a dual earmark requirement. That’s the kind of thing that would happen. One is an earmark for a process: “Do really careful, rigorous evaluations of interventions, so that we learn more about what works and what doesn’t.” And another is, “Here’s money that has to be spent on education.” That would be an example of a double dip on an earmark.
How the Department of Government Efficiency (DOGE) intervention operated
There was not really any looking at any of the impact of anything. That was never in the cards. There was a 90-day review that was supposed to be done, but there were no questions asked, there was no data being collected. There was nothing whatsoever being looked at that had anything to do with, “Was this award actually accomplishing what it set out to accomplish?” There was no process in which they made those kinds of evaluations on what’s actually working. You can see this very clearly when you think about what their bean counter was at DOGE: the spending that they cut. … Throughout the entire government, that bean counter never once said, “benefits foregone.” It was always just “lowered spending.” Some of that probably did actually have a net loss, maybe it was $100 million spent on something that only created $10 million of benefits to Americans. That’s a $90 million gain. But it was recorded as $100 million. And the point is, they never once looked at what benefits were being generated from the spending. What was being asked, within USAID, had nothing to do with what was actually being accomplished by any of the money that was being spent. It was never even asked.
Francisco Flores also interviewed Karlan for the Economics that Really Matters (ETRM) website in “ETRM Interview Series – Dean Karlan” focused on “the future of research in development economics, and for their advice to young researchers.” Here’s Karlan on the topic of broad political support for foreign aid:
[H]onestly, I’m not convinced that a lack of evidence is the main reason [foreign] aid isn’t more supported. It’s a bit of an oversimplification to say, “People don’t see the benefits, so they don’t support it.” There are many things governments do that only benefit a small segment of the population—like specific research initiatives or industry subsidies—and yet we still do them. If our standard were that every policy has to directly benefit 51% of people to be justified, we’d hardly get anything done. So, I don’t think that’s a fair criticism of foreign aid.
Also, the best evidence we can provide is about whether aid is effective—not whether it tangibly benefits, say, a middle-income family in Kansas. Sometimes there are material connections—like if USAID buys wheat from Kansas and a local farmer benefits—but those are exceptions. Most aid programs don’t have a direct economic payoff for Americans. Instead, the benefit is about soft power, about global leadership, and most importantly, about doing the right thing.
And that moral stand—that’s something a lot of Americans already live by. Most Americans donate to charity. Most care about others. We talk about ourselves as a generous, giving nation. So what’s wrong with living up to that identity as a country? Why shouldn’t our foreign policy reflect those values? … So I don’t think we need to show a financial return on foreign aid to justify it. And I don’t think a lack of direct benefit to Americans is the reason it sometimes loses support.
Economists have been thinking for a long time about the operation of buying and selling in markets. However, they have traditionally spent less time studying what happens inside a firm–a setting in which the forces of supply and demand are replaced by managerial decision-making. Anyone who has had both a good boss and a bad boss knows that it makes a difference, but how and why? Alan Benson and Kathryn Shaw tackle the research on this question in “What Do Managers Do? An Economist’s Perspective” (Annual Review of Economics, 2025, 17: 635-664). They write:
Economic activity requires motivating and coordinating individuals to work toward a common goal. These aims are the purview of managers. What, however, do managers actually do? We outline three defining principles of economic research on managers—technological determinism, skill distinction, and managerial self-interest—and relate them to the set of skills reported by managers on LinkedIn. We highlight “managers of people” and “managers of projects” as a useful distinction for categorizing theoretical, empirical, and descriptive accounts of managers. In light of our three principles, we review research on how managers can create value—namely, by hiring, retaining, training, monitoring, evaluating, allocating, and supervising. We propose that managers apply these skills in different proportions depending on the production technology in which they are embedded …
Empirical studies in this literature often involve finding data from within companies. For example, consider a company with a group of middle managers, all at the same level in the hierarchy, who oversee groups of workers. Moreover, say that workers sometimes are shifted from one manager to another, as business needs evolve. It may become apparent that most workers perform with higher productivity under some managers than others. What are some of the main themes that emerge from this research?
In hiring decisions, the evidence suggests that few managers are good at screening potential workers. A fairly robust literature finds that more productive workers are hired by a process that involves some mixture of highly structured interviews (so that answers are more comparable across applicants), specific testing, or direct observation of the person doing the job, when that is possible. But managers do a better job of hiring if they have incentives to overcome the biases that lead them to prefer hiring from their own friend groups, social groups, or ethnic groups.
In retention of existing workers: “Perhaps the clearest evidence linking people skills and retention is provided by Hoffman & Tadelis (2021). Using data from a large high-tech firm, they find that survey-measured people management skills are highly correlated with greater subordinate retention: Replacing a manager at the 10th percentile in measured people skills with one at the 90th percentile corresponds to a 60% reduction in overall turnover and to declines in turnover among workers estimated to be high performers.” Retention is often easier when a worker and manager share a characteristic: for example, female managers are generally better at retaining female workers. There is also evidence that managers who are encouraged to focus on retention can often improve on this dimension.
In training and mentoring: “Sandvik et al. (2020) provide one of the most comprehensive recent field studies of how managers create value through training. They examine sales agents whose productivity may be tracked by revenue per call. Managers are responsible for improving sales agents’ performance through formal training, probationary screening, and ongoing feedback. Importantly, managers can encourage development by managing workplace knowledge flows, including by setting up policies that encourage peer learning from the best performers.” When it comes to mentoring, the approach that produces more productive workers seems to be regular, mandatory, and broad-based mentoring, rather than selecting a smaller number of people for mentoring.
In the area of motivation: “For instance, Lazear et al. (2015) estimate a two-way fixed effect model in the context of supervisors of workers doing routine tasks. They find that the difference in productivity under a 90th-percentile manager and a 10th-percentile manager is equivalent to the productivity from an additional worker. Benson et al. (2019) estimate manager value added from the manager fixed effects in a regression with salesperson productivity. They find large differences in the productivity of sales workers under different managers: A worker under a 75th-percentile manager has nearly five times the sales of one under a 25th-percentile manager, which is approximately half the raw sales gap between workers at these quartiles.” Some of these differences in managerial ability seem traceable to differences in the “prosocial” skills of managers.
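The two-way fixed effects approach mentioned in the quote can be illustrated with simulated data. The sketch below is my own toy construction, not drawn from the papers cited: workers rotate across managers over time (the variation that identifies the model), and manager “value added” is recovered from an OLS regression with worker and manager dummies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_managers, n_periods = 200, 10, 8

# True (unobserved) effects: worker ability and manager value added
worker_fe = rng.normal(0, 1, n_workers)
manager_fe = np.sort(rng.normal(0, 0.5, n_managers))

# Workers are reassigned across managers each period; output depends on both
rows = []
for t in range(n_periods):
    assignment = rng.integers(0, n_managers, n_workers)
    for w in range(n_workers):
        m = assignment[w]
        y_obs = worker_fe[w] + manager_fe[m] + rng.normal(0, 1)
        rows.append((w, m, y_obs))

w_idx = np.array([r[0] for r in rows])
m_idx = np.array([r[1] for r in rows])
y = np.array([r[2] for r in rows])

# Two-way fixed effects via OLS with dummies (manager 0 normalized to zero
# to avoid perfect collinearity with the worker dummies)
X = np.zeros((len(y), n_workers + n_managers - 1))
X[np.arange(len(y)), w_idx] = 1.0
mask = m_idx > 0
X[np.arange(len(y))[mask], n_workers + m_idx[mask] - 1] = 1.0
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

est_manager = np.concatenate([[0.0], beta[n_workers:]])
# The 90th-10th percentile manager gap is invariant to the normalization
gap_est = np.percentile(est_manager, 90) - np.percentile(est_manager, 10)
gap_true = np.percentile(manager_fe, 90) - np.percentile(manager_fe, 10)
print(f"true 90-10 manager gap: {gap_true:.2f}, estimated: {gap_est:.2f}")
```

The key design point is the rotation: if each worker always stayed with one manager, worker ability and manager quality could not be separated.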
In the area of evaluating and monitoring, the research takes a certain need to limit cheating and shirking for granted, but focuses more broadly on how a manager can improve productivity. A fair process of evaluation can help in “providing workers with greater autonomy, enablement, and incentives for reaching prespecified outcomes, except in situations where a manager’s monitoring and supervision are required to check moral hazard. Much of what economists refer to as monitoring also falls under what practitioners refer to as performance management, highlighting contemporary organizations’ emphasis on using evaluations for the dual purpose of evaluation and professional development (i.e., identifying and training high-ability workers).”
In the area of allocating, economists are familiar with the idea of “good matches” between workers and jobs that happen through markets, but managers often have the challenge of matching existing workers within the firm to the tasks that need to be done. For example: “Using data featuring manager job rotations at a large multinational company, Minni (2023) finds that good managers, defined as those revealed to be good by quick subsequent promotion, more actively move their subordinates both laterally and vertically and enhance their productivity and future advancement. Adhvaryu et al. (2022a), using data from an Indian garment plant, find that the most attentive managers enhance productivity by reassigning workers in response to particulate matter pollution.”
Economists probably still focus more on buying and selling within markets than on what happens inside firms, but digging into the inner workings of firms is becoming more common. This makes sense. Herbert Simon (Nobel 1978), in a 1991 essay on “Organizations and Markets” for the Journal of Economic Perspectives (where I work as Managing Editor), argued for the importance of looking inside the organizations of firms with a (to me) memorable metaphor. Simon wrote:
A mythical visitor from Mars, not having been apprised of the centrality of markets and contracts, might find the new institutional economics rather astonishing. Suppose that it (the visitor—I’ll avoid the question of its sex) approaches the Earth from space, equipped with a telescope that reveals social structures. The firms reveal themselves, say, as solid green areas with faint interior contours marking out divisions and departments. Market transactions show as red lines connecting firms, forming a network in the spaces between them. Within firms (and perhaps even between them) the approaching visitor also sees pale blue lines, the lines of authority connecting bosses with various levels of workers. As our visitor looked more carefully at the scene beneath, it might see one of the green masses divide, as a firm divested itself of one of its divisions. Or it might see one green object gobble up another. At this distance, the departing golden parachutes would probably not be visible.
No matter whether our visitor approached the United States or the Soviet Union, urban China or the European Community, the greater part of the space below it would be within the green areas, for almost all of the inhabitants would be employees, hence inside the firm boundaries. Organizations would be the dominant feature of the landscape. A message sent back home, describing the scene, would speak of “large green areas interconnected by red lines.” It would not likely speak of “a network of red lines connecting green spots.” …
When our visitor came to know that the green masses were organizations and the red lines connecting them were market transactions, it might be surprised to hear the structure called a market economy. “Wouldn’t ‘organizational economy’ be the more appropriate term?” it might ask. The choice of name may matter a great deal. The name can affect the order in which we describe its institutions, and the order of description can affect the theory. In particular, it may strongly affect our choice of the variables that are important enough to be included in a first-order theory of the phenomena.
In the old days of publishing the Journal of Economic Perspectives on paper–like last year–I didn’t worry about publishing an issue in August. Even if potentially interested readers were on vacation or away from the office, I figured that the physical paper copy of the journal would hang around in their mailbox, or perhaps in the piles of paper littering their desktop, and readers would eventually discover the issue.
But in 2025, the JEP and the other journals published by the American Economic Association shifted to online-only publication (although you can purchase a paper copy if you wish). Yes, the American Economic Association announced the Summer 2025 issue of the JEP back in mid-August, and yes, I announced it on this website. But were there potentially interested readers who missed or skimmed over those late summertime announcements, but who are now back at the grindstone of a new academic year? I don’t know, but there seemed little harm in sending out a reminder that the Summer 2025 issue of the JEP (where I work as Managing Editor) is freely available online–as it has been for more than a decade now. You can download individual articles or entire issues, and it is available in various e-reader formats, too. To entice you to take a look, here’s the Table of Contents.
Bitcoin and cryptocurrencies in general are no longer the bright shiny new thing. The Journal of Economic Perspectives, where I work as Managing Editor, was describing and discussing the crypto world a decade ago. What happened to all those predictions that Bitcoin would rapidly displace existing currencies? In “Crypto, tokenisation, and the future of payments,” Stephen Cecchetti and Kermit L. Schoenholtz discuss what has held crypto back, and argue that the momentum behind “stablecoins” is unlikely to match the possibilities of “tokenization” run by ginormous global financial firms like JPMorgan and BlackRock (CEPR Policy Insights 146, August 2025).
Why have cryptocurrencies like Bitcoin not taken off as their enthusiasts predicted? Cecchetti and Schoenholtz write:
By some estimates, there are over 20,000 cryptoassets – instruments whose ownership is recorded on a ledger based on some form of cryptography (FCA 2023). At this writing, these have a cumulative value of about $4 trillion, with Bitcoin accounting for roughly 60% of the total. While it functions as a store of value, outside of the crypto world Bitcoin is still neither a common means of exchange nor a popular unit of account. … When historians look back at the decades following Bitcoin’s introduction, they will ask: “Why has crypto not ‘taken off’ in the way its creators and early backers hoped?” We offer three tentative answers.
First, despite the hype about the speed and efficiency of digital transactions, it turns out that transfers of Bitcoin and Ether – the leading cryptoassets – remain slow and costly. On 14 August 2025, it took an average of more than 15 minutes to confirm a Bitcoin transaction. And that time varies widely: on several days in September 2024, it took more than 2,000 minutes! This variation makes settlement and finality difficult to predict. Small retail payments are especially costly (say, 5% for a payment of $20) in part because even the limited number of retailers who are willing to accept Bitcoin in payment typically do not wish to hold it.
Second, the competition from traditional finance is intense, helping to lower costs and speed up payments. Consider, for example, the world of cross-border remittances. Critics argue that costs in the traditional sector are stubbornly high. In fact, for a standard-sized remittance, the average cost faced by a savvy consumer has halved over less than a decade to less than 3% (World Bank 2024, Figure 3). And there is strong evidence that further gains are coming. Indeed, for a range of recipient countries, Figure 3 shows how much less than the average cost (black bars) the cheapest provider (red bars) charges. The message is that as consumers gain familiarity with what is available, the benefits of competition among traditional providers are likely to intensify, further lowering average costs.
Third, while both governments and private groups are expanding their efforts to track illicit crypto payments, the reputational damage from criminal activity lingers. In addition, spectacular failures in the past – such as the collapse of the FTX exchange (Cecchetti and Schoenholtz 2022) – encourage consumer doubts about the reliability of crypto custodians. Similarly, dire headlines about crypto-related kidnapping and torture probably deter potential crypto users who do not trust custodians and instead would consider owning a digital wallet (Horvath 2025).
So what is taking off? The answer seems to be “payments stablecoins.” The key difference is that the value of a cryptocurrency like Bitcoin isn’t tied to anything else: indeed, some of those who buy Bitcoin are hoping for its price to rise. In contrast, the value of a stablecoin is based on the ownership of an underlying asset, like US Treasury bonds or a mutual fund that invests in high-quality bonds. Thus, the value of stablecoins is neither going to rise nor fall by much–which makes them useful for transactions. Cecchetti and Schoenholtz write:
‘[P]ayments stablecoins’ … are reserve-backed tokens with value pegged to government-issued currency, predominantly the US dollar. Smart contracts on the Ethereum blockchain control the two largest stablecoins, Tether’s USDT and Circle’s USDC. These originated as a stable-valued means of payment for people trading inside the crypto world. They quickly turned into the primary bridge between the traditional financial system and the crypto world, allowing investors and speculators to shift funds between traditional financial instruments (equity, bonds, bank balances, and the like) and crypto assets (Bitcoin, Ether, Solana, etc.). At this writing, this remains stablecoins’ primary use.
Ironically, stablecoin issuers (and some other promoters of crypto) are now strong advocates of government regulation. Their goal is to legitimise crypto in ways that can draw participants from the traditional financial system. Put slightly differently, the dream of a fully decentralised system operating without intermediaries or governments has given way to a far less radical vision that requires government oversight and the legal enforcement of property rights.
Thus, the Guiding and Establishing National Innovation for U.S. Stablecoins Act, for obvious reasons usually called the GENIUS Act, was signed into law by President Trump in July. It creates a short list of safe assets in which stablecoins are allowed to invest. It requires that stablecoins do not pay interest, although they can offer “rewards” to holders of stablecoins that look a lot like interest. It requires that stablecoins comply with rules like know your customer (KYC), anti-money laundering (AML), and anti-terrorist financing (ATF) standards–which is to say that they aren’t very anonymous.
But again, stablecoins are basically a halfway house for investors to move money between cryptocurrencies like Bitcoin and more conventional financial assets. They aren’t going to rise and fall in value, and they aren’t a useful method for carrying out other everyday transactions, either. So their ultimate usefulness seems limited.
Thus, Cecchetti and Schoenholtz point to the new kid on the block for financial technology: “tokenised deposits and tokenised money market funds.” In particular, they discuss “JPMorganChase’s tokenised deposit (JPMD) and BlackRock’s tokenised money market fund (BUIDL).” The first is still experimental; the second has just started. The idea here is that these products will not just be available to those with accounts at JPMorganChase and BlackRock, but any institutional (or approved) customer will be able to use these products to make deposits/withdrawals within the financial ecosystem of these giant firms.
Outside of China, JPMorganChase is the largest global bank (with assets of roughly $4 trillion) and BlackRock is the largest global asset manager (with assets under management of more than $12 trillion). When these gigantic institutions offer customers a product, they do it inside an ecosystem with tens of millions of existing customers and a wide array of complementary products and services. In this context, as the number of customers using JPMD or BUIDL increases, the internal (‘on us’) market will grow more liquid, with the potential for instant settlement both within and across borders at minimal cost. …
These tokenised assets differ from existing deposit accounts and money market funds in two important ways. First, they clear and settle around the clock. And second, the plan is that they will allow for programmable settlement and automated functions through smart contracts. They also can trade either on a proprietary centralised ledger or, using smart programming to provide access only to approved clients, on a public, distributed ledger. … Imagine, for example, that a few internationally active systemic banks decide to accept each other’s tokenised deposits instantly at par. In effect, they would be implementing a digital version of the 19th century US cheque clearinghouses that assured the expeditious settlement of most payments, imposed credit standards, and even acted as private lenders of last resort (Bernanke 2011). Such a 21st century clearinghouse would be a too-big-to-fail juggernaut.
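The idea of banks “accepting each other’s tokenised deposits at par” can be sketched as a toy ledger operation. Everything below (bank names, clients, balances) is hypothetical; a real system would run as smart contracts on a shared ledger, and the interbank claim that the transfer creates would need its own settlement, which is elided here.

```python
class TokenLedger:
    """A bank's internal ledger of tokenised deposit balances."""

    def __init__(self, bank):
        self.bank = bank
        self.balances = {}  # client -> token balance

    def mint(self, client, amount):
        self.balances[client] = self.balances.get(client, 0) + amount

    def transfer(self, sender, receiver, amount):
        # 'On us' payment: both parties bank at the same institution
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient tokens")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount


def settle_at_par(ledger_a, ledger_b, payer, payee, amount):
    """Cross-bank payment: payer's tokens at bank A are burned, and the
    payee is credited an equal amount of bank B tokens (par acceptance).
    The resulting claim of bank B on bank A is netted elsewhere (elided)."""
    if ledger_a.balances.get(payer, 0) < amount:
        raise ValueError("insufficient tokens")
    ledger_a.balances[payer] -= amount  # burn at bank A
    ledger_b.mint(payee, amount)        # mint at bank B, 1:1


bank_a = TokenLedger("Bank A")
bank_b = TokenLedger("Bank B")
bank_a.mint("alice", 1000)
settle_at_par(bank_a, bank_b, "alice", "bob", 400)
print(bank_a.balances, bank_b.balances)
# {'alice': 600} {'bob': 400}
```

The “at par” promise is what makes this a clearinghouse rather than a market: bank B credits a full 400, not a discounted amount, which is also why mutual credit standards matter.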
In short, the financial technology revolution has come a long way since the Bitcoin enthusiasts of 10-15 years ago imagined circumventing national currencies and government regulations. The next iteration may be that a central method of settling everyday payments starts to happen with “tokenized” deposits and money market accounts run by financial megacorporations.
Compared to workers in most other high-income countries, Americans tend to work more hours per year. Here’s a figure from the OECD, which is based on taking the total number of hours worked in an economy and dividing it by the number of workers for the most recent year available. Because different countries will measure categories like “hours” and “workers” somewhat differently, the results should not be taken as precise.
But look at the size of the gaps! An American worker is at 1,811 hours/year, while a German worker is at 1,340 hours/year. If one thinks in terms of 40-hour work weeks, the German worker is working about 12 weeks per year less.
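The arithmetic behind that comparison, as a quick sketch using the OECD figures quoted above:

```python
us_hours, german_hours = 1811, 1340  # OECD annual hours per worker, as quoted above
gap_hours = us_hours - german_hours
gap_weeks = gap_hours / 40  # converting to 40-hour work weeks
print(f"gap: {gap_hours} hours/year, or about {gap_weeks:.1f} weeks")
# gap: 471 hours/year, or about 11.8 weeks
```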
[F]or many decades, the United States was a place where people worked less. Before 1900, American hours were lower than in a number of European countries, such as Belgium, France, Germany, the Netherlands, and Italy. The U.S. was first to go to the five-day week. In 1950, Germany, France, the U.K., Italy, and Spain all had longer hours. Even through the 1960s, work schedules in Europe exceeded those in the U.S. Then the two regions took different paths. U.S. hours stagnated and rose. Europeans continued a century-long trajectory of reducing work time.
This divergence seemed to start happening in the 1970s–which suggests that it is not the result of some deep-seated cultural difference going back a century or more, but instead resulted from more recent political and social choices. Schor suggests several underlying factors that might lead the American labor market toward more hours per worker.
As one example, many full-time workers in the US labor market get their health insurance through their employer. Most economists believe that although the employer writes the check to pay the cost, the economic value of health insurance is actually paid by workers in the form of wages that are lower than they would otherwise have been. Schor writes:
It [employer-paid health insurance] functions like a tax on employment, giving employers an incentive to hire fewer people for more hours. This was an accidental and unfortunate pairing; during World War II, employers began offering health insurance to attract workers because wages were controlled by the government to keep wartime inflation at bay. Little did anyone expect this would distort the labor market eighty years later.
Another reason, Schor argues, is that many US jobs are paid a salary, rather than an hourly wage. Of course, salaried workers do not receive additional pay if they work additional hours–and so employers have an incentive to push such workers for additional hours.
As Schor points out, the overall question is whether increases in productivity translate into higher wages or fewer hours worked. Through a variety of mechanisms like higher levels of unionization, European countries in the last half-century have generally used higher productivity to mean fewer hours worked, while the US has generally used higher productivity to mean higher wages. Schor writes:
In recent decades, digitization has transformed work in many occupations and industries, but in the U.S. hours haven’t fallen. I’ve argued that’s due to biases in the economy that have operated against hours reductions. Europe has some of these biases, but stronger unions and welfare states and a more equal income distribution have reduced those pressures, so European countries have continued to translate productivity growth into free time. Since 1973, I’ve calculated that the U.S. has taken less than 8 percent of its increased productivity to reduce hours, while western European countries have taken much more—generally three to four times that amount.
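Schor’s decomposition can be illustrated with a toy calculation. The sketch below is my own (the starting hours and income are hypothetical; only the 8-percent leisure share comes from the quote): a given productivity gain is split between shorter hours and higher income, with the output-per-hour identity holding either way.

```python
import math

def split_productivity_gain(growth, leisure_share, hours0=2000, income0=50000):
    """Split a productivity gain between shorter hours and higher income.

    growth: proportional growth in output per hour (e.g., 1.0 = doubling)
    leisure_share: fraction of the (log) gain taken as hours reduction
    """
    log_gain = math.log1p(growth)
    hours1 = hours0 * math.exp(-leisure_share * log_gain)
    income1 = income0 * math.exp((1 - leisure_share) * log_gain)
    # Check the identity: output per hour rises by exactly (1 + growth)
    return round(hours1), round(income1)

# The same doubling of productivity, two stylized choices:
print(split_productivity_gain(1.0, 0.08))  # US-style: ~8% of the gain to leisure
print(split_productivity_gain(1.0, 0.30))  # European-style: three to four times more
```

Under the first choice almost all of the doubling shows up as income; under the second, annual hours fall substantially while income still rises, which is the tradeoff Schor describes.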
Of course, there are tradeoffs for a society that makes choices to take productivity gains in the form of leisure, rather than in the form of increased income. Schor advocates for a gradual move to a four-day work week. Whether one agrees with that goal or not, her essay is a reminder that, often without any explicit consideration of the range of tradeoffs between leisure and income, political and social arrangements can strongly affect this choice over a few decades.
Both the budgets and methods of US statistical agencies like the Bureau of Labor Statistics and the Bureau of Economic Analysis have come under political challenge. This double-edged attack raises a question of sincerity: If the real goal is to enable these agencies to update their methods for the information age–for example, to rely less on survey-based instruments and more on administrative data collected for other purposes–then this policy requires an expansion of their budgets. Given that the budgets of these statistical agencies are barely a ripple in the ocean of federal spending, raising them would make no perceptible difference in the long-run trajectory of federal spending, deficits, and debt.
On the other hand, if the budgets and staffing of US statistical agencies are to be continually cut, which has been the pattern over the last decade or so and even more ferociously in the present, then expressing a concern that these agencies should update their methods is just empty talk. It gives rise to a reasonable suspicion that the objection is not really to their statistical methods: indeed, one suspects many of those complaining about the quality of the output of these agencies don’t actually know the mixture of methods used, or why the result is a combination of early and revised estimates. Instead, the critics tend to praise the economic numbers they like, while claiming that all the numbers they don’t like are just political bias, without any recognition that all the numbers are produced using common methods.
The U.S. federal statistical system continues to produce timely and high-quality economic indicators, but it is built on a foundation that was largely conceptualized and developed in the mid-20th century. This system remains heavily survey-centric, relying on a mix of high- and low-frequency surveys of households and businesses. With the digitization of nearly all aspects of modern life, administrative records and private-sector digital data are now more accessible than ever. …
At the same time, the current system is under increasing strain and requires fundamental re-engineering. Survey response rates have been steadily declining, a trend that accelerated during the pandemic. Reaching households by phone has become more difficult, and many businesses find that survey instruments — even online forms — do not align with their internal information systems. In a telling development, the U.K.’s Office for National Statistics (ONS) recently suspended (temporarily) the publication of unemployment estimates based on household surveys due to critically low response rates.
Beyond operational challenges, the accelerated pace of economic change increasingly challenges the capacity of current statistical systems to measure innovation and productivity growth effectively. Accounting for the effects of product turnover and quality change in estimates of inflation, real output, and productivity has become increasingly difficult under the prevailing survey-based framework.
Federal statistical agencies are acutely aware of these challenges and the need to modernize. However, the imperative to maintain the flow of official statistics along with the lack of investment and limited resources overall have only permitted incremental steps to modernize. The time has come to invest in a 21st-century statistical system that fully harnesses the potential of the digital economy. Such a system would deliver more accurate, timely, and detailed data while reducing the reporting burden on households and businesses. During the transition, it will be essential to maintain continuity and comparability; for example, legacy and modern systems will need to operate in parallel for a period of time. In addition, a modern system will likely blend survey, administrative, and private-sector data. Investing now is critical to building a future-ready infrastructure for economic measurement.
As one example that Haltiwanger points out, data on measures of inflation, like the Consumer Price Index, have traditionally been collected by hundreds of actual “shoppers” going to actual stores all over the country every month and recording the prices for a specific selection of items. Now that barcodes have become commonplace, it’s possible to collect barcode data on goods with specific characteristics, along with data on prices and quantities sold of those goods. But redesigning the measure of inflation based on this data isn’t a simple task–and ultimately may not be a cheaper approach, either.
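To make the barcode idea concrete, here is a minimal sketch of a matched-model price index computed from hypothetical scanner data. The UPC identifiers, prices, and quantities below are invented for illustration; actual CPI methodology is far more elaborate, involving sampling, weighting, and quality adjustment.

```python
# Toy matched-model price index from hypothetical barcode (scanner) data:
# prices and quantities sold for the same UPCs observed in two periods.
prices_0 = {"upc_a": 2.00, "upc_b": 5.00}   # base-period prices
prices_1 = {"upc_a": 2.20, "upc_b": 5.25}   # current-period prices
qty_0 = {"upc_a": 100, "upc_b": 40}         # base-period quantities sold

# Laspeyres index: cost of the base-period basket at current prices,
# relative to its cost at base-period prices.
base_cost = sum(prices_0[u] * qty_0[u] for u in qty_0)   # 400.0
new_cost = sum(prices_1[u] * qty_0[u] for u in qty_0)    # 430.0
laspeyres = new_cost / base_cost
print(f"Price index: {laspeyres:.3f}")   # 1.075, i.e. 7.5% inflation
```

One design question this sketch glosses over is product turnover: when a UPC disappears or a new one enters, the matched-model approach needs a rule for linking it in, which is exactly where much of the real difficulty lies.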
In addition, if the government statistical agencies are going to rely more on data collected by private firms–like actual barcode data from actual consumer purchases at stores–there will be questions about making sure that privacy is protected and the data remains anonymized. Again, this seems quite possible, but adds a level of complexity and cost.
For politicians, all that matters in government statistics is the final number that pops out of the calculation. But for statisticians and economists, what matters is spelling out a systematic method that can be used over time to produce comparable results. In addition, a systematic method can be understood, and criticized, and can evolve over time. From this perspective, the actual numbers that emerge at the end of the month or the quarter are less important than using a systematic and well-specified approach to estimating the number.
The Securities Industry and Financial Markets Association, typically referred to as SIFMA, is a trade association: that is, it’s an organization made up of investment banks, asset managers, brokers, and others. Among other rule-setting, lobbying, and public education missions, it publishes an annual databook: this year, the SIFMA Capital Markets Handbook 2025 (July 2025). Here are a few charts that help to convey the size and structure of US capital markets in the global economy.
The two top panels of this figure show global fixed income markets–that is, bonds–for 2024 (on the left) and 2014 (on the right). The US has by far the biggest bond markets in the world, and had about 40% of the global market in both 2014 and 2024. However, it’s interesting to note that the share of global fixed income markets based in China has expanded dramatically in the last 10 years, from 7% to 17.3%. This explosion in debt in China is one of the problems plaguing China’s economy: the borrowing was used to boost measures of economic growth over the last decade, but a number of those building and development projects didn’t work out all that well, and now the debt payments are coming due.
The bottom two panels of the figure show the division of global equity markets–that is, stock markets–in 2014 and 2024. The remarkable fact here is that US stock markets had 37.8% of global market capitalization in 2014, but by 2024, this had risen to essentially half of all global stock market capitalization. American investors have become used to the idea of rising stock markets: the rest of the world, not so much.
But these figures raise a question: if the US is dominating the global bond and stock markets, how do firms in the rest of the world raise money when they need it? The answer is “banks.” To put it another way, US firms are much more likely to be affected by the judgments of outside investors, because they can see the value that these outside investors place on the company in the stock market and when issuing new bonds. In the rest of the world, it is more common to have “bank-centered” finance for companies, in which a company has a long-standing relationship with one or a few banks as its way of obtaining capital.
The top panel shows financing for non-financial corporations in the US and the rest of the world. The US and the UK have a more “equity-centered” financial system, while the EU and Japan have a more “bank-centered” system. In China, firms are clearly very dependent on bank loans–mostly from state-owned banks.
The bottom panel makes a similar point, focused just on debt financing. You can see that in the US, when corporations borrow, they do so mainly by issuing bonds. In the rest of the world, when corporations borrow, they do so mostly by taking out bank loans.
There’s a long and heated debate over whether it is “better” for firms to receive their debt financing from a bank that knows the firm well (and may even own stock in the firm) or to rely on issuing bonds to investors in the market. As an American, I’m partial to the bond markets, but clearly, either approach can be functional, as long as the riskiness of loans is properly evaluated and priced accordingly.
In the long run, the standard of living for people in an economy rises as the amount of output produced per hour of work–that is, productivity–rises. There’s reason for concern when productivity in an industry falls for a sustained period of time. In particular, productivity seems to be falling in construction of housing, at the same time that there is widespread public concern that the price of housing is becoming less affordable to those with mid-level incomes over time.
Chen Yeh of the Richmond Fed lays out some basic facts in “Five Decades of Decline: U.S. Construction Sector Productivity” (Economic Brief 25-31, August 2025). The dark solid line shows value-added per worker in the US economy since 1948 (with the 1949 level arbitrarily set to equal 1.0). The solid green line shows value-added per worker in the construction sector alone. Notice that value added per worker in the US economy as a whole and in the construction sector more-or-less tracked each other in the 1950s and 1960s, but since then, value-added per worker in construction has actually dropped over the last half-century or so. (I’ll talk about the meaning of the dashed line in a moment.) Yeh writes:
Similarly, a 2023 working paper estimates that, if construction productivity had grown at even a modest 1 percent annually since 1970, annual aggregate labor productivity growth would have been roughly 0.18 percentage points higher. This difference would have resulted in current aggregate productivity — and likely income per capita — being about 10 percent higher than actual levels.
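As a rough check on that arithmetic: compounding an extra 0.18 percentage points of annual growth over the half-century since 1970 does land close to a 10 percent difference in the productivity level. The exact end year below is an assumption for illustration.

```python
# Back-of-the-envelope: an extra 0.18 percentage points of annual
# productivity growth, compounded from 1970 to an assumed end year,
# raises the productivity *level* by roughly 10 percent.
years = 2023 - 1970          # assumed horizon for the counterfactual
extra_growth = 0.0018        # 0.18 percentage points per year
level_gap = (1 + extra_growth) ** years - 1
print(f"Level difference after {years} years: {level_gap:.1%}")
```

Running this gives a level gap of almost exactly 10 percent, consistent with the working paper’s claim.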
So what’s going on here? One obvious question is whether this gap is just a statistical issue, reflecting something about the way that productivity is being measured. For example, in the “value-added per worker” graph, how confident are we that the “labor” measured here includes, say, all subcontractors, including undocumented immigrants? Or as another issue, the calculation of value-added per worker requires figuring out how much of the value of, say, a house came from workers vs. other costs of building a house, like materials. If these other costs are overestimated, then value-added per worker may be underestimated (the dashed line in the first figure above shows the potential result of using one different method of adjusting for non-worker costs over time). But as Yeh points out, studies that have looked into these measurement questions more closely, using a variety of alternative methods, continue to find falling productivity in construction.
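To see the measurement point in miniature, here is a toy calculation with invented numbers showing how overstating non-labor costs mechanically deflates measured value-added per worker:

```python
# Hypothetical construction firm: value added is gross output minus
# intermediate inputs (materials, etc.); productivity here is
# value added per worker. All numbers are made up for illustration.
gross_output = 1_000_000          # annual revenue from completed homes
workers = 10

true_materials = 600_000          # actual materials and other input costs
overstated_materials = 700_000    # measurement error: inputs overestimated

true_va_per_worker = (gross_output - true_materials) / workers         # 40,000
measured_va_per_worker = (gross_output - overstated_materials) / workers  # 30,000
# Overstating input costs by 100,000 shows up as a 25% drop in
# measured productivity, even though nothing real changed.
```

The point is only directional: an input-cost mismeasurement passes straight through to the value-added numerator, so the productivity measure can fall without any change in how houses are actually built.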
A plausible explanation here is that, in many urban areas, the ability to do large-scale homebuilding on nearly empty land went away some decades ago. In addition, the rules that impose limits on land use and raise costs of construction have become more stringent. One can imagine a possibility that, even as these other factors tended to push up costs of construction, innovations in materials used or methods of building could have offset these higher costs–but that has not actually happened.
In an essay in the recently released Summer 2025 issue of the Journal of Economic Perspectives, Brian Potter and Chad Syverson consider the relationship between “Building Costs and House Prices” using US data over a time period reaching back more than a half-century. They find that there has never been a close overall correlation when looking at city-level data between movements of building costs and housing prices. In recent decades, for example, the run-up in housing prices in urban coastal cities from California to New York has not been caused by a corresponding rise in local costs of building. But they also point out that in a number of other cities–Atlanta, Chicago, Detroit, Houston, Minneapolis, and others–the long-term rise in housing prices over the decades has actually corresponded relatively closely to the rise in local building costs.
Finding ways to encourage productivity gains in housing could come in various ways. But one overall step would be to think about ways that builders could take greater advantage of economies of scale, both in allowing and encouraging the building of larger housing projects where that is feasible, and also in allowing and encouraging the use of new technologies in homebuilding (flexible but modular housing?) that could help to bring down costs.
Scott Wolla of the St. Louis Fed interviews Gary Hoover on his thoughts about teaching economics (“Gary Hoover: Teaching with Purpose,” July 1, 2025, transcript available). Here are a few excerpts, but there’s more in the full version:
How he got interested in economics
I’m a student who is in high school, and I’m watching my mother. At times my mother would have up to three jobs that she would work, and she’d work very hard at those jobs. Her mantra to me was: “If you just work hard, we’re going to get ahead.” But that didn’t seem to jive with what I was seeing. I mean, she was working very hard, but we never seemed to be getting that far ahead. I was told that by working hard, this would make one quite financially successful. But if that were the case, then I believed that my mother should be some type of millionaire, maybe even a billionaire, if it was just hard work that was necessary.
I suspected that my mother should have been rich. She wasn’t. So, I went searching for answers to that specific question: Why is it that just working hard didn’t always guarantee moving up the economic ladder? And where did I go to find answers to that question? … I initially thought about economics when I was asking these questions. So, I went to a high school teacher who was actually a social studies teacher. She said, “Look, the questions that you’re asking, about hard work and economic success, they can be addressed through economics.” … What she ends up doing is contacting the only African American economist that she knows—the late Walter Williams. … And she says, “I’ve got this young man who is interested in economics, and he’s asking these questions about inequality and race, and I don’t think that I have the right way to answer him. Could you please answer him?”
What Walter Williams does is he starts sending me his books. Interestingly, he doesn’t know me. He doesn’t know who I am. I’m some kid in a high school in Milwaukee, and he is a distinguished professor at George Mason. But he sends me his books, and we start a correspondence that lasts for the rest of his life.
So, even though later on I become an economist—I go all the way through graduate school—I still keep up with him. Later, our economic reasoning, our economic thinking, really diverged to the point that I don’t think we ended up seeing the world the same way. But I was so grateful that this person who did not know me reached out to me and started sending me books and kept writing back and forth with a high school junior—not corresponding with Nobel laureates—a high school junior who had one simple question about his mom. That was what kept me in economics.
Admiring the toolkit?
I often tell my graduate students who are getting ready to finish up their Ph.D.s: “Look, my job is to transform you from a student who is looking for the answers to a scholar who’s looking for the questions. I’m actually looking for the good questions—that’s what a scholar does. Because a student needs to be given tools, I’m going to give you a toolbox. I’m going to give you tools that will allow you to find equilibrium and to take the second derivative. So, I’m going to give you a big bag of tools. And then you have to go out as a scholar and find the interesting things to build with them.”
I find economics interesting in that economists are the only people who have a toolkit, but they would build a very nice-looking house and then spend all day admiring the hammer—that I find to be quite odd—as opposed to saying, “Look, look at what I can do with this!”
Would you as a student have wanted to take the class you are teaching as a professor?
We, as economics instructors, need to quit thinking about ourselves in the front of the class and start remembering ourselves back in the class. Would you take a class from you? Are you good enough to teach this class? Are you doing it well enough?
If I went back in time, I’d tell myself [as a student]: “Pay attention, because there are several things I never want you to do when you’re up front. But there are some things I want you to do. So, I want you to watch.” And I’ve got to say that I think I’m at the point where I’m ready to teach previous me. It took only 35 years, but I think I’m ready to teach the class that I could actually appreciate. And I think that that’s where we, as economics instructors, have to be. Would you take your class? Would you?