Odebrecht: “The Largest Foreign Bribery Case in History”

If you don’t know about the Odebrecht case, which the US Department of Justice called “the largest foreign bribery case in history,” Nicolás Campos, Eduardo Engel, Ronald D. Fischer, and Alexander Galetovic tell the story and offer some insights in “The Ways of Corruption in Infrastructure: Lessons from the Odebrecht Case” in the Spring 2021 issue of the Journal of Economic Perspectives (where I work as Managing Editor). They describe the company’s rise and fall this way:

In 2010, the Swiss business school IMD chose Odebrecht, a Brazilian conglomerate, as the world’s best family business. Odebrecht was chosen for the excellent performance of its companies, its continuous growth, and its social and environmental responsibility. Sales had quintupled between 2005 and 2009, and Odebrecht had become Latin America’s largest engineering and construction company and ranked 18th worldwide among international contractors (Engineering News-Record Magazine 2009).

By 2015, however, Odebrecht chief executive Marcelo Odebrecht had been arrested on corruption charges. Nine months later he was sentenced to more than 19 years in prison. The Odebrecht case, as it came to be known, involved bribe payments in ten countries in Latin America and two countries in Africa. Deltan Dallagnol, lead prosecutor in Brazil, commented (as reported by Pressly 2018): “The Odebrecht case leaves you speechless. This case implicated almost one-third of Brazil’s senators and almost half of all Brazil’s governors. A single company paid bribes to 415 politicians and 26 political parties in Brazil. It makes the Watergate scandal look like a bunch of kids playing in a sandbox.”

These bribery cases mainly involved bidding on large government infrastructure projects. The US Department of Justice became involved because many of the bribes were paid through US banks. Here are a few of the details that jumped out at me.

  1. The sheer brazenness of the bribery was, in its own way, impressive. The Odebrecht company had an actual division devoted to bribery, with standardized reporting and procedures. The JEP authors write:

By 2006, bribery at Odebrecht had become so institutionalized that the company created the Division of Structured Operations (DSO), a stand-alone department dedicated to corruption. According to the plea agreement between the Odebrecht chief executive officer Marcelo Odebrecht and the US Department of Justice, the DSO specialized in buying influence through legal and illegal contributions to political campaigns and also in paying bribes to public officials and politicians. Within the DSO, three full-time executives and four experienced assistants were responsible for paying bribes to foreign accounts. Bribe payments followed a clear organizational flow. A contract manager would deal with potential bribe recipients—public officials and politicians—and reported to the country manager. The country manager could approve small bribes paid with local funds. Larger bribes were vetted by an executive reporting directly to the Odebrecht chief executive officer who often made the final decision.

  2. Odebrecht wasn’t a case where huge bribes produced huge company profits. In this sense, the pre-existing rules on public bidding did limit the most egregious forms of corruption. Indeed, the authors suggest that the size of the bribes was pretty much equal to the size of the additional profits. Thus, this seems like a case of opportunistic bribes paid at key points, aimed less at increasing profit margins than at dramatically expanding the size of Odebrecht (as noted above, company sales quintupled from 2005 to 2009).

  3. The countries where the bribery took place typically had rules to try to ensure fair bidding for public projects. But as it turned out, these rules often had some key vulnerabilities. One was that the rules often required that firms could only bid if the company had certain technical and business qualifications. This sounds reasonable enough–but it meant that when the qualifications had a degree of subjectivity, it was sometimes possible to bribe the public officials who were evaluating those qualifications, so that they would disqualify some other potential bidders. But perhaps the main vulnerability was that when a company won the project, it could later re-open negotiations for higher payments, or for additional payments to expand the scale of the work, and so on. Thus, a firm could bid low on a project, but then bribe the re-negotiators later in the process.
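To make that renegotiation vulnerability concrete, here is a stylized sketch with hypothetical numbers (my own, not from the JEP article) of why bidding low and then bribing the renegotiators can beat an honest bid:

```python
# Stylized illustration (hypothetical numbers): why a low-ball bid plus
# bribed renegotiation can beat an honest bid in a lowest-price auction.

honest_bid = 120          # rival's realistic estimate of cost plus margin
lowball_bid = 100         # corrupt firm bids below cost to win the contract
true_cost = 115           # what the project actually costs to build
renegotiated_extras = 40  # overruns and "expanded scope" approved later
bribe = 10                # payment to officials who approve the renegotiation

# The low bid wins the open auction...
assert lowball_bid < honest_bid

# ...but the final price to taxpayers exceeds the honest bid,
# and the firm still clears a profit after paying the bribe.
final_price = lowball_bid + renegotiated_extras
profit = final_price - true_cost - bribe

print(f"Final price paid: {final_price}")   # 140, vs. an honest 120
print(f"Corrupt firm's profit: {profit}")   # 15
```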

The public policy lessons here–applicable around the world–are to make the rules about bidding on public projects as objective as possible, and to have a high degree of openness when contract renegotiations come up, including the possibility of putting additional work out for open bids.

The Potential of Geothermal Energy

There isn’t likely to be one magic bullet that addresses all the issues related to carbon emissions in the atmosphere. Instead, you take your partial solutions where you can find them, and call it progress. Could geothermal energy be one of those partial solutions? Eli Dourado offers a useful overview of the state of play in “Harnessing the Heat Beneath our Feet” (PERC Reports, Summer 2021).

Geothermal energy finds ways to harness heat from the earth’s core and turn it into electricity. In some ways, this is old technology. As Dourado writes:

Humans have produced electricity from the earth’s subsurface heat since 1904, when Italians first harnessed geothermal steam at Larderello, in Tuscany. Today, the same site produces enough power for 10 million Italian households, about 10 percent of the world’s geothermal electricity. … Conventional geothermal technology is only deployed at sites where subsurface heat makes itself evident through visible features like hot springs, geysers, and fumaroles. The main geothermal field at Larderello is called Valle del Diavolo—Valley of the Devil—because it contains springs of boiling water.

Thus, the question for geothermal for some time has been whether it was limited to the few locations where the heat was literally bubbling up to the earth’s surface, or whether it was possible, in a substantially larger number of locations, to tap heat deeper below the earth’s surface. It turns out that some of the underground drilling technologies used in fracking can also be used, in a different way, to set up geothermal sources of electricity. One option is what Dourado called “conventional geothermal”:

Conventional geothermal wells are technically hydrothermal—they work by extracting steam from a production well. Typically this steam flows upward through hot porous rock, acquiring heat energy along the way, but then gets trapped under impermeable caprock. Placing a production well where the steam is trapped gives it only one way to go—up the well, where, at the surface, it can power a turbine to produce electricity. A second well, called an injection well, is used to put water back into the system, without which the supply of steam would eventually dry up and lose pressure. Producing hydrothermal energy is pretty simple, and it would be very cheap at scale, but it requires this subsurface configuration—hot porous rock topped with impermeable capstone—to work.
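For a rough sense of the magnitudes involved, here is a back-of-the-envelope sketch, using illustrative numbers of my own rather than anything from Dourado’s article, of how steam flow translates into electric power:

```python
# Back-of-the-envelope geothermal output (illustrative numbers only):
# electric power ~ steam mass flow x usable enthalpy drop across the
# turbine x mechanical-to-electric conversion efficiency.

mass_flow_kg_s = 50.0        # steam extracted from the production well (kg/s)
enthalpy_drop_kj_kg = 600.0  # usable enthalpy drop across the turbine (kJ/kg)
efficiency = 0.80            # turbine/generator conversion efficiency

power_mw = mass_flow_kg_s * enthalpy_drop_kj_kg * efficiency / 1000.0
print(f"Electric output: {power_mw:.0f} MW")  # ~24 MW, a mid-sized plant
```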

The current research is to find ways of generating geothermal electricity in a wider range of subsurface configurations. As one example, there are “closed-loop” designs where the new drilling technologies first go down, and then go horizontally underground. The “working fluids” that are injected to generate steam are recaptured and re-injected. Start-up companies and academic researchers are trying out different approaches. It turns out that in a US context, most of the sites that might be suitable for geothermal electricity are in the western part of the country, and thus, given the enormous amounts of western land still owned by the federal government, the willingness of the feds to allow development of geothermal resources on federal lands is likely to make a big difference.

Another recent overview of geothermal is “Can Geothermal Power Play a Key Role in the Energy Transition?” by Jim Robbins in Yale Environment 360, an online magazine published by the Yale School of the Environment (December 22, 2020). Robbins offers an example of geothermal energy–albeit for heating rather than electricity–from Boise, Idaho:

A river of hot water flows some 3,000 feet beneath Boise, Idaho. And since 1983 the city has been using that water to directly heat homes, businesses, and institutions, including the four floors of city hall — all told, about a third of the downtown. It’s the largest geothermal heating system in the country. Boise didn’t need to drill to access the resource. The 177-degree Fahrenheit water rises to the surface in a geological fault in the foothills outside of town. It’s a renewable energy dream. Heating the 6 million square feet in the geothermally warmed buildings costs about $1,000 a month for the electricity to pump it. (The total annual cost for depreciation, maintenance, personnel, and repair of the city’s district heating system is about $750,000.)
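A bit of quick arithmetic on the numbers Robbins quotes shows just how cheap this heat is:

```python
# Quick arithmetic on the Boise figures quoted above.

heated_area_sqft = 6_000_000  # square feet served by the district system
annual_system_cost = 750_000  # depreciation, maintenance, personnel, repair
monthly_pumping_cost = 1_000  # electricity to pump the hot water

cost_per_sqft = annual_system_cost / heated_area_sqft
print(f"All-in cost: ${cost_per_sqft:.3f} per square foot per year")  # $0.125
print(f"Pumping alone: ${monthly_pumping_cost * 12:,} per year")      # $12,000
```

About 12.5 cents per square foot per year, all-in, is a strikingly low heating cost.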

Of course, most places aren’t located above an accessible river of steaming hot water. But the warmth of geothermal can be used for “district heating” in some cases; in Iceland, which sits on abundant geothermal resources, about 90% of all homes are heated with geothermal energy. In addition, geothermal for electricity is already in wider use in the US than you might think. Robbins notes:

Even though geothermal is barely on the alternative energy radar, the U.S. already produces 3.7 gigawatts (GW) of geothermal electricity, enough to power more than 1 million homes. It’s the world’s leading producer — primarily in central California and western Nevada. California has 43 operating geothermal generating plants, and is about to build two more.

There’s also a 2019 report from the Geothermal Technologies Office at the US Department of Energy. Robbins describes a central finding of the report this way:

And a 2019 U.S. Department of Energy (DOE) report — GeoVision: Harnessing the Heat Beneath Our Feet — refers to the “enormous untapped potential for geothermal.” By overcoming technical and financial barriers, the report says, generating electricity through geothermal methods could increase 26-fold by 2050, providing 8.5 percent of the United States’ electricity, as well as direct heat.

Geothermal certainly isn’t going to address climate change issues all by itself. But unlike intermittent sources of carbon-free energy like solar and wind, geothermal does have one great advantage: It is always on.

The Confidence of Americans in Institutions

In early July, the Gallup Poll carried out an annual survey in which people are asked about their confidence in various institutions. Here are some of the results, as reported at the Gallup website by Jeffrey M. Jones, “In U.S., Black Confidence in Police Recovers From 2020 Low” (July 14, 2021) and by Megan Brenan, “Americans’ Confidence in Major U.S. Institutions Dips” (July 14, 2021).

This figure shows the share of people who express “A great deal/Quite a lot of confidence” in each of these institutions. The overall percentage of approval is on the far right, and the breakdown by white, black, and Hispanic is shown by the dots.


For me, figures like this lead to lots of inner conversations, and I will spare you most of that. But since I’ve been reading a fair amount about policing lately, here are a few thoughts:

  1. It was interesting to me that while whites were the group expressing the most confidence for the top few items on the list, Hispanics were expressing the most confidence for many of the items lower on the list. The one institution in which blacks expressed the greatest confidence was “Television News.”
  2. The extremely low levels of confidence expressed for Congress, the media, big business and big labor, and other areas are worth some reflection.
  3. The biggest gap between confidence of whites and blacks appears for the police.
  4. The survey asks separately about “The Police” and “The criminal justice system.” The confidence level in the police is far higher for every group than the confidence in the rest of the criminal justice system.
  5. Despite a year of intense controversy over the police, they still rank well above many of these other categories in terms of public confidence.
  6. A separate figure shows the confidence in police for blacks and whites over time. It’s interesting to note that the confidence of blacks in this area was already declining in the 2014-2019 time frame, and that after a very sharp decline in 2020, there has been something of a bounceback in 2021.

Here’s one more figure, this one showing a breakdown of the same categories by political party.

Republicans are vastly more confident in the police, organized religion, the military, and small business. Democrats are vastly more confident in the presidency, newspapers and television news, public schools, and organized labor. The lack of approval for Congress, the criminal justice system, banks, and big business is largely bipartisan.

How Do the Very Wealthy Invest Differently?

It’s hard to get data on the investment patterns of the very wealthy. Many surveys are intended to cover the entire population, from top to bottom, so they don’t offer many data points for looking at the behavior of the top 1% or the top 0.1%. In addition, any survey about economic facts, like personal wealth, is only as good as the memories and willingness-to-disclose of those taking the survey.

Thus, one of the hot topics in economics research is finding ways to access “administrative” data–which refers to data that was collected for other purposes, but in an appropriately anonymous form can be made available to researchers. For example, researchers looking at income inequality have found ways to use appropriately anonymized income tax data. For wealth data, Cynthia Mei Balloch and Julian Richers found a source of such data to address the question of “Asset Allocation and Returns in the Portfolios of the Wealthy” (presented at the 2021 Conference on Research in Income and Wealth held at the Summer Institute of the National Bureau of Economic Research, July 19-20, 2021). Just in case this isn’t yet broad knowledge in the economics community, I’ll also add that the NBER holds many workshops, methods lectures, and mini-conferences during its Summer Institute, and hours of material of top research economists presenting their current work are available at the NBER YouTube page.

Here’s how Balloch and Richers describe their data:

[W]e use anonymized portfolio-level data from Addepar, a leading technology provider for the wealth management industry. Addepar provides an advanced financial reporting and analysis software platform for private wealth advisors. These advisors range in scale from single family offices to large wealth management firms with thousands of individual advisors and client portfolios. Advisors use the platform to get a comprehensive picture of asset holdings and returns across different asset and sub-asset classes, ranging from standard equity and fixed income investments to private equity, real estate and collectibles. While individual investors can access their own account data directly, advisors are the primary users of the software. These include family offices, private wealth advisors at banks, and other advisors. Across 373 managing firms, we observe over 50,000 client portfolios on the platform, each representing an individual household. The range of total holdings ranges from the mid-six figures to multi-billion dollar portfolios, with an average total size of portfolios of 16.8 million (median 1.3 million) at the end of 2019. By this time, there are close to 1 trillion in assets recorded on the platform …

With this data, the authors are observing changes in market values over time; for example, they can see both realized and unrealized capital gains. Because the wealth managers want to know about all aspects of wealth, this data also includes information on wealth held in private businesses and in real estate. Again, this data reflects the actual investments of the wealthy, not what the wealthy say when they fill out surveys about their wealth.

What are some of the main patterns that emerge?

One is that as wealth increases, people are more likely to put a larger chunk of their money in “alternative assets,” which is a category that refers to special funds like private equity funds or hedge funds. A second pattern is that as wealth increases, the average rate of return goes up, but so does the level of risk:

Among investors with less than three million in assets under management, the average return is 4.38 percent, while for investors with more than 100 million in wealth, the average return increases to 6.37 percent. This pattern of increasing return is mirrored in the standard deviation of returns, which rises from 13.9 percent among the least wealthy investors to 19.8 percent at the top of the wealth distribution.
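A crude back-of-the-envelope check using the figures just quoted (this ignores the risk-free rate, so it is not a true Sharpe ratio) suggests that the extra return is roughly proportional to the extra risk:

```python
# Crude risk/return check using the figures quoted above (ignoring the
# risk-free rate, so this is not a true Sharpe ratio).

low_wealth_return, low_wealth_vol = 4.38, 13.9    # under $3 million
high_wealth_return, high_wealth_vol = 6.37, 19.8  # over $100 million

print(f"Return per unit of volatility, <$3m:   "
      f"{low_wealth_return / low_wealth_vol:.2f}")    # 0.32
print(f"Return per unit of volatility, >$100m: "
      f"{high_wealth_return / high_wealth_vol:.2f}")  # 0.32
```

On this crude measure, the higher average returns of the wealthiest investors are roughly proportional to the extra volatility they bear.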

Among other kinds of investments–bonds, stocks, mutual funds, exchange-traded funds–the returns on assets for the wealthy are basically the same as they are for everyone else, after adjusting for the risk of each kind of investment. However, the returns that the ultra-wealthy get from hedge funds and private equity funds are substantially higher than the returns that the less-wealthy get from these categories of investments. This pattern suggests that the ultra-wealthy have access to some combination of better money managers and better investment opportunities than the rest of us.

Those familiar with how colleges and universities have invested their endowments in the last few decades will recognize this pattern (for discussion, see “Some Snapshots of University Endowments,” July 22, 2019). The big kids like Yale, Harvard, and others crowded into various categories of alternative investments back in the 1990s, and have made outsized returns in doing so. Indeed, the enormous endowments that the wealthiest universities have built up are just as much (or even more) due to canny investment teams as to big donors. However, universities and colleges with smaller endowments that attempted to follow the same pattern have generally not been as successful.

Of course, the follow-up question is whether there might be a way to give smaller-wealth investors a chance to benefit from these higher-return alternative investments.

What’s Stirring in Prediction Markets?

A number of financial markets let investors make predictions about the future path of prices and risks: for example, those who buy a stock are thinking that the price will rise. There are also “futures” markets, which allow someone to promise to deliver a certain item at a certain price at a given date in the future. Some futures markets are based on financial prices like the price of the Standard & Poor’s stock market index; others are based on prices of physical items like oil, gold, wheat, soybeans, and many others.

When economists refer to a “prediction market,” they have in mind a market where instead of buying and selling based on expectations of future prices, the buying and selling happens based on expectations of future events. One of the best-known examples is the Iowa Electronic Markets, where you can place small-sized bets on the outcome of elections and related events. Rising or falling prices in this market can then be used as a measure of the likelihood of who will win the election.
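For intuition, here is a minimal sketch, with hypothetical numbers, of how such a price maps into a probability estimate, and what it means to back a different belief with cash:

```python
# A stylized prediction-market contract pays $1 if the event occurs,
# $0 otherwise (hypothetical numbers throughout).

price = 0.30          # current market price of the contract
implied_prob = price  # with a $1 payout, the price is the implied probability
print(f"Market-implied probability: {implied_prob:.0%}")  # 30%

# If your own probability estimate is higher than the market's, buying
# the contract has positive expected value under your belief:
my_prob = 0.50
expected_value = my_prob * 1.00 - price  # expected payout minus cost
print(f"Expected value per contract: ${expected_value:.2f}")  # $0.20
```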

But back in the early 2000s, the idea of making more widespread use of prediction markets took a serious public relations hit. Justin Wolfers and Eric Zitzewitz tell the story in their article “Prediction Markets” in the Spring 2004 issue of the Journal of Economic Perspectives (where I work as Managing Editor). They wrote:

In July 2003, press reports began to surface of a project within the Defense Advanced Research Projects Agency (DARPA), a research think tank within the Department of Defense, to establish a Policy Analysis Market that would allow trading in various forms of geopolitical risk. Proposed contracts were based on indices of economic health, civil stability, military disposition, conflict indicators and potentially even specific events. For example, contracts might have been based on questions like “How fast will the non-oil output of Egypt grow next year?” or “Will the U.S. military withdraw from country A in two years or less?” Moreover, the exchange would have offered combinations of contracts, perhaps combining an economic event and a political event. The concept was to discover whether trading in such contracts could help to predict future events and how connections between events were perceived. However, a political uproar followed. Critics savaged DARPA for proposing “terrorism futures,” and rather than spend political capital defending a tiny program, the proposal was dropped.

After that experience, while prediction markets have continued to exist, they have mainly been in small and limited forms. For example, the Iowa Electronic Markets is limited in size as an experimental market designed specifically for research and teaching purposes. It has become fairly common for big companies like Google and Ford to run “internal” prediction markets, where those inside the company can place bets on issues like whether project deadlines or sales targets will be met; it often turns out that the feedback from the internal market is a useful corrective to the promises from managers that everything is going just fine. The Hollywood Stock Exchange lets you place a bet on what the sales totals of a movie will be in the weeks after it is released. Of course, Americans can also bet on various outcomes using betting markets in other countries–or just watch those markets to see what messages they are sending.

But now there are signs that US prediction markets never completely went away, and may be ready to rise again. Mary Brooks and Paul Rosenzweig tell the story in “Let’s Bet on the Next Big Policy Crisis—No, Really” (Lawfare blog, July 13, 2021). They point to some new academic experimentation beyond the long-running Iowa Electronic Markets:

More recently, Georgetown University built out its own crowd-forecasting platform—which is not strictly a prediction market but rather a way of surveying and pooling expert opinions—specifically for geopolitical futures. Similarly, Metaculus offers a platform for a quasi-prediction market, in which the currency of exchange is prestige points, and anyone can submit a question for inclusion in the market.

They point out that use of internal corporate prediction markets has been rising, and indeed, there is a market for those with top-secret clearance in the US intelligence community:

[T]here is significant demand for internal corporate prediction markets and crowd-forecasting. Google, Ford, Yahoo, Hewlett-Packard, Eli Lilly and a number of other prominent corporations have operated or continue to operate a corporate market. Some of their questions may delve into geopolitics, but in most cases employees bet on subjects such as whether deadlines will be met, what products will take off and what earnings statements will be. …

For example, in 2010, the intelligence community started a prediction market for top-secret-cleared government employees on its classified networks. From 2011 to 2015, the Intelligence Advanced Research Projects Activity (IARPA)—the intelligence-minded sister of DARPA—ran the Aggregative Contingent Estimation (ACE). ACE was a project designed to “dramatically enhance the accuracy, precision, and timeliness of intelligence forecasts … [by means of] techniques that elicit, weight, and combine the judgments of many intelligence analysts.” Today, IARPA still runs the Hybrid Forecasting Competition, which “develop[s] and test[s] hybrid geopolitical forecasting systems.”

But perhaps most interesting, there is now a company called Kalshi, approved by US government regulators, which will allow trading on the outcome of events. Brooks and Rosenzweig write:

Just this past week, a prediction market that operates as a true financial exchange opened its digital doors. Kalshi—a San Francisco-based startup currently operating in beta—is the first fully regulated (CFTC-approved) prediction market. Because Kalshi is regulated, more significant amounts of money can be wagered than in many other markets, enabling them to build out a new asset class of events futures. The implications for this are obvious: An asset class like this could serve as an alternative or a supplement to more traditional insurance, allowing companies and individuals to hedge against crop failures, cyberattacks or floods.

It’s of course easy to raise concerns about prediction markets. Will they allow some people to benefit when an unpleasant event occurs? Yes, but so do any number of completely legal investments one can make in other financial markets (like those related to stock prices of insurance companies). Can they be gamed by investors? While a group of investors can certainly drive the price in one direction or another, there is the ultimate question of whether the event actually happens or not. Those who seek to drive the prediction market price to strange places need to be prepared to lose money.

No one says prediction markets are perfect: in sports betting, for example, the favorite doesn’t always win. But prediction markets provide a way of bringing together inputs of information and belief from a wide variety of people, rather than relying on other imperfect methods of prediction like listening to insider experts, outside experts, or polling data. If you think the chance of a future event embodied in the prediction market prices is wrong, that’s fine; are you willing to back your belief with actual cash? If you feel queasy about doing so, perhaps you should reconsider how strongly you hold that belief.

Interview with Daron Acemoglu: Technology and the Shape of Growth

Michael Chui and Anna Bernasek of the McKinsey Global Institute have a half-hour interview with Daron Acemoglu, “Forward Thinking on technology and political economy with Daron Acemoglu” (July 14, 2021; audio and an edited transcript available). Acemoglu has focused for decades on the idea of “directed technological change”–that is, the idea that the direction of technology is not a random event determined by scientists, but is to some extent a response to the incentives of what areas are being investigated by researchers and the incentives for firms and entrepreneurs in applying new ideas in the economy.

In Acemoglu’s view, economists of the past too often treated “technology” as a general all-purpose ingredient that tended to raise productivity of workers. In contrast, Acemoglu points out that technology often has quite particular effects. He says:

And if you look at the way that economists think about technology, it’s this latent variable that makes you just more productive. But very few technologies actually do that. Electricity didn’t make workers more productive. It made some functions in factories more feasible, and some few items more productive. A hammer doesn’t make you more productive in everything. It makes you just more productive in one single, simple task—hammering a nail. And many technologies don’t even do that. The example of spinning and weaving machinery that I gave, or the factory system, or, more recently, databases, software, robots, numerically controlled machines, they are mostly about replacing workers in certain tasks that they used to perform. …

It may benefit some workers more than others. It may well be that computers augment educated workers more than high school dropouts, so inequality can increase. But at the end you shouldn’t see the high school dropouts lose out. Their real wages shouldn’t decline. And the real wages of workers shouldn’t decline. But, in fact, one of the striking but very robust features of the last 40 years of economic development in the US and the UK has been that many groups, especially low-education or middle-education men, have actually seen their earnings fall, some groups by as much as 25 percent, in real terms, since 1980. Phenomenal. This isn’t the American dream.

In the traditional economics approach, this is a nuisance that we often sweep under the carpet. … [I]t is something that doesn’t really fit into this technology as augmenting framework. But when technology, at least in part, is about automation, replacing, displacing workers from their tasks, then this happens quite often. You can have productivity improvements—capital benefits, firms benefit, but workers, especially some types of workers, all workers overall can lose out in real terms. …

[O]nce you go to this micro level, then the direction of technology, the future of technology looked at through the perspective of what type of technologies we’re going to build on, that becomes much richer and much more interesting. It’s not just whether we’re going to increase the productivity of skilled workers versus unskilled workers, both of which benefit all of them since they are complementary. It’s more like, are we going to completely give up on unskilled workers? Are we going to try to replace them? Are we going to try to replace humans? Are we going to create new tasks for humans? How are you going to use the AI platform? All of these questions about the direction of technology become much more alive, and then also the productivity implications become much more interesting.

Acemoglu points out that from the 1950s through the 1970s, the US economy had a high rate of technological change and productivity growth with a pretty stable distribution of income. But since the 1980s, technological change and productivity growth have been accompanied by a more unequal distribution of income. What changed? Acemoglu argues that the overall effects of technological change will be determined by factors like whether it gives rise to new sectors of the economy with opportunities for displaced workers of the present and future workers, and whether the technologies generate large gains in productivity (think of large-scale mechanization of agriculture and how it raised output per acre) or if it just allows replacing workers with machines. In Acemoglu’s view, too much of the innovation surrounding modern automation, robotics and artificial intelligence is about replacing existing workers, rather than empowering new industries.

The details of an appropriate policy response here are hard to enunciate, and Acemoglu (wisely) shies away from offering granular recommendations. But he does offer these general thoughts:

One is we have to free ourselves from the excessive obsession on automation. It is true in the area of AI. It’s true in other areas, too. [In] our current business community, for a variety of reasons, some of it is cost cutting, some of it is because where the technology leaders in Silicon Valley have sort of set the agenda, some of it is because government policies are just too focused on automating everything.

Instead, we have to come back to a world in which we put as much effort in increasing human productivity, both in the tasks that they already produce, but also creating new tasks in entertainment, in healthcare. There are so many new things that we can do, especially with AI, but some of it with just our existing technologies, some of it with virtual reality or augmented reality. There are many, many things ranging from judgment, social skills, flexibility, creativity, that humans are so much better at than machines.

But we’re not empowering them right now. That’s the first leg. That second leg is that we also have to pull back from using AI as a method of control. And again, that’s about how we use the AI technology. Do we use it to empower individuals? To be better communicators, better masters of their own choices and data? Be able to sort of understand the veracity or the liability of different types of information? Or do we develop these tools in the hands of platforms so that the platforms themselves are doing all of that thinking and all of that direction for the individuals? I think that those two are very different futures as well.

For a more detailed version of Acemoglu’s argument, a useful starting point is his essay with Pascual Restrepo, “Automation and New Tasks: How Technology Displaces and Reinstates Labor,” in the Spring 2019 issue of the Journal of Economic Perspectives (where I work as Managing Editor). Basically, they suggest a framework in which automation can have three possible effects on the tasks that are involved in doing a job: a displacement effect, when automation replaces a task previously done by a worker; a productivity effect in which the higher productivity from automation taking over certain tasks leads to more buying power in the economy, creating jobs in other sectors; and a reinstatement effect, when new technology reshuffles the production process in a way that leads to new tasks that will be done by labor. They then apply this framework to US data.
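As a stylized illustration of how the three effects combine in that framework (with made-up numbers, purely illustrative, not drawn from their US data):

```python
# Stylized version of the Acemoglu-Restrepo accounting (hypothetical
# numbers): the net change in labor demand is the sum of three effects.

displacement = -3.0   # % change: machines take over tasks workers did
productivity = +1.5   # % change: cheaper output raises demand economy-wide
reinstatement = +1.0  # % change: new technology creates brand-new tasks

net_change = displacement + productivity + reinstatement
print(f"Net change in labor demand: {net_change:+.1f}%")  # -0.5%

# Whether automation hurts workers overall depends on whether the
# productivity and reinstatement effects outweigh displacement.
```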

That JEP symposium includes three other papers as well.

Where is Behavioral Economics Now?

Behavioral economics combines insights from psychology with the study of economic behavior. A usual starting point for models of economic actors is that while they will sometimes make wrong decisions, they will not make the same wrong decisions over and over. To put it another way, they will continually be trying to avoid what they now perceive as the errors of the past, even while committing a range of new mistakes.

But psychological research suggests that in certain settings, many people will make the same mistake over and over. For example, many people have personal or economic habits (say, exercise, or eating healthier, or quitting smoking, or saving more) that they would like to change. They know with some part of their mind that if they don’t make a change, they are likely to regret it in the future. Nonetheless, they keep deciding they will start the desired change tomorrow, rather than today. There are many of these biases that, while one can learn to overcome them, seem built into cognition. People often do a poor job of thinking about risks that have a low probability of happening. Behavioral economics looks at the outcome of economic situations where some or many people have these biases.

For those who want to learn what behavioral economics is all about, a good starting point is The Behavioral Economics Guide 2021, edited by Alain Samson. It’s from the behavioraleconomics.com website, which serves as an online hub for people interested in the topic. I recommend the volume for several purposes.

There’s a 30+ page glossary of behavioral science concepts, for those who would like a little help with the lingo, starting with “action bias” and ending with the “zero price effect.” Here’s “action bias:”

Some core ideas in behavioral economics focus on people’s propensity to do nothing, as evident in default bias and status quo bias. Inaction may be due to a number of factors, including inertia or anticipated regret. However, sometimes people have an impulse to act in order to gain a sense of control over a situation and eliminate a problem. This has been termed the action bias (Patt & Zeckhauser, 2000). For example, a person may opt for a medical treatment rather than a no-treatment alternative, even though clinical trials have not supported the treatment’s effectiveness. Action bias is particularly likely to occur if we do something for others or others expect us to act (see social norm), as illustrated by the tendency for soccer goal keepers to jump to left or right on penalty kicks, even though statistically they would be better off if they just stayed in the middle of the goal (Bar-Eli et al., 2007). Action bias may also be more likely among overconfident individuals or if a person has experienced prior negative outcomes (Zeelenberg et al., 2002), where subsequent inaction would be a failure to do something to improve the situation.

Here’s the definition of “zero price effect:”

The zero price effect suggests that traditional cost-benefits models cannot account for the psychological effect of getting something for free. A linear model assumes that changes in cost are the same at all price levels and benefits stay the same. As a result, a decrease in price will make a good equally more or less attractive at all price points. The zero price model, on the other hand, suggests that there will be an increase in a good’s intrinsic value when the price is reduced to zero (Shampanier et al., 2007). Free goods have extra pulling power, as a reduction in price from $1 to zero is more powerful than a reduction from $2 to $1. This is particularly true for hedonic products—things that give us pleasure or enjoyment (e.g. Hossain & Saini, 2015). A core psychological explanation for the zero price effect has been the affect heuristic, whereby options that have no downside (no cost) trigger a more positive affective response.

If you have a drop of social scientist blood in your veins, descriptions like this will start your pulse pounding. Do these effects really exist? In what context? How would you measure them? Are these effects intertwined with a different or perhaps broader effect? If you need concrete examples, the bulk of the report is 15 short and readable summaries of recent studies in the area, with examples concerning prevention of gender-based violence, banking and insurance, sustainable agriculture, career coaching, and more.
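As a stylized illustration of the zero price effect, with toy numbers of my own, here is how it departs from a linear cost-benefit model:

```python
# Stylized contrast (toy numbers) between a linear cost-benefit model
# and the zero price effect described above.

def linear_appeal(benefit, price):
    """Linear model: every $1 price cut raises appeal by the same amount."""
    return benefit - price

def zero_price_appeal(benefit, price, free_bonus=0.5):
    """Zero-price model: a free good gets an extra affective boost."""
    return benefit - price + (free_bonus if price == 0 else 0)

benefit = 1.5
for price in (2.0, 1.0, 0.0):
    print(f"price ${price:.2f}: linear {linear_appeal(benefit, price):+.2f}, "
          f"zero-price {zero_price_appeal(benefit, price):+.2f}")

# In the linear model, the $1->$0 cut is worth exactly as much as $2->$1;
# in the zero-price model, the cut to free delivers a discrete extra jump.
```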

One theme that emerges from several papers in the volume is that behavioral economics effects are often deeply rooted in a particular context: that is, you can’t just grab an item from the glossary, plug it into your life or business or organization, and assume you know how it will work. For example, Florian Bauer and Manuel Wätjen write about their research in “Tired of Behavioral Economics? How to Prevent the Hype Around Behavioral Economics From Turning Into Disillusionment.” They write:

Applying the behavioral economics effects found in academic experiments to marketing is becoming more and more popular. However, there is increasing evidence that copy-and-pasting academic effects does not achieve the desired effects in real life. This article aims to show that this is not because customers are becoming wise to nudges or that behavioral economics does not work at all, but because the application of behavioral economics typically ignores the contextual aspects of the actual decision to be influenced. Herein, we present a framework that considers these aspects and helps develop more effective behavioral interventions in marketing, pricing, and sales.

John A. List makes an argument that is similar in tone but broader in his introduction to the volume, “The Voltage Effect in Behavioral Economics.” List points out that it is fairly common for someone to latch on to an academic study in behavioral economics, but then be disappointed when it doesn’t seem to “scale up” to a real-world context. List writes:

Indeed, most of us think that scalable ideas have some ‘silver bullet’ feature, i.e., some quality that bestows a ‘can’t miss’ appeal. That kind of thinking is fundamentally wrong. There is no single quality that distinguishes ideas that have the potential to succeed at scale with those that do not do so. In this manner, moving from an initial research study to one that will have an attractive benefit cost profile at scale is much more complex than most imagine. And, in most cases, scaling produces a voltage drop—the original BE [behavioral economics] insights lose considerable voltage when scaled. The problem, ex ante, is determining whether (and why) that voltage drop will occur. … What this lesson inherently means is that scaling, in the end, is a weakest link problem: the endeavor is only as strong as the weakest link in the chain.

In other words, the connection from an academic study to a real-world application involves a number of links in a chain. List lays out five of them:

  1. Inference. Perhaps the academic study you looked at was a “false positive”–that is, the result won’t hold up in other similar studies. List suggests that before believing an effect is real, one should look for “three or four well-powered independent replications of the original finding.”
  2. Representativeness of the population. If a study was done on a group of college sophomores, or retirees, or people with a certain medical condition, you can’t necessarily assume that it will apply well to other groups, like working-age adults or high school dropouts.
  3. Representativeness of the situation. For example, a small-scale program with a small group of dedicated and trained participants may not scale up well to a larger general population group that is less dedicated and less well-trained.
  4. Spillovers and general equilibrium effects of scaling. Say that I start a program in a certain area to teach a highly desirable skill to some workers. Those workers get much higher pay as a result. So then I expand the program very substantially to teach this skill to many more workers. The original group did well in part because it was a small group, and the skill was still scarce. But at some point, as the skill becomes common in that area, the rewards will be lower.
  5. Marginal cost considerations. As a program expands, there are two possibilities. One is that there are economies of scale: that is, as the program covers more people, average cost per person falls. For example, a web-based program that can be expanded to cover many more people might have this property. However, the other possibility is that as the program covers more people, average cost per person rises. This can happen if the program needs some specific and particular skills that may be hard to get: for example, perhaps I can train a small group of teachers who volunteer to be part of my new specific curriculum, but as I try to train bigger and bigger groups, it gets harder (a toy illustration follows this list).
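Here is that toy illustration of the fifth link, with hypothetical cost numbers:

```python
# Toy cost curves (hypothetical numbers) for List's fifth link: average
# cost per person can fall or rise as a program scales.

def avg_cost_scale_economies(n, fixed=100_000, variable=5):
    """E.g., a web-based program: big fixed cost, tiny per-person cost."""
    return (fixed + variable * n) / n

def avg_cost_scale_diseconomies(n, base=50, scarcity_penalty=0.001):
    """E.g., a program needing scarce trainers: per-person cost creeps up."""
    return base + scarcity_penalty * n

for n in (1_000, 10_000, 100_000):
    print(n, round(avg_cost_scale_economies(n)),
          round(avg_cost_scale_diseconomies(n)))

# Average cost falls from $105 toward $6 per person in the first case,
# but rises from $51 to $150 per person in the second.
```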

List draws on Tolstoy to summarize his thinking. At the beginning of Anna Karenina, Tolstoy famously wrote: “All happy families are alike; each unhappy family is unhappy in its own way.” List echoes: “[A]ll successfully scaled ideas are alike; all unsuccessfully scaled ideas fail in their own way.” I would just add that from this perspective, one of the advantages of behavioral economics is that it helps to drag social science generalities down into the realness of the particular.

As R&D Goes Global, How Should the US Respond?

There was a time, about a half-century ago, when a majority of the world’s research and development happened within the borders of the United States. Researchers, inventors, and firms didn’t move much. What was invented in the US had a tendency to remain, at least for a while, as domestic US economic activity. Those times are now behind us.

Bruce R. Guile and Caroline S. Wagner raise this issue in “A New S&T Policy for a New Global Reality” (Issues in Science and Technology, Summer 2021). Their essay is among the first in a promised series of essays on “The Next 75 Years of Science Policy.” They write:

In the past, countries depended on the low mobility of researchers, inventors, and entrepreneurs to link R&D to innovation and innovation to wealth creation. When researchers were less mobile and less engaged in close collaboration with peers in other nations, new knowledge tended to be retained by institutions and the countries that housed them. From a national perspective, this arrangement had the benefit of aligning intellectual property ownership, early applications, and company growth with the location of the R&D activity. … 

At the same time that global collaboration has become ubiquitous, the rest of the world has begun doing more research. During the 1960s, US public and private R&D investment accounted for almost 70% of the global total. Today, even though US spending has increased, US R&D is less than 30% of the world’s total. Twenty nations now match or exceed US R&D intensity, with public and private R&D spending in these countries near or in excess of 2% of gross domestic product per year. In absolute dollars, China spends approximately the same amount on R&D as the United States. Furthermore, according to figures from the Organisation for Economic Co-operation and Development, China has nearly 2 million full-time equivalent researchers now, compared with the United States’ 1.5 million.

The common pattern is now that research and development happens in many places around the world, often coordinated by large companies, but also through the movement of academic and corporate researchers between places and organizations.

Simultaneously, US multinational corporations have established global networks of research laboratories, research university relationships, and talent recruitment efforts that partially decouple them from the science and engineering enterprise in the United States. Virtually every technologically advanced manufactured good is created by a production process (supply chain) that crosses national borders several if not dozens of times and draws on innovations from many countries.

Being first with new scientific knowledge or having a pioneering innovative company based in the United States does not guarantee success in domestic industry. Nor does it guarantee that the nation will capture substantial economic value from the new knowledge. In a globalized knowledge network, knowledge spreads so quickly and widely that being in “first place” is a notional distinction at best. New scientific and engineering knowledge and innovation cross US borders in both directions—as part of commercial exchanges and collaborations—every day. Economic value cannot be captured by erecting barriers to the flow of knowledge or trade as the United States needs new knowledge and innovation from outside its borders as much as it needs robust US-based scientific and engineering capability.

Thus, the US economy is confronted with two realities. One is that a rising standard of living in the future, which in turn would make it much easier to address our ongoing economic and social problems than the alternative of a stagnant standard of living, depends on the increases in productivity that grow in substantial part from new technology. But the other reality is that relying just on US-created R&D is going to be a poor strategy, because US-created R&D (like all R&D) is flowing much more easily around the world than ever before. What are the implications of this situation?

Guile and Wagner argue that it still very much matters for the US to have cutting-edge domestic R&D capabilities, but in the modern world, this means being an attractive location for researchers from around the world–and adjusting immigration policies accordingly. It also means systematic and expanded “US government support for tracking and monitoring research activity and output, regardless of where it occurs, and support for dissemination of that information to US-based companies and centers of research.” Finally, it means paying more attention to the ingredients needed for the US economy to capitalize on new R&D, which implies a focus on policies for a workforce with the necessary skills, as well as the design of tax, investment, and trade policies.

For a couple of earlier posts on global R&D, with additional evidence and discussion of these themes, see:

“Global R&D: The Stagnant US Position” (February 7, 2020)

“Global R&D: An Overview” (February 11, 2013)

The Inflation Spurt: The CEA on Historical Parallels

Reporting on the Consumer Price Index numbers for June looked like this:

  • Consumer prices increased 5.4% in June from a year earlier, the biggest monthly gain since August 2008.
  • Excluding food and energy, inflation increased 4.5%, the largest move since September 1991.
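As a reminder of the arithmetic behind such headlines, a year-over-year inflation figure is just the percentage change in the price index level from twelve months earlier (illustrative index values below, not the official ones):

```python
# How a year-over-year inflation figure like "5.4% in June" is computed
# from index levels (illustrative index values).

cpi_june_2020 = 257.8  # index level a year earlier
cpi_june_2021 = 271.7  # index level in the current month

yoy_inflation = (cpi_june_2021 / cpi_june_2020 - 1) * 100
print(f"Year-over-year inflation: {yoy_inflation:.1f}%")  # ~5.4%
```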

Is the spurt of inflation of the last few months likely to be sustained, or to fade? Cecilia Rouse, chair of the White House Council of Economic Advisers, along with Jeffery Zhang and Ernie Tedeschi, last week offered a short rumination on the “Historical Parallels to Today’s Inflationary Episode” at the CEA website (July 6, 2021).

Here’s a figure showing two prominent measures of inflation: the Consumer Price Index and the Personal Consumption Expenditures Index. The CPI is better-known to the public, but the PCE index is the one on which the Federal Reserve actually focuses (for my quick discussion of the differences between them, see here). For practical purposes, they move together fairly closely over time. On the far right, you can see the recent jump of inflation up to an annual rate of about 5%.

As the authors note, this figure shows six previous periods in which CPI inflation topped 5%: 1946–48, 1950–51, 1969–71, 1973–82, 1991, and 2008. For the current episode, which parallel is most relevant? They argue that recent inflation spikes have been linked to rises in global oil prices, which are not happening now, and that the inflation jump of the late 1960s was linked to an economy running white-hot with rapid growth and low unemployment, which isn’t the current problem, either.

The authors argue that the inflation of the 1970s was mainly the result of higher oil prices, as well, and while they are only writing a short piece, this seems to me like a significant oversimplification. There is sound evidence that in the lead-up to the 1972 election, the Federal Reserve under Arthur Burns responded to political pressure from President Richard Nixon by keeping interest rates very low in what was already an inflationary environment. There were also national wage and price controls first imposed and then removed by Nixon, and a later attempt to impose credit controls under Jimmy Carter. Of course, the 1970s spikes in oil prices didn’t help, either. But the inflationary period of the 1970s had a number of contributing factors.

Ultimately, Rouse, Zhang, and Tedeschi argue that the most relevant parallel here is the post-WWII inflation episode. There are of course many differences. But as they point out, both the World War II period and the pandemic were times when savings rose substantially, suggesting the possibility of pent-up demand ready to surge into markets. Both episodes were also a time when economic dislocations meant that certain goods were not readily available; for example, the pandemic has been accompanied by shortages of durable goods like cars, in part because “manufacturing capabilities were temporarily shut down or reduced to avoid COVID contagion.”

It’s worth noting that in this short essay, Rouse, Zhang, and Tedeschi do not discuss the large and ongoing rise in federal debt, nor the role of the Federal Reserve in financing that debt, nor the potential for additional inflationary pressures from the existing spending proposals now before Congress. One can certainly sketch out a scenario along these lines where inflationary dangers look more severe. In particular, if consumers and firms expect a certain level of inflation, and then build that inflation rate into their expectations of future increases in prices and wages, then shorter-term inflation could build a longer-lasting momentum.

However, neither the markets nor the professionals seem to believe these scenarios. To look at market expectations of inflation, you can compare two similar types of debt, one of which adjusts for inflation and one of which does not. For example, the US Treasury issues most of its debt in a way that does not adjust for inflation, but it also issues Treasury Inflation Protected Securities (TIPS), which adjust according to changes in the Consumer Price Index. Comparing these two will provide a market-based estimate of expectations of future inflation. Another measure of expected inflation comes from professional forecasters, whose predictions are tabulated in the Livingston Survey carried out by the Federal Reserve Bank of Philadelphia. Here are expectations of inflation from these two measures.

The two measures as presented here are not exactly comparable, because the market-based inflation expectations are for five years ahead, while the Livingston survey is for 10 years ahead. But either way, it looks as if future inflation expectations are in the range of about 2%, which is often taken as a reasonable working definition of “price stability.”
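Here is a minimal sketch of the market-based “breakeven” calculation described above, using illustrative yields rather than actual market quotes:

```python
# The "breakeven" inflation rate is the gap between a nominal Treasury
# yield and the TIPS yield of the same maturity (illustrative yields).

nominal_5yr_yield = 0.8  # % per year, conventional 5-year Treasury
tips_5yr_yield = -1.5    # % per year, 5-year TIPS (a real yield)

breakeven_inflation = nominal_5yr_yield - tips_5yr_yield
print(f"Market-implied expected inflation: "
      f"{breakeven_inflation:.1f}% per year")  # 2.3%
```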

Those interested in some additional history of US inflation in the last 100 years or so might be interested in my post from a few years back, “100 Years of Consumer Price Inflation Data” (April 30, 2014).

Snapshots of Global Wealth

The Credit Suisse Research Institute has published Global Wealth Report 2021, its annual report looking at levels and distribution of wealth around the world (June 2021). Here are a few snapshots and comments from the report.

This figure shows the wealth pyramid. As the report notes, “Our calculations suggest, for example, that a person needed net assets of just USD 7,552 to be among the wealthiest half of world citizens at end-2020. However, USD 129,624 was required to be a member of the top 10% of global wealth holders, and USD 1,055,337 to belong to the top 1%.”


Here’s a summary of the wealth pyramid from the report:

We estimate that 2.9 billion individuals – 55% of all adults in the world – had wealth below USD 10,000 in 2020. The next segment, covering those with wealth in the range of USD 10,000–100,000, has seen the biggest rise in numbers this century, more than trebling in size from 507 million in 2000 to 1.7 billion in mid-2020. This reflects the growing prosperity of emerging economies, especially China, and the expansion of the middle class in the developing world. The average wealth of this group is USD 33,414, slightly less than half the level of average wealth worldwide. Total assets amounting to USD 57.3 trillion provide this segment with considerable economic leverage. …

The upper-middle segment, with wealth ranging from USD 100,000 to USD 1 million, has also expanded significantly this century, from 208 million to 583 million. They currently own net assets totaling USD 163.9 trillion or 39.1% of global wealth, which is nearly four times their share of the adult population. The middle class in developed nations typically belong to this group. Above them, the top tier of high net worth (HNW) individuals (i.e. USD millionaires) remains relatively small in size, but has expanded rapidly in recent years. It now numbers 56 million, or 1.1% of all adults. … HNW adults are increasingly dominant in terms of total wealth ownership and their share of global wealth. The aggregate wealth of HNW adults has grown nearly four-fold from USD 41.5 trillion in 2000 to USD 191.6 trillion in 2020, and their share of global wealth has risen from 35% to 46% over the same period.

What about if we look by region? This next figure is perhaps a little harder to think about. The horizontal axis shows the percentile in the global wealth distribution. The vertical axis shows the location of people in each region in that wealth distribution. The top panel shows the data for 2000; the second panel shows data for 2020, so you can see the shift. For example, back in 2000 the bulk of China’s population was in the 40th–80th percentiles of the global wealth distribution. By 2020, China has a much larger share in the 60th–95th percentiles.

Finally, here’s some data by country, in a way that also shows some interesting patterns and some reasons to be cautious about the data. This table shows the share of people in a given country who are millionaires. Remember, this is a measure of millionaires by wealth, not income. Wealth includes, for example, the savings accumulated in a retirement account and the equity in your home. Thus, this table does not say that 8.8% of Americans had $1 million in income last year. It says that when you add up retirement accounts, real estate equity, and other financial assets, 8.8% of Americans have more than $1 million in wealth. Presumably a disproportionate number of the people in this group are retired, and would have much less than $1 million in annual income.

Two points about the table are worth noting. First, I found it interesting–given how many people I run into who profess to like the income distribution patterns of northern European countries–to see that Sweden and Denmark are much higher on millionaires per capita than, say, Canada, France, Germany, and the United Kingdom. But as a second point, the numbers in this table all measure millionaires in US dollars, and thus require using an exchange rate to convert from the local currency to US dollars. As a result, shifts in the number of millionaires can easily reflect shifts in the US dollar exchange rate, rather than shifts in wealth measured in the local currency.
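To see how much the exchange rate alone can matter, consider a stylized example with hypothetical rates:

```python
# Why exchange rates matter for millionaire counts (hypothetical rates):
# the same Swedish portfolio crosses the USD 1 million line, or not,
# depending on the SEK/USD rate alone.

wealth_sek = 9_000_000  # a fixed portfolio in Swedish kronor

for sek_per_usd in (8.5, 10.0):
    wealth_usd = wealth_sek / sek_per_usd
    status = "millionaire" if wealth_usd >= 1_000_000 else "not a millionaire"
    print(f"At {sek_per_usd} SEK/USD: ${wealth_usd:,.0f} -> {status}")
```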