I sometimes say that one difference between those who have been trained as economists and normal human beings is that economists don’t believe in real people, but instead believe in statistical people. My point is that normal humans tend to reason from examples of particular people: perhaps a person who lost their job, or a stock-picker who recommended buying Amazon back in the 2000s, or someone who received the COVID vaccine but became sick anyway. Let us stipulate that these individual people are real; indeed, they can be interviewed on camera.
But any economist will be reluctant to draw conclusions about causal connections or policy choices related to unemployment or investment strategies or vaccines from individual stories of real people. An economist will want to know about the statistics of all the people who were working, and which ones became unemployed; or the statistics that capture all the investment predictions made by a stock-picker, not just the ones that turned out well; or all the people who were vaccinated and what happened. Any single person being interviewed on camera may or may not represent the broader statistical reality. As the old saying goes, “The plural of ‘anecdote’ is not ‘data.’”
Daniel Simons and Christopher Chabris have developed a more elegant way of making this point, and they offer an overview in “How the Possibility Grid Can Help You Evaluate Evidence Better” (Behavioral Scientist, July 17, 2023).
Consider the example of Mr. Pink Shirt, who recommended buying stock in Amazon and in Tesla years before the prices soared. Let us stipulate that Mr. Pink Shirt did indeed offer this advice. Should you take stock-picking advice from this person? Simons and Chabris offer a possibility grid to evaluate Mr. Pink Shirt.
The upper left-hand corner of the grid holds the information presented to you: that is, Mr. Pink Shirt picked some stock market winners. The gray squares are the information not presented to you: that is, you don’t know what duds he picked, nor what winners he did not pick, nor what duds he did not pick. They write:
To avoid being deceived, we don’t need to know exactly how many stocks are in each box—just thinking about the possible contents of the full grid tells us there is no reason to believe that Mr. Pink Shirt, a guy who made two good picks in fourteen years, is worth paying attention to now. The possibility grid is a universal tool to draw attention to what is absent. It alerts you to think about rates of success rather than stories of successes. Applied to scientific research, the possibility grid reminds us that we can’t evaluate the state of the literature by tallying up only the significant results—we also have to think about the studies that failed or never got published. And it tells us to be wary when someone claims that their intervention will improve your performance or your health if they don’t show that the gains they promise are more likely to occur with than without their product’s help.
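The possibility grid is, in effect, a 2×2 contingency table, and the “rates rather than stories” point can be made concrete with a few lines of code. The counts below are entirely hypothetical, chosen only to illustrate the arithmetic, not drawn from Simons and Chabris: suppose Mr. Pink Shirt made 50 picks over the years, of which 2 soared, while 40 of the 500 stocks he never mentioned soared too.

```python
# Hypothetical possibility grid for a stock-picker: rows are "picked" vs.
# "not picked", columns are "winner" vs. "dud". Only the upper-left cell
# (picked winners) is the story you hear; the rest is usually invisible.
grid = {
    ("picked", "winner"): 2,      # the anecdotes presented to you
    ("picked", "dud"): 48,        # picks that went nowhere
    ("not picked", "winner"): 40, # winners he missed
    ("not picked", "dud"): 460,   # duds he also avoided
}

picked_total = grid[("picked", "winner")] + grid[("picked", "dud")]
all_winners = grid[("picked", "winner")] + grid[("not picked", "winner")]
all_stocks = sum(grid.values())

hit_rate = grid[("picked", "winner")] / picked_total  # success rate of his picks
base_rate = all_winners / all_stocks                  # winners among all stocks

print(f"hit rate among his picks:      {hit_rate:.1%}")   # 4.0%
print(f"base rate of winners overall:  {base_rate:.1%}")  # 7.6%
```

With these illustrative numbers, his picks actually do *worse* than throwing darts at the full list of stocks, which no one could tell from the two success stories alone.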
As the authors point out, when there is a great success story about someone who “just went with their gut” or “just knew what to do” or “just followed their bliss,” you can’t evaluate whether that course of action is useful to follow unless you have information about the rest of the possibility grid. Sometimes people are lucky or unlucky. Sometimes unlikely things do happen: an event that has only a 0.01% chance of happening will occur, on average, once in every 10,000 trials, but you might not want to rely on it happening to you.
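The gap between “it happened to somebody” and “it will happen to me” is easy to quantify. As a sketch, with an assumed pool of 10,000 people each independently facing that 0.01% chance, the probability that it happens to at least one of them is 1 − (1 − p)^n:

```python
p = 0.0001   # a 0.01% chance for any one individual
n = 10_000   # hypothetical number of people independently facing that chance

# Chance it happens to you personally: still just 0.01%.
# Chance it happens to *somebody* in the pool of n people:
p_somebody = 1 - (1 - p) ** n
print(f"{p_somebody:.1%}")  # about 63%
```

So even a 1-in-10,000 event reliably produces vivid anecdotes in a large enough population, which is exactly why the anecdote, by itself, tells you so little.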