Do Success Stories Cause False Beliefs About Success?
Does explicitly acknowledging bias make us less likely to make biased decisions? A new study examining how people justify decisions based on biased data finds that this is not necessarily the case.
- Researchers examined how hearing business success stories skews people's predictions about how successful other startups will be.
- Simply showing outlier examples of success substantially affected participants' beliefs about what makes a company successful.
- This work shows that it is possible to lead people to unwarranted conclusions using manipulation that would easily pass a fact check—and that being aware of the information's bias is not enough to offset these effects.
Dime-a-dozen explanations of what makes companies successful are not new—these narratives have enjoyed a rich history of exploration in both the academic and popular presses. However, the types of success stories that gain traction are fraught with bias. For instance, disproportionate attention is paid to wildly successful “unicorn” companies, while the far greater number of unsuccessful companies often goes unaccounted for. Moreover, attempts to decipher what makes companies successful based on these examples are similarly skewed; explanations often highlight certain shared traits of successful companies while ignoring others, or turn a blind eye to the unsuccessful companies that exhibit the same “successful” traits.
The result? According to the researchers behind “Success stories cause false beliefs about success,” such non-representative sampling and explanatory cherry-picking mean that “almost any feature of interest can appear to be associated with success” as long as there exist at least some examples of such an association.
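To see why, consider a toy simulation (a hypothetical sketch in Python, not the authors' analysis; the 30% dropout rate and 1% success rate below are arbitrary assumptions). Even when a founder trait has no bearing on success, cherry-picking successful examples that share the trait yields a sample in which the trait and success always co-occur:

```python
import random

random.seed(0)

# Toy model: 100,000 hypothetical startups. The "dropout founder" trait
# is assigned independently of success, so by construction it has no
# effect on a company's chances. Both rates are arbitrary assumptions.
N = 100_000
startups = [
    {"dropout": random.random() < 0.30, "success": random.random() < 0.01}
    for _ in range(N)
]

def success_rate(group):
    return sum(s["success"] for s in group) / len(group)

dropouts = [s for s in startups if s["dropout"]]
graduates = [s for s in startups if not s["dropout"]]
print(f"P(success | dropout founder)  = {success_rate(dropouts):.4f}")
print(f"P(success | graduate founder) = {success_rate(graduates):.4f}")
# Both print roughly 0.01: the trait is uninformative.

# Now assemble a "success story" sample the way the narratives do:
# keep only a handful of successful dropout-founded companies.
story_sample = [s for s in dropouts if s["success"]][:5]
print(f"Story sample: {len(story_sample)} successes, 0 failures shown")
# In the cherry-picked sample, the trait co-occurs with success 100% of
# the time, even though it is independent of success in the population.
```

Every cherry-picked example is factually accurate, yet the sample as a whole points to a relationship that does not exist.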
In their new paper published in Judgment and Decision Making, Duncan Watts, Stevens University Professor at the University of Pennsylvania, along with co-authors George Lifchits and Ashton Anderson (University of Toronto) and Daniel Goldstein and Jake Hofman (Microsoft Research), set out to determine whether these plainly biased narratives cause readers to arrive at incorrect inferences about reality—and, if so, whether these effects are large enough to matter.
Testing biased data’s persuasive potential
Using a large-scale experiment, the authors examined how widely read—but clearly one-sided—success narratives affect the choices people make, how confident they are in those choices, and the justifications they provide for them.
Participants were tasked with predicting whether a startup founded by a college graduate or a college dropout was more likely to become a billion-dollar “unicorn” company. Before making their decision, each participant was shown either a set of examples of successful graduate founders, a set of successful dropout founders, or no examples at all, and was required to verify that they understood the underlying bias in any examples shown. They were then asked to bet on either an unnamed graduate founder or an unnamed dropout founder, to indicate how confident they were in their decision, and, optionally, to provide a justification for their bet.
Even though participants acknowledged the bias in the data they were shown, Lifchits et al. found that simply presenting biased examples of success substantially affected participants' beliefs relative to showing no examples at all. Participants who saw examples of graduate founders bet on an unnamed graduate founder 87% of the time, compared to only 32% of participants who were shown examples of dropout founders and 47% of participants shown no data.
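To put those percentages on a common scale, treat the no-data condition as a baseline; a back-of-the-envelope reading (the snippet below uses only the rates quoted above, and the condition labels are ours) shows swings of roughly +40 and −15 percentage points:

```python
# Reported rates of betting on the graduate founder, by condition
# (percentages as quoted above from Lifchits et al.).
bet_graduate = {
    "shown graduate examples": 0.87,
    "shown no data (baseline)": 0.47,
    "shown dropout examples": 0.32,
}

baseline = bet_graduate["shown no data (baseline)"]
for condition, rate in bet_graduate.items():
    shift = (rate - baseline) * 100
    print(f"{condition:25s}: {rate:.0%} bet graduate ({shift:+.0f} pts vs. baseline)")
```

In other words, a few biased examples moved bets by about 40 points in one direction or 15 in the other relative to participants who saw nothing.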
While these numbers suggest that biased narratives can sway individual decisions, they do not necessarily indicate a power to shift beliefs significantly. If participants were generally unsure which founder to choose and were only mildly influenced by the examples they were shown, one would expect them to report low confidence in their decisions.
Interestingly, however, the authors observed the opposite: regardless of which examples participants saw, or whether they saw any at all, the overwhelming majority expressed substantial confidence in their decisions. What's more, 92% of participants provided substantive justifications for their bets, indicating a tendency to spontaneously generate causal explanations—such as college graduate founders being more motivated, or college dropout founders being more creative—to rationalize their decisions even in the absence of supporting evidence.
A threat beyond fake news
Watts and his colleagues find that their research has worrying implications for the information ecosystem surrounding topics such as politics, science, and health, where technically correct but misleadingly presented data can be widely persuasive. Their work shows that it is possible to lead individuals to unwarranted conclusions using manipulation that would easily pass a conventional fact check—and that being aware of bias alone is not enough to offset these effects.
Watts is the director of the Computational Social Science Lab (CSSLab) at Penn. Beyond building on the literature on decision-making and bias, this research dovetails nicely with the CSSLab’s Penn Media Accountability Project (PennMAP), which aims to detect patterns of bias and misinformation in media from across the political spectrum. The authors underscore the need to broaden the study of misinformation to encompass content that is factually correct but significantly biased, a mission PennMAP is pursuing using large-scale, cross-platform media data. Such timely research will help paint a more comprehensive picture of how biased narratives shape individual and collective beliefs, the consequences of irresponsible information distribution, and the importance of media accountability.