2  From anecdote to insight

Anecdotes are great. They are true stories—often intriguing, relatable, and easy to understand. They provide vivid examples that make abstract ideas more concrete and memorable. Whether it’s a personal experience or a captivating story about a successful business leader, anecdotes resonate because they tap into our natural affinity for storytelling. Their simplicity and emotional impact can make them powerful teaching tools.

And importantly, anecdotes are hard to contradict. Take, for example, the argument that smoking can’t be that harmful because your 88-year-old uncle has smoked his entire life and he is still in good health. It’s a tough claim to refute, as it’s a real-life example. However, the problem lies in extrapolating a single, isolated case to draw broader conclusions, which can be misleading.

While anecdotes can be persuasive, their strength is also their weakness. They represent isolated instances, and while it is hard to deny the truth of an individual story, the danger lies in overgeneralizing from it. Anecdotes lack the rigorous analysis and breadth of evidence necessary to draw reliable conclusions. They do not account for the full complexity of most situations, especially in business, where decisions are influenced by many interconnected factors.

In business, relying too heavily on anecdotes can lead to misguided conclusions. For example, a company might base its strategy on the success story of a famous entrepreneur without considering the countless failed ventures that didn’t make the headlines. This is known as survivorship bias, where the successes are visible, but the failures are hidden.

The challenge, then, is to take anecdotes and go beyond them. Instead of drawing direct conclusions, use them as starting points for deeper investigation. They can provide valuable hypotheses but need to be supported by data, rigorous analysis, and an understanding of the underlying principles at play. Anecdotes can inspire curiosity and point us in interesting directions, but they should be tested against a larger body of evidence to ensure that the insights we draw are reliable and applicable in a broader context.
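To make survivorship bias tangible, here is a minimal simulation sketch (all numbers, including the success and headline probabilities, are invented for illustration): the strategy a venture picks has no effect at all on its chance of success, yet because bold successes make headlines more often, the visible "winners" suggest that boldness pays.

```python
import random

random.seed(42)

N = 100_000
bold_successes = cautious_successes = 0
bold_headlines = cautious_headlines = 0

for _ in range(N):
    bold = random.random() < 0.5          # half the ventures pick a "bold" strategy
    # Assumption: the strategy has NO effect on the chance of success ...
    success = random.random() < 0.10
    if not success:
        continue                          # failures vanish from view
    # ... but bold successes are far more newsworthy than cautious ones.
    headline = random.random() < (0.80 if bold else 0.10)
    if bold:
        bold_successes += 1
        bold_headlines += headline
    else:
        cautious_successes += 1
        cautious_headlines += headline

share_bold_success = bold_successes / (bold_successes + cautious_successes)
share_bold_headline = bold_headlines / (bold_headlines + cautious_headlines)

print(f"bold share among all successes:    {share_bold_success:.2f}")   # ~0.50
print(f"bold share among headline-makers:  {share_bold_headline:.2f}")  # ~0.89
```

If we learn only from the ventures we read about, boldness looks like the dominant trait of winners, even though, by construction, it changed nothing. That is the hidden-failures problem in miniature.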

Exercise 2.1 Survivorship bias

Read “How Successful Leaders Think” by Roger Martin (2007) and the chapter “Identification” of “Quantitative Methods” by Huber (2024).

Here is a summary of Martin (2007) taken from the Harvard Business Review Store:

In search of lessons to apply in our own careers, we often try to emulate what effective leaders do. Roger Martin says this focus is misplaced, because moves that work in one context may make little sense in another. A more productive, though more difficult, approach is to look at how such leaders think. After extensive interviews with more than 50 of them, the author discovered that most are integrative thinkers–that is, they can hold in their heads two opposing ideas at once and then come up with a new idea that contains elements of each but is superior to both. Martin argues that this process of consideration and synthesis (rather than superior strategy or faultless execution) is the hallmark of exceptional businesses and the people who run them. To support his point, he examines how integrative thinkers approach the four stages of decision making to craft superior solutions. First, when determining which features of a problem are salient, they go beyond those that are obviously relevant. Second, they consider multidirectional and nonlinear relationships, not just linear ones. Third, they see the whole problem and how the parts fit together. Fourth, they creatively resolve the tensions between opposing ideas and generate new alternatives. According to the author, integrative thinking is an ability everyone can hone. He points to several examples of business leaders who have done so, such as Bob Young, co-founder and former CEO of Red Hat, the dominant distributor of Linux open-source software. Young recognized from the beginning that he didn’t have to choose between the two prevailing software business models. Inspired by both, he forged an innovative third way, creating a service offering for corporate customers that placed Red Hat on a path to tremendous success.

  1. Discuss the concepts introduced by Martin (2007) critically:
  • Does he provide evidence that his ideas work?
  • Is there proof that his suggestions can yield success?
  • Is there evidence that his ideas are superior to alternative courses of action?
  • What can we learn from the article?
  • Does his argumentation meet the highest academic standards?
  • What is his identification strategy with respect to the causes of effects and the effects of causes?
  • Martin (2007, p. 81) speculates:

“At some point, integrative thinking will no longer be just a tacit skill (cultivated knowingly or not) in the heads of a select few.”

  2. If teachers in business schools had followed his idea that integrative thinkers are more successful, then, almost 20 years later, this should be the dominant way for business leaders to think. Is that the case? And if so, can you still gain a competitive advantage by thinking that way?
Figure 2.1: Distribution of bullet holes in returned aircraft

Source: Martin Grandjean (vector), McGeddon (picture), Cameron Moll (concept), CC BY-SA 4.0

  3. Figure 2.1 visualizes the distribution of bullet holes in aircraft that returned from combat in World War II. Imagine you are an aircraft engineer. What does this picture teach you?

  4. Inform yourself about the concept of survivorship bias explained in Wikipedia (2024).

  5. In Martin (2007), the author provides an example of a successful company to support his management ideas. Discuss whether this article relates to survivorship bias.

Martin, R. (2007). How successful leaders think. Harvard Business Review, 85(6), 71–81. https://hbr.org/2007/06/how-successful-leaders-think
Huber, S. (2024). Quantitative methods: Lecture notes. https://hubchev.github.io/qm/

Drawing insights from anecdotes is challenging, especially in business, for several reasons:

  1. Limited sample size: Anecdotes are usually individual cases that do not reflect the full extent of a situation. In business, decisions often require data from large, diverse populations to ensure reliability. Relying on a single story or experience can lead to conclusions that are not universally valid.

  2. Bias and subjectivity: Anecdotes are often colored by personal perspectives, emotions, or particular circumstances. Moreover, they tend to highlight success stories while ignoring failures, an example of the so-called survivorship bias.

  3. Lack of context and the inability to generalize: Anecdotes often lack the broader context necessary to understand the underlying factors of a situation. Business problems tend to be complex and influenced by numerous variables such as market trends, consumer behavior and external economic conditions. Many of these variables change significantly over time. Without this context, an anecdote can oversimplify the problem and lead to incorrect decisions. Anecdotes are usually specific to a particular time, place or set of circumstances. They may not apply to different markets, industries or economic environments, which limits their usefulness for general decision-making. For example, learning only from the tremendous success of figures like Steve Jobs while ignoring the countless people who failed is like learning how to live a long life by talking to a single 90-year-old person. If that person happens to be obese and a heavy smoker, it doesn’t mean those behaviors contributed to their longevity.

  4. Lack of data rigor: Anecdotes lack the rigor and precision of data-driven analysis, in which the empirical model that makes it possible to identify causality and to measure the effects of causes is formally described.
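The first point, limited sample size, can be illustrated with a short simulation (the revenue distribution and all figures are invented assumptions, not data): averages based on a handful of observations wobble far more across repeated "studies" than averages based on hundreds, which is why a single case tells us little.

```python
import random
import statistics

random.seed(1)

TRUE_MEAN = 100  # assumed true average monthly revenue (arbitrary units)

def sample_mean(n: int) -> float:
    """Average of n noisy draws around the true mean."""
    return statistics.mean(random.gauss(TRUE_MEAN, 40) for _ in range(n))

# How much do the averages wobble across 1,000 repeated studies?
small = [sample_mean(3) for _ in range(1_000)]     # anecdote-sized samples
large = [sample_mean(300) for _ in range(1_000)]   # systematic data collection

print(f"spread of averages with n=3:   {statistics.stdev(small):5.1f}")
print(f"spread of averages with n=300: {statistics.stdev(large):5.1f}")
```

With three observations, an "average" can easily land far from the truth; with three hundred, it is pinned down an order of magnitude more tightly. An anecdote is, in effect, a sample of size one.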

Conclusion

To make informed business decisions, it is critical to base insights on systematic data analysis rather than anecdotal evidence, as anecdotes are too narrow, subjective and unreliable to guide complex business strategies.

Exercise 2.2 Systematic analysis as an alternative to anecdotal analysis

  • What defines a systematic analysis?
  • When can we say that we have ‘found evidence’?
  • When can we claim to have identified a causal effect?
  • When can we trust the size of an effect that we have measured?
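As a hedged sketch of one possible answer to the last two questions: in a randomized experiment, random assignment makes the treated and control groups comparable, so the difference in average outcomes estimates the causal effect, and its standard error tells us how much to trust the measured size. All numbers below are invented for illustration.

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = 5.0  # the causal effect we pretend not to know

treated, control = [], []
for _ in range(2_000):
    baseline = random.gauss(50, 10)   # outcome a person would have without treatment
    if random.random() < 0.5:         # random assignment: a coin flip, nothing else
        treated.append(baseline + TRUE_EFFECT)
    else:
        control.append(baseline)

# Randomization balances everything else across the groups, so the
# difference in averages is an unbiased estimate of the causal effect.
estimate = statistics.mean(treated) - statistics.mean(control)
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
print(f"approx. 95% interval: [{estimate - 1.96 * se:.2f}, {estimate + 1.96 * se:.2f}]")
```

Contrast this with an anecdote: a single story offers neither a comparison group nor a standard error, so it can identify neither the direction nor the size of an effect.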