Quote of the Day
Everything you can imagine is real.
- Pablo Picasso

Why smart (big) companies wind up taking dumb (instead of smart) risks

Over the last few years I’ve become increasingly interested in trying to understand why large companies – companies full of smart, ambitious people and commanding significant resources (financial, intellectual and human) – seem universally to find it so difficult, verging on impossible, to take certain kinds of what I would consider ‘smart’ risks, while simultaneously having a seemingly unlimited appetite for what I would consider ‘dumb’ risks.

Clearly, I should expand on this notion of ‘smart’ vs. ‘dumb’ risks, lest you too quickly accuse me of hindsight bias and ex-post triumphalism. So, in my humble opinion:

Smart risks are risks where the impact of failure is limited and known upfront. Typically this would include the potential for:

  1. financial loss (outright and/or opportunity cost as relevant),
  2. reputational damage (in the sense that a failed outcome could result in any stakeholders being legitimately offended or repulsed; not in the sense that we might look a bit silly for having done something that failed) and,
  3. wasted time (not the financial cost – that would be ‘included’ in financial loss above, but literally the time it takes to discover / acknowledge failure – something that fails quickly being better relative to something that will take longer (to know if it has failed); this is based on the assumption that even the best endowed company does not have unlimited human and intellectual resources and so can only pursue a finite number of opportunities or ideas)

Smart risks are also risks where the impact of success is unlimited and potentially paradigm-changing, and is in any event an order of magnitude bigger than the corresponding failure.

Dumb risks are risks where the impact of failure is unlimited and unappreciated:

  1. financial loss (outright and usually opportunity cost),
  2. reputational damage (in the sense that a failed outcome could result in any stakeholders being legitimately offended or repulsed; not in the sense that we might look a bit silly for having done something that failed) and,
  3. wasted time (not the financial cost – that would be ‘included’ in financial loss above, but literally the time it takes to discover / acknowledge failure – something that fails quickly being better relative to something that will take longer (to know if it has failed); this is based on the assumption that even the best endowed company does not have unlimited human and intellectual resources and so can only pursue a finite number of opportunities or ideas)

Dumb risks are also risks where the impact of success is limited and at best linear, and is in any event an order of magnitude smaller than the corresponding failure.

Ok, fair enough – but surely I’m not suggesting that companies, when faced with the choice between these two kinds of opportunities, systematically pick the dumb ones? Well, yes I am, Virginia. Let’s frame the distinction between ‘smart’ and ‘dumb’ risks a different way, and perhaps you’ll see why…

Opportunity A has a 90% chance of succeeding. It is an opportunity similar to many the company has pursued historically; the company knows this kind of opportunity well. If it succeeds, our business will be 12% better for sure. There is an 8% chance that it doesn’t work well and our business will be the same to 5% worse. There is a 2% chance that a meteor strikes out of nowhere and it is a spectacular failure, making our business 80-100% worse. The probability-weighted outcome is approximately +9%, and 98% of the time the outcome will be acceptable.

Opportunity B has a 70% chance of failing. It is an opportunity very different to anything the company has considered before; the company feels this opportunity is in uncharted territory. If it fails, our business will be 1% less well off than it would otherwise have been. There is a 28% chance that it doesn’t work as well as hoped but does ok, and as a bonus previously uncharted territory is now mapped; the business is 7% better off. There is a 2% chance that this opportunity strikes gold – is paradigm-shifting – and the company is 3x better off than before. The probability-weighted outcome is approximately +7%, but 98% of the time the outcome will be perceived as a failure.
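The probability-weighted outcomes above can be checked with a quick back-of-the-envelope calculation. This is just a sketch of the arithmetic: where the text gives a range I’ve assumed a midpoint payoff (e.g. “the same to 5% worse” becomes −2.5%), and I’ve read “3x better off” as a +300% payoff – those are my assumptions, not figures stated in the text.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff in %) pairs summing to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in outcomes)

# Opportunity A: familiar territory, 90% success, small tail risk of blow-up.
opp_a = [(0.90, 12.0), (0.08, -2.5), (0.02, -90.0)]

# Opportunity B: novel, 70% cheap 'failure', modest middle case, rare huge win.
opp_b = [(0.70, -1.0), (0.28, 7.0), (0.02, 300.0)]

print(f"Opportunity A: {expected_value(opp_a):+.1f}%")  # +8.8%, i.e. ~+9%
print(f"Opportunity B: {expected_value(opp_b):+.1f}%")  # +7.3%, i.e. ~+7%
```

Note how close the two expected values are; the decisive difference is in the tails – A’s rare outcome destroys the business, B’s rare outcome triples it.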

If you ask me, ‘Opportunity B’ is the smart risk. Why? Because ‘Opportunity A’ has, as a possible outcome, blowing up your company. Bad. And by the way, who is to say that 2% is the true probability? Taleb articulated this line of thinking much more robustly and eloquently in The Black Swan:

…some business bets in which one wins big but infrequently, yet loses small but frequently, are worth making if others are suckers for them and if you have the personal and intellectual stamina. But you need such stamina. You also need to deal with people in your entourage heaping all manner of insult on you, much of it blatant. People often accept that a financial strategy with a small chance of success is not necessarily a bad one as long as the success is large enough to justify it. For a spate of psychological reasons, however, people have difficulty carrying out such a strategy, simply because it requires a combination of belief, a capacity for delayed gratification, and the willingness to be spat upon by clients without blinking.

People like to take linear (as opposed to non-linear) risks. They see safety in risks that are likely to succeed the vast majority of the time (even when failure, although rare, would be catastrophic). They don’t want to risk ridicule by taking risks that are different and seem likely to fail (even when failure is very inexpensive, and success would bring enormous rewards). Finally, people create institutions and models and processes that reinforce and validate how they would already like to behave.

The answer (to the headline question) is complex and somewhat varied, but one element is probably universal and arises out of the behavioral psychology of large, centrally-managed groups; John Kay frames this particular element well in his column this week:

For a time, I ran a company that sold models to large corporations. Although we urged clients to use these models in their decision-making, we did not actually do so ourselves. When I posed the question why, I realised that our analysis served the same function for our clients as Darwin’s list of pros and cons. People did not use our models to help make decisions, but to justify decisions they had previously taken. The results might be used internally to seek approval for an investment or an acquisition, or externally to persuade investors or regulators to give support. The board, or the main shareholders, would insist on the appearance of the formal process we were hired to provide.
