I must admit “Fooled by Randomness” was hard-to-digest food for my brain. I had the same feeling while reading Thinking, Fast and Slow by Daniel Kahneman. What do these books have in common? They both deal with human behavior, probability, stochastics, and heuristics: concepts which humans don’t necessarily have an innate ability to understand and deal with.

Taleb used to be a trader in the financial markets before he moved more into mathematics and the science of economics. He spent some time analyzing traders’ performance and to what degree randomness played a role in whether their chosen investment strategies turned out to be successful or not. When someone is successful in their field, we tend to attribute certain skills and professionalism to that person, when in fact the success story was often just luck.

Skills or just luck?

Just to give an example, let’s take 10,000 managers and have them play a game: every year, each manager tosses a fair coin. Heads, they win 10k EUR; tails, they lose and are out of the game. We repeat this for five years.

Let’s have a look at how this could play out:
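Here is a minimal Python sketch of the game (the fair 50/50 coin toss and the five-year horizon are the rules described above; the seed is an arbitrary choice of mine):

```python
import random

N_MANAGERS = 10_000
N_YEARS = 5

random.seed(42)  # arbitrary seed for reproducibility

survivors = N_MANAGERS
for year in range(N_YEARS):
    # every surviving manager tosses a fair coin; losers are out of the game
    survivors = sum(1 for _ in range(survivors) if random.random() < 0.5)

print(f"Survivors after {N_YEARS} years: {survivors}")
# Expected: 10_000 * 0.5**5 = 312.5, so roughly 313 purely by chance
```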

Out of the initial 10,000 managers, only about 313 survive and win 10k EUR every year. Instead of saying this group survived purely out of luck, we tend to attribute “high-level skills of fund managing” to them.

Black swans

When things go well, people tend to say their decisions were the right ones. Especially when dealing with predictions about the future (whether certain events will take place, how economic metrics will evolve), investors tend to rely on empirical evidence, assuming that past events are a relevant sample of what the future will look like. This empirical method is called induction and relates to the observations we make, from which we then conclude things. This is also known as the black swan problem; John Stuart Mill put it this way:

No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion

What do we learn from black swans? Always consider that the choices and assumptions you make today may be proved wrong some other day. Don’t (completely) ignore the “black swan” event, as we can never be sure any theory is right. Things will evolve and thus change.

How can one survive in today’s “attention-grabbing” environment? How can we deal with the influx of information and constantly reassess our predictions and assumptions? In our interaction with the outside world, our brain has developed strategies for making quick decisions when needed. Daniel Kahneman distinguishes between System 1 and System 2 in Thinking, Fast and Slow. We have developed heuristics, a sort of mental shortcut, which lead to biases: confirmation bias, attribution bias, hindsight bias, and many more. All of these influence our perception and the way we think, as do the instincts we develop when we rely on these biases. We may always find patterns and explanations for past events, but these are mostly useless for predicting future events.

Sometimes there are events which you would consider too rare to actually take place. These rare events surprise us because they are unexpected and, above all, because we cannot have all the information at hand before making a decision. One of the main ideas of stochastics is that the more information we have, the better we can predict a certain result/event. See also the law of large numbers.
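A tiny Python illustration of that idea (sample sizes and seed chosen arbitrarily): the observed frequency of heads in a fair coin toss drifts toward the true probability of 0.5 as the number of observations grows.

```python
import random

random.seed(7)  # arbitrary seed
for n in (10, 1_000, 100_000):
    # fraction of heads in n fair coin tosses (True counts as 1)
    freq = sum(random.random() < 0.5 for _ in range(n)) / n
    print(f"n={n:>7}: observed frequency = {freq:.4f}")  # approaches 0.5
```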

Asymmetric outcomes

Again, we have the following example: imagine you play a game where you have a 999/1000 chance of winning 1 EUR and a 1/1000 chance of losing 10k EUR. Before playing the game, it’s our human behaviour to base decisions on things “that are likely to happen”. But in this case that would be a mistake, as the expected outcome is negative (probability is NOT expectation). Why? Let’s do the math:

We sum up the expected values of both outcomes: 999/1000 × 1 EUR + 1/1000 × (-10,000 EUR) = 0.999 EUR - 10 EUR = -9.001 EUR. So the expectation of this game is that you lose: about 9 EUR per round, which over many rounds is a lot of money.
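As a quick sanity check, here is a small Python sketch that computes this expectation and confirms it by simulation (the number of rounds and the seed are arbitrary choices of mine):

```python
import random

# Expected value per round: 0.999 * 1 + 0.001 * (-10_000) = -9.001 EUR
p_win, win, loss = 999 / 1000, 1.0, -10_000.0
print(f"Expectation: {p_win * win + (1 - p_win) * loss:.3f} EUR")

# Simulate many rounds; the average winnings should land near -9 EUR
random.seed(1)  # arbitrary seed for reproducibility
rounds = 1_000_000
total = sum(win if random.random() < p_win else loss for _ in range(rounds))
print(f"Average over {rounds:,} simulated rounds: {total / rounds:.3f} EUR")
```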

Probability blindness

Multiple doctors were asked to read the following test description and answer the question. As you can imagine, most of them failed to give the right answer. This example is taken from Randomness by Deborah Bennett, who refers to this problem as the “base-rate misconception”:

So 1 in 1000 people is affected by this disease. The test has a false positive rate of 5%. If someone is tested and the result is positive, how likely is it that this person is really infected? Most people would answer 95% (since the test has a false positive rate of 5%). But this is wrong, mainly because it involves a conditional probability.

Here is the explanation: take 1,000 people. On average, one of them is affected by the disease and (assuming the test catches every real case) tests positive. Of the remaining 999 healthy people, 5% will also test positive, which makes roughly 50 false positives.

Now, among the people with a positive result, how likely is it that they are actually affected by the disease? To calculate this, we need the following division:

number of affected persons / number of positive test results (incl. false positives)

In this case this is 1 / 50, which is 2%! So there is only a 2% chance of really being affected by the disease.
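For those who like to double-check such numbers, here is a minimal Bayes’ rule sketch in Python (the 100% detection rate for real cases is my assumption; the example only states the prevalence and the false positive rate):

```python
# Base-rate example: P(disease | positive test) via Bayes' rule.
prevalence = 1 / 1000          # 1 in 1000 people is affected
false_positive_rate = 0.05     # 5% of healthy people test positive
sensitivity = 1.0              # ASSUMPTION: the test detects every real case

# Total probability of a positive test (true positives + false positives)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate

# Conditional probability of actually having the disease given a positive test
p_disease_given_positive = prevalence * sensitivity / p_positive
print(f"P(disease | positive) = {p_disease_given_positive:.4f}")  # ~0.0196, about 2%
```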

Conclusion

Somewhere in the middle of the book I felt like I needed to give up reading because I was lost somewhere between the terms from different fields. But I made it through the entire book, and now I’m happy I could at least extract some main points that stuck in my mind. I also found the Monte Carlo simulations quite interesting. To summarize what I’ve already mentioned in this post: we are all, at some point, fooled by randomness, but we don’t pay much attention to it and often misinterpret outcomes as something deterministic or related to some kind of skill.