Woo-Kyoung Ahn: Thinking 101

How to reason and live better

Thinking problems happen because our minds are wired in very particular ways, and there are often good reasons for that wiring. The good news: there are actionable strategies we can adopt to reason better.

The allure of fluency

When asked whether they are above average, 60 to 70 percent of people say yes. The ‘not me’ bias is our belief that others fall prey to cognitive biases while we ourselves are immune.

Things that our mind can easily process elicit overconfidence. This fluency effect can sneak up on us in several ways:

  • Illusion of Skill Acquisition.
  • Illusion of Knowledge.
  • Illusion Arising from Something Irrelevant.

When we see final products that look fluent, masterful, or just perfectly normal, we make the mistake of believing the process that led to those results must have also been fluent, smooth, and easy.

Short presentations are actually harder to prepare than long ones, because you don’t have time to think about your next sentence or feel your way toward the perfect transition.

The fluency illusion isn’t limited to skills like dancing, singing, or giving talks. A second type appears in the realm of knowledge: we give more credence to new findings once we understand how those findings came about.

People are more willing to derive a cause from a correlation when they can picture the underlying mechanism. There is no problem with that unless the underlying mechanism is flawed.

Judgments can also be distorted by the perceived fluency of factors that are completely irrelevant to the judgments they cause us to make.

The fluency effect stems from a simple, straightforward rule used in what cognitive psychologists call “metacognition”: knowing whether you know something. Knowing what we know tells us what to avoid, what to search for, and what to dive into or skip. One of the most useful cues for metacognition is a feeling of familiarity, ease, or fluency. But familiarity is just a heuristic.

A well-known example is the Ponzo illusion, named after Mario Ponzo: two lines of identical length appear to be different sizes because of the converging lines drawn around them.

Spelling out one’s knowledge can reduce overconfidence, even when no one gives us feedback. This is one way to fight the fluency illusion.

The planning fallacy is another issue: we frequently underestimate the time and effort we need to complete a task. Reducing the size of tasks and breaking them apart can help. It’s relatively easy to think of obstacles directly relevant to the task at hand; what tends to be neglected are the obstacles that have nothing to do with the task. Unexpected events are known unknowns: we cannot foresee which one will strike, but we can count on some of them striking.

Optimism is like engine oil for the fluency effect.

Data-driven predictions are more accurate than predictions based on intuitions or wishful thinking.

Confirmation bias

We cannot test every hypothesis, but when so many possible rules can explain the available data, considering only the hypothesis that first comes to mind is unlikely to lead you to the correct one.

Peter C. Wason was a cognitive psychologist at University College London. He devised the famous 2-4-6 task in 1960, providing the first experimental demonstration of what he called confirmation bias, our tendency to confirm what we already believe.
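A minimal sketch in Python (my illustration, not the book’s) shows why confirming tests fail in this task. Wason’s hidden rule was simply “any increasing sequence”, but a tester who guesses “even numbers increasing by two” and only proposes triples that fit that guess never finds this out:

def true_rule(triple):
    # Wason's hidden rule: any strictly increasing sequence.
    a, b, c = triple
    return a < b < c

def my_hypothesis(triple):
    # A typical first guess: even numbers increasing by two.
    a, b, c = triple
    return a % 2 == 0 and b == a + 2 and c == b + 2

# Confirming tests: every triple fits BOTH rules, so each "yes" teaches nothing.
for t in [(8, 10, 12), (20, 22, 24), (100, 102, 104)]:
    print(t, "fits the hidden rule:", true_rule(t))   # always True

# A disconfirming test: it violates the guess but still fits the hidden rule,
# revealing that the guess was too narrow.
probe = (1, 2, 3)
print(probe, "fits my guess:", my_hypothesis(probe))     # False
print(probe, "fits the hidden rule:", true_rule(probe))  # True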

Here’s another application of the 2-4-6 task that many readers have likely experienced personally: the door-close button in elevators. When you’re running late or simply impatient, you stab the button repeatedly until the door finally closes. But you never test what happens when you don’t press it, so you never learn whether the button does anything at all.

Confirmation bias may not sound detrimental to the person who commits it; indeed, those who commit it may never realize their conclusion was wrong. Yet it leaves them with an inaccurate view of themselves.

It may sound ironic, but confirmation bias is adaptive, helping us to survive, because it allows us to be “cognitive misers”. We need to conserve our brain power or cognitive energy for things that are more urgent for survival than being logical.

Herbert Simon, the 1978 Nobel laureate in economics, offered an explanation of this efficiency: we need to stop our searches when the results are satisfying enough. He called this “satisficing”, a word he created by combining “satisfying” and “sacrificing”.

Life is indeed full of possibilities, more than the number of atoms in the observable universe, and it’s up to you to discover them.

The challenge of causal attribution

In January 1919, the leaders of the victorious powers in the Great War met at the Paris Peace Conference. Woodrow Wilson didn’t want Germany to be punished too harshly, but he caught the flu and was in no position to stand up to the others. Can we say that if only Wilson had not caught the flu, the world would have been spared the Holocaust?

Some of the cues to causality that we commonly use:

  • Similarity. We tend to treat causes and effects as similar to each other.
  • Sufficiency and Necessity. We often think causes are sufficient and also necessary for an effect to occur.
  • Recency. When there is a sequence of causal events, we tend to assign more blame or credit to a more recent event.
  • Controllability. We are inclined to blame things that we can control rather than things that we cannot control.

Relying on similarity to make causal inferences can lead us astray because causes and effects are not always similar to each other. Because we rely on the similarity heuristic, we may be reluctant to accept even a genuine cause when it seems too dissimilar from its effect.

The problem is that when we settle on one cause because it seems sufficient to produce the outcome, we often discount other equally plausible causes. A well-known example of discounting is the relationship between intrinsic motivation and extrinsic reward: once an extrinsic reward seems sufficient to explain our behavior, we discount our own intrinsic motivation.

A condition that is necessary for an outcome is a strong candidate to be its cause; this is the “but for” rule, as in “but for the condition, the outcome would not have occurred”. Still, not all necessary conditions are causal.

We tend to pick unusual events as causes.

Another heuristic we use when picking a cause out of potential causal candidates is to blame actions more than inactions. Inaction is also invisible by definition, so we can easily neglect to consider how it could have caused specific events. Being oblivious to the costs of our inactions can cause problems that are irreversible.

When there is a sequence of events, we tend to credit the most recent one for the ultimate outcome.

One of the most important functions of causal reasoning is to control future events. We want to avoid mishaps and repeat good outcomes by identifying the reasons each happened.

Our propensity to assign blame when we believe there are controllable elements can result in radically different emotional reactions to the same outcome.

One way to approach extremely difficult or nearly unsolvable causal questions constructively is to distance yourself from the situation. How can we tell when a why question is answerable? Strictly speaking, no why question is: we can never find out the true causes of any outcome, because we cannot change just one thing in the past and assume the rest would stay the same. When we believe we’ve found the right answer to a why question, all we’ve really done, in a sense, is found the best answer to what we’d have to do if we want the same outcome to recur in a similar situation, and what we should avoid doing if we want a different result.

The perils of examples

There are at least three key concepts that all of us need to better understand if we are to avoid making blatantly irrational judgments in everyday life. They are:

  • The law of large numbers.
  • Regression toward the mean.
  • Bayes’ theorem.

The law of large numbers is one of the most important principles to follow when we need to make inferences from limited observations. It simply means that more data is better. Although we intuitively understand the law of large numbers, we frequently ignore it.
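A quick simulation (my illustration, not from the book) makes the point concrete: small samples of fair coin flips swing far from the true 50 percent rate, while large samples settle near it.

import random

random.seed(42)

def heads_rate(n):
    # Proportion of heads in n fair coin flips.
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (10, 100, 10_000):
    print(f"n = {n:>6}: observed rate of heads = {heads_rate(n):.3f}")
# A 10-flip sample can easily land on 0.300 or 0.700; the 10,000-flip
# sample reliably sits within about a point of 0.500.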

Research shows that people typically react more strongly to specific people who have problems than to statistics about people with problems.

Generalizing based on a small number of samples relative to the overall population is problematic.

Regression toward the mean happens not just in cases where test-takers are guessing. Whether people are taking tests or performing in sports, music, or any other activity, there are always random factors that affect their performance, producing results that are better or worse than what they are truly capable of.

If we ignore regression toward the mean, we can make the kinds of inaccurate causal attributions that are known as the regression fallacy.
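A short simulation (with illustrative numbers of my choosing) shows how the regression fallacy arises with no real cause at work: observed performance is stable skill plus random luck, so the top scorers on one test fall back toward the average on the next, even though nothing about them changed.

import random

random.seed(0)
N = 10_000
skill = [random.gauss(100, 10) for _ in range(N)]

def take_test(skills):
    # Observed score = true skill + random day-to-day luck.
    return [s + random.gauss(0, 10) for s in skills]

test1, test2 = take_test(skill), take_test(skill)

# Select the 100 best performers on the first test...
top = sorted(range(N), key=lambda i: test1[i], reverse=True)[:100]
avg = lambda xs: sum(xs) / len(xs)

print("top group, test 1 average:", round(avg([test1[i] for i in top]), 1))
print("top group, test 2 average:", round(avg([test2[i] for i in top]), 1))
# The second average is markedly lower: the group's luck regressed toward
# the mean, not its skill. Blaming complacency for the drop would be the
# regression fallacy.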

Making hiring decisions based primarily on interviews is a violation of – now we can use a technical term – the law of large numbers.

Bayes’ theorem concerns conditional probability: the probability that one thing, say A, is true, given (conditional on the fact) that some other thing, say B, is true. The probability of A given B is not the same as the probability of B given A.

Just because the probability of a positive result (B) given breast cancer (A) is high, it does not mean that the probability of breast cancer (A) given a positive result (B) is equally high.

To compute the probability of A given B, written P(A|B), from the probability of B given A, written P(B|A), we need to use Bayes’ theorem.

Bayes’ theorem is often used to update an existing theory or belief, A, given new data, B. Bayes’ theorem specifies a rational way to update the belief.

P(A|B) = P(B|A) × P(A) / [P(B|A) × P(A) + P(B|not-A) × P(not-A)]

P(A) and P(B) are the base rates of A and B; not-A denotes the absence of A.
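Here is a worked example with illustrative numbers of my own (not the book’s): suppose 1 percent of women screened have breast cancer, so P(A) = 0.01; the test detects cancer 90 percent of the time, P(B|A) = 0.90; and it falsely flags 9 percent of healthy women, P(B|not-A) = 0.09. A small Python calculation applies the theorem:

p_a = 0.01              # base rate of cancer, P(A): illustrative assumption
p_b_given_a = 0.90      # test sensitivity, P(B|A): illustrative assumption
p_b_given_not_a = 0.09  # false-positive rate, P(B|not-A): illustrative assumption

numerator = p_b_given_a * p_a
denominator = numerator + p_b_given_not_a * (1 - p_a)
p_a_given_b = numerator / denominator

print(f"P(cancer | positive result) = {p_a_given_b:.3f}")  # about 0.092
# Even with a 90-percent-sensitive test, a positive result implies only
# about a 9 percent chance of cancer, because the disease is rare.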

The difficulty does not lie in applying a known solution to a new problem, but in spontaneously retrieving that solution from memory.

If you are telling a story to make a point, your point will have a greater likelihood of being remembered if you embed it in multiple stories and tell all of them.

Negativity bias

You don’t have to be a super-maximizer to be overly affected by negative information. People weigh negative information more heavily than positive information. Negative events affect our lives more than positive ones. The negativity bias can affect us so severely that it can cause us to make decisions that are blatantly irrational.

Loss aversion is one example of negativity bias.

Given that the negativity bias affects so many different kinds of judgments, it should not surprise anyone that it also affects us when we make decisions involving money.

Tversky and Kahneman presented the insight that we treat the same monetary values differently depending on whether they are gains or losses, which leads to what is known as loss aversion. Be aware that this is not merely a preference for gains over losses: it means a loss hurts more than an equivalent gain pleases. Loss aversion is also not the same as risk aversion.
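To make the asymmetry concrete, here is a sketch in Python of the value function Tversky and Kahneman proposed, using the parameters they estimated in 1992 (alpha of about 0.88 and a loss-aversion coefficient lambda of about 2.25); the code itself is my illustration, not the book’s:

ALPHA = 0.88   # diminishing sensitivity (Tversky & Kahneman's 1992 estimate)
LAMBDA = 2.25  # loss-aversion coefficient: losses loom about 2.25x larger

def subjective_value(x):
    # Prospect-theory value of gaining (x > 0) or losing (x < 0) an amount.
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

print("felt value of winning $100:", round(subjective_value(100), 1))   # 57.5
print("felt value of losing  $100:", round(subjective_value(-100), 1))  # -129.5
# The loss outweighs the equal gain, which is why a 50/50 bet to win or
# lose $100 feels like a bad deal even though its expected value is zero.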

Loss aversion can also help explain why buyers and sellers seldom agree on the value of an item when negotiating a price. It is a familiar scenario in any sale of used goods: the owner thinks the item is worth more than the buyer does. In behavioral economics, this phenomenon is called the endowment effect.

Even though negativity biases served a purpose for humans, and may still serve one in some situations, they can be harmful when they become extreme.

There are two possible strategies for counteracting the negativity bias, one for loss aversion and one for the endowment effect. The most obvious cost of the negativity bias is that it can lead us to make the wrong choices. Against loss aversion, we can use the framing effect and reframe the question we ask ourselves as a positive one. The endowment effect can be fought by imagining we have given everything away and are now taking back only what we truly need.

Biased interpretation

Causal imprinting is our resistance to revising a causal belief even after we learn better. Once someone has been imprinted with the belief that one thing causes another, they keep interpreting new patterns that way, even after learning about other potential explanations.

This is another example of confirmation bias, our tendency to stick with our preexisting beliefs. The confirmation bias happens because we interpret new data to fit with what we think is the truth. Having a biased interpretation of what we already believe is extremely common.

Smarter people can be even more prone to biased interpretations because they know more ways to explain away the facts that contradict their beliefs. Coming up with excuses to dismiss evidence requires a good amount of analytic thinking skills and background knowledge, like how to collect and analyze data, or why the law of large numbers is important.

Top-down processing is also responsible for biased interpretations, which in turn cause confirmation bias and prejudice. We cannot simply shut off the process that gets us into trouble; we need it. Thinking biases are much harder to overcome when we believe that we don’t commit them and that they only plague dense people who aren’t like us at all.

Sometimes the only way to counteract one system is with another – one that is explicitly, equitably, and intentionally designed to protect the greater good.

The dangers of perspective-taking

We communicate with other people all the time. Despite doing this all our lives, we don’t realize how difficult it is.

“Theory of mind” refers to reasoning about what’s in other people’s minds.

That is the curse of knowledge: once you know something, you have trouble fully taking the perspective of someone who doesn’t know it, even if you are an adult. The curse of knowledge makes us overconfident about the transparency of the message we are conveying.

Smart people who know a lot are not necessarily good teachers or coaches, partly because of the curse of knowledge.

Often our communication failures occur simply because we neglect to consider the other person’s perspective.

Cognitive theory of mind gives us the insight that other people can understand the world differently than we do. Emotional theory of mind is the comprehension that people can have different feelings, together with knowledge of what they are likely to feel in which situations.

Emotional theory of mind – understanding others’ feelings and having compassion toward them – can also be improved by carefully thinking through other people’s circumstances.

The only sure way to know what others know, believe, feel, or think is to ask them.

Because we project our knowledge and feelings onto others, we are confident and believe we know what they think. As a result, we don’t bother, or we forget to verify whether our assumptions are correct.

The trouble with delayed gratification

We discount the utility of future rewards in an irrational manner, more steeply than is justifiable. One example is climate change.

Delay discounting applies not just to future rewards but also to future pains, which explains why we procrastinate.
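One common way to model delay discounting in behavioral economics is the hyperbolic formula V = A / (1 + kD), where A is the reward, D the delay, and k a discount rate. The sketch below is my illustration with an assumed k, not a figure from the book; note how steeply value drops over the first few days compared with later ones, which is exactly the pattern behind procrastination.

def discounted_value(amount, delay_days, k=0.05):
    # Hyperbolic discounting: V = A / (1 + k * D).
    # k = 0.05 is an assumed illustrative rate, not an empirical estimate.
    return amount / (1 + k * delay_days)

for d in (0, 1, 7, 30, 365):
    print(f"$100 in {d:>3} days feels like ${discounted_value(100, d):6.2f}")
# The drop from day 0 to day 1 ($100.00 to $95.24) is far steeper than the
# drop from day 30 to day 31, which is why "I'll do it tomorrow" always
# feels cheap today.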

Parkinson’s law: work expands so as to fill the time available for its completion.

Sometimes we cannot delay gratification because we lack impulse control.

Uncertainty about significant future outcomes can immobilize our decision-making.

There is a famous phenomenon in behavioral economics called the Allais paradox, which occurs because of the certainty bias. It is named after Maurice Allais, the 1988 Nobel Prize winner in economics.

The same 1 percent difference feels vastly different when we are comparing 0 percent to 1 percent and 10 percent to 11 percent.
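The classic Allais gambles make this vivid. In the standard formulation, gamble A pays $1 million for certain, while B pays $1 million with 89 percent probability, $5 million with 10 percent, and nothing with 1 percent; gamble C pays $1 million with 11 percent probability, and D pays $5 million with 10 percent. Most people choose A and D. A quick expected-value check (my illustration in Python) shows why choosing A is the anomaly:

def expected_value(gamble):
    # Sum of probability-weighted payoffs.
    return sum(p * x for p, x in gamble)

A = [(1.00, 1_000_000)]
B = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
C = [(0.11, 1_000_000), (0.89, 0)]
D = [(0.10, 5_000_000), (0.90, 0)]

for name, g in [("A", A), ("B", B), ("C", C), ("D", D)]:
    print(name, f"expected value = ${expected_value(g):,.0f}")
# B's expected value ($1,390,000) beats A's ($1,000,000), yet most people
# still take the certain A: moving from 99 percent to 100 percent certainty
# outweighs the arithmetic. That is the certainty effect.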

Whenever we are faced with a choice that involves delayed gratification, there is the possibility that our preference for certainty (getting it now) over uncertainty (getting it in the future) will be a factor.

The future simply feels distant. Temporal discounting is why we overcommit ourselves. If we think about the future in as much detail as possible, we can improve our thinking.

A strong desire for self-control showed some benefit for simpler tasks, but the opposite happened with difficult tasks: those with a high desire for self-control performed worse than those with a low desire. It is no small task to know when to persist and when to quit. If a goal is worth pursuing, even the pain that accompanies our practice feels good. But if you feel like you’re hurting yourself to achieve rewards, and the only thing you enjoy is the final goal rather than the process, it’s probably time to rethink.

Once we realize there are always multiple possible causes for an event, we can assess credit as well as blame more fairly.
