How base rate bias works

Cognitive biases are a very popular topic in psychology. I find them especially interesting because, in many cases, knowing about them and recognizing when we fall for them can help us think more rationally and make better decisions.

Today I wanted to write about a type of bias that often shows up in debates and opinionated conversations: the base rate bias, also known as the base rate fallacy.

The base rate bias occurs when base rate information is ignored and specific information (information relating to a particular case) is favored when making a judgment or reaching a conclusion. Base rate information refers to the base probability of an event, also known as the prior probability.

I have already talked about heuristics and cognitive biases and about how we still lack a coherent classification for them. Daniel Kahneman [1] considers the base rate bias a specific form of extension neglect, which occurs when the size of a set that is relevant to its valuation is disregarded.

The best way to understand this concept is to go over a couple of examples. The first is a classic that has been replicated many times with similar results: a group of participants is given the description of a fictional university student chosen at random. Consider Tom’s description:

“Tom is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical. He has a strong drive for competence. He seems to have little feel and little sympathy for other people and does not enjoy interacting with others.” [2]

When asked which program Tom is most likely to be attending, most people reply ‘computer science’ or some other engineering program when, in fact, business and social science students are far more numerous.

If Tom was chosen at random, he is more likely to be attending one of the most popular programs. Most people ignore this base rate information and reach a conclusion based on the impression that personality types like Tom’s seem more common among computer science students.
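To make the reasoning concrete, here is a minimal sketch in Python of how the base rate and the "fit" of the description combine under Bayes’ rule. All of the numbers (enrollment shares and how well the description matches each program) are hypothetical and only for illustration:

```python
# Hypothetical numbers, purely for illustration: enrollment shares (base
# rates) for three programs and how well Tom's description "fits" a
# typical student in each one (likelihood of such a description).
base_rates = {"computer science": 0.03, "business": 0.25, "social science": 0.20}
fit = {"computer science": 0.80, "business": 0.10, "social science": 0.15}

# Bayes' rule: p(program | description) is proportional to
# p(program) * p(description | program).
unnormalized = {p: base_rates[p] * fit[p] for p in base_rates}
total = sum(unnormalized.values())
posterior = {p: v / total for p, v in unnormalized.items()}

for program, prob in sorted(posterior.items(), key=lambda item: -item[1]):
    print(f"{program}: {prob:.2f}")
```

Even though the description fits a computer science student far better in this toy setup, the low enrollment base rate keeps the posterior probability for computer science at or below that of the more popular programs.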

Base rate bias and diagnosis

Another example that is often discussed is the base rate fallacy in diagnosis, or more generally in assessing probabilities. Let’s assume that a certain disease occurs in 1 out of 1,000 people, which means that a person has a 0.1% chance of having it. There is a medical test that correctly identifies 99% of people who have the disease, so only 1% of true cases go undetected (false negatives). Similarly, the test clears 99% of people who do not have the disease and mistakenly flags the remaining 1% of healthy individuals (false positives).

What is the probability of having the disease if the result of the test comes back positive? Not only patients but also most doctors struggle to give the correct answer.

Using Bayes’ theorem we can calculate the conditional probability:

p(disease | positive result) = p(disease) * p(positive result | disease) / p(positive result)

p(disease | positive result) = (0.001 * 0.99) / (0.001 * 0.99 + (1 - 0.001) * 0.01) ≈ 0.090 … 9%
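As a quick sanity check, here is a small Python sketch of the same calculation, using the prevalence, sensitivity, and false positive rate from the example above:

```python
# Bayes' rule for the disease example: prevalence 0.1%,
# sensitivity 99%, false positive rate 1%.
prevalence = 0.001          # p(disease)
sensitivity = 0.99          # p(positive result | disease)
false_positive_rate = 0.01  # p(positive result | no disease)

# Total probability of a positive result (true positives + false positives).
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate

# Posterior probability of having the disease given a positive test.
p_disease_given_positive = prevalence * sensitivity / p_positive

print(f"p(disease | positive result) = {p_disease_given_positive:.3f}")  # ~0.090, i.e. about 9%
```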

This means that the probability of having the disease is still only about 9%, even with a positive result on the test. The reason is that the disease is very rare: only 0.1% of the population has it.

Without making these calculations, and focusing only on the test’s 99% rate of correct diagnoses, it is easy to think that the probability of being sick is much higher. This is why it is important to keep the base rate information in mind.



Sources:
[2] Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press.
