
Science and the hindsight bias


ecoli


*note - I'm plagiarizing some of this article for the sake of continuity, and so that nobody peeks ahead. The source is posted at the end.

 

A social-research study of WWII soldiers reported the following findings (the items in parentheses are the explanations provided):

 

While you read them, ask yourself if you think the result was expected. Then ask yourself if that's the outcome you would have predicted from such a study, before knowing the result.

 

1. Better educated soldiers suffered more adjustment problems than less educated soldiers. (Intellectuals were less prepared for battle stresses than street-smart people.)

 

2. Southern soldiers coped better with the hot South Sea Island climate than Northern soldiers. (Southerners are more accustomed to hot weather.)

3. White privates were more eager to be promoted to noncommissioned officers than Black privates. (Years of oppression take a toll on achievement motivation.)

 

4. Southern Blacks preferred Southern to Northern White officers (because Southern officers were more experienced and skilled in interacting with Blacks).

 

5. As long as the fighting continued, soldiers were more eager to return home than after the war ended. (During the fighting, soldiers knew they were in mortal danger.)

 

 

One of the most common pseudo-criticisms I hear about scientific research results (especially in the social and softer sciences) is that the results are not all that interesting because they are common sense and expected (and that it is therefore justifiable to label the science as useless). How many times have you heard: "Well, of course cigarettes are bad for you, I don't need science to tell me that!"

 

The above example is something along these lines. The conclusions of this simple social experiment seem obvious and unsurprising. Don't you think the conclusions fit neatly into a reasonable person's model of the world, and that the outcomes are exactly what anyone would have predicted?

 

Think again!

 

I've actually tricked you here. In the actual experiment, the outcomes were exactly the opposite of the ones detailed above, and the explanations are the rationales of someone retrofitting a model onto (in this case) fake conclusions.

 

Hindsight bias is what causes us to think, "Ah, I knew it all along!" It's the reason we're unimpressed by scientific results, political information, the behavior of financial markets, and all sorts of other things. Reality never surprises us because, unlike in science, we don't test our mental models of the world. Instead, we tend to observe outcomes and use our existing models to explain those outcomes after the fact.

 

Does the woman float on water? She must be a witch. If she sinks? She must be a witch who's hiding her witch-ness. Either way, my hypothesis that this woman is a witch must be correct... I knew it all along that she would sink!

 

I ask you to check yourself for hindsight bias. What kind of evidence is called for to make genuine observations and test hypotheses? Are you retrofitting your model onto the evidence, or did you make an accurate prediction about the outcome beforehand?

 

Make a prediction about what evidence you would expect to see as a consequence of your model and hypothesis. Make a prediction about what counterevidence you would expect to observe that would cause you to revise or reject your model. And if you do observe counterevidence, are you actually willing to revise your model?

 

Hindsight bias fits into an individual's neat narrative about the world because, no matter what you observe, it's what you'd expect. There's no reason to change your narrative as long as it's possible to rationalize.

 

As scientists, it's important to avoid this in science and in life.

 

For the last month or so, I've been a proponent of using an informal Bayesian test to calibrate yourself against hindsight bias. Make a prediction (or retrodiction), assign a probability estimate of it coming true, and note when you expect to see the evidence. If your predictions come true at the same rate as your probability estimates, congratulations, you are well calibrated! (For example, if I make 10 predictions at 90% confidence, I expect 9 of them to come true.)
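
To make that concrete, here's a minimal sketch of such a calibration check in Python (the prediction records below are invented, purely for illustration):

# Group predictions by stated confidence and compare the claimed
# probability with the observed hit rate.
from collections import defaultdict

# Each entry: (stated confidence, whether the prediction came true).
# These records are made up for illustration.
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, False),
]

buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> observed {hit_rate:.0%} "
          f"over {len(outcomes)} predictions")

A well-calibrated predictor sees the observed rate track the stated one; a large gap, especially at high confidence, points to a flaw in the underlying model.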

 

For things I am very confident about but get wrong, that's strong evidence of a flaw in my model. And if I'm honest and open about my predictions before observing the evidence, it's a lot more difficult for hindsight bias to kick in and rationalize away the fact that my prediction was wrong.

 

source: http://lesswrong.com/lw/im/hindsight_devalues_science/

 

I'd appreciate thoughts and amendments to these ideas!


Well, I got 1 wrong, 2 and 3 I didn't consider as I'm not a Yank, and I thought 4 was garbage. The men were more interested in getting home after the war was over because the job was done.

 

One of the most common pseudo-criticisms I hear about scientific research results (especially in the social and softer sciences) is that the results are not all that interesting because they are common sense and expected (and that it is therefore justifiable to label the science as useless).

 

Sometimes it is correct though. I remember research some years ago which concluded that happy people are less likely to commit suicide. Did we really need research to tell us that?

 

But I'm not the only one who keeps track...

Wired Science.

Obese people have trouble using seatbelts? Really? I wonder why....

Physically active parents tend to raise physically active children? Who'da thunk it?

Young Britons often have sex with strangers while on vacation? No, that can't be right.

 

Fox News.

High Heels Lead to Foot Pain. You're kidding....

Men Much More Interested Than Women in Casual Sex. I'm sure that was news to the ladies.

Kids' TV Is Full of Ads for High-Fat and High-Sugar Foods. Surely not, kids' TV is full of ads for fruit and healthy food, isn't it?

 

Times Online

Adolescents might get bullied because of the way they look. And if they do, it can undermine their self-esteem. That can't be right, kids are always nice to each other.

Employees who take long spells of sick leave have a higher risk of death than those who do not. You mean people who are often sick for extended periods will probably die earlier than people who are healthy?

 

But to take the cake:

But if all this scientific research falls flat, it cannot outdo Mark Fonstad and William Pugatch, of Texas State University, and Brandon Vogt, of Arizona State University, who demonstrated that Kansas is flatter than a pancake. If a mathematical value of one indicated perfect flatness, they calculated, “after many hours of programming work”, that Kansas’s flatness approximated to 0.9997. Pretty damn flat.

Like, who cares?

 

The Telegraph (UK)

Some people don't like learning online. But some do. What, people are different?

Those who use prescription medications without a prescription were more likely to have drug abuse problems. I would have thought that using prescription drugs without a prescription would have been, I don't know, part of the bloody definition of drug abuse.

 

Try Googling for "Obvious science", 32,600,000 results.

 

I do take your point about hindsight; it does confirm things. There is another side though: some things are obvious if you think about them. Normally, you just don't think about them.

 

Take obese people and seatbelts. For most people it just doesn't cross their mind, but if you asked the question "Do you think an obese person would have trouble doing up a seatbelt?", they would think for a few seconds and say "Yes".

 

Much that is declared "Science of the Bleeding Obvious" or "No Sh*t Sherlock Science" isn't so much "Well, everybody knows that" as "Well, 10 seconds of rational thought could have told you that".

 

Don't forget that some "Sciences" suffer from the same thinking as your witch example:

Psych: "You're suffering from blah, blah"

Patient: "No, I'm not."

Psych: "Ah, you're suffering from blah, blah and Denial."

 

The way "Repressed Memory Syndrome" was handled in Australia sort of soured me on Psychologists. The bad ones used it and the good ones said nothing. My opinion has not been improved by the push to have "Disagreeing with the Psychologist" classified as a clinical symptom of "Denial".

 

Denying you have denial is a symptom of denial? It's like something out of "Life of Brian", you can't win no matter what you say.:D

 

Cheers.


Hindsight bias fits into an individual's neat narrative about the world because, no matter what you observe, it's what you'd expect. There's no reason to change your narrative as long as it's possible to rationalize.

Well, duh. That's what I've thought all along. :D


@JohnB - You're demonstrating my point, though. It's easy to say in hindsight which results were obvious. In cases where you predict beforehand what the conclusion might be and you're correct, you deserve a pat on the back. Claiming that a result seems obvious after the fact shouldn't be impressive, even though it feels obvious. (How many results seem counterintuitive at first because you're using the wrong model?)

 

But people aren't honest about their priors, because of the hindsight bias. They pretend that their model was perfect in retrospect.

 

There are things that simply aren't worth spending enough time and money to know, but that's something different.


Well, duh. That's what I've thought all along.

 

What's your prediction about how often you fall into the hindsight bias/heuristic?


ecoli, sorry but I don't see it.

 

Any man who dates women knows that the first thing she takes off at home is her heels. (And she often asks for a foot rub.) He's heard her complain about sore feet on many occasions. Then a study comes along and tells the world "Wearing heels causes sore feet!" It's not hindsight bias; it's a study that simply confirms what 90%+ of men or women would have told the researchers if they had asked.

 

This is not to say that hindsight bias doesn't exist, but to lump everything into it is, I think, a problem.

 

To me, "Obvious" science falls into 3 areas.

 

1. The results are obvious to most people. These are going to be sociological studies (like the heels one) rather than "straight" science. They get their results by asking people questions. It's basically a poll: "Do your feet hurt more after wearing A, B, or C?"

 

Since it is the experience of every woman and every man who has dated a woman that heels cause sore feet, why should the results be a surprise?

 

2. The results are obvious if you think about it. Obese people and seatbelts. As I said earlier, most people just don't think about it. However, when asked they would reply "Yes". Again, not hindsight bias, simply an extension of a general rule. In this case the general form of the question is "Is it hard to fit a large object into a medium sized container?"

 

Since the answer to the general question is "Yes", then all the study is doing is looking at a specific case of the general rule.

 

3. Actual hindsight bias.

 

Because right now it looks like "hindsight bias" is the excuse being used to justify bleedingly obvious science. "No, it wasn't obvious Mr. Grant Commissioner, it really was a subject worthy of your money. It's just that people are suffering from hindsight bias." IOW, it's not that my science was garbage; there is something wrong with the people who read the reports. Transference, anybody? :D

 

An easy way to work this out would be to get a list of sociological etc. studies that are about to be done, and get a set of predictions of what the studies will find.

 

Actually, I think it would be an interesting experiment in its own right. Get the subject question of the study boiled down to something simple, e.g. "The connection between the eating habits of parents and their children."

 

(I'd suggest at least 20 upcoming studies.) Then get a group (say 30 people) not connected to the research to write a quick paragraph about what they think the study will find.

 

An interesting discussion, either way.



A thought that has recently occurred. Is the concept of "hindsight bias" a rationalization for some researchers to avoid facing the fact that their research is actually useless or obvious?

 

It allows them to classify their own work into one of two possible outcomes.

 

1. My research is new, useful and valuable and is accepted as such.

 

2. My research is new, useful and valuable but is not accepted as such because of hindsight bias.

 

Note that both outcomes reinforce the researcher's attitude towards his own work (who doesn't want to feel that their work is valuable?) while transferring any negative connotations onto the person commenting on the work.

 

Hindsight bias fits into an individual's neat narrative about the world because, no matter what you observe, it's what you'd expect. There's no reason to change your narrative as long as it's possible to rationalize.

 

Your statement can cut both ways.>:D


There is a difference between rational and empirical science, with the empirical preferred by most life sciences. One can often rationally infer the final results of studies (hindsight bias), but since some branches of science are not fully rational, the conclusions will be tested empirically. In my opinion, this procedure could be a long series of test balloons, leading to the day when there is a system-wide transition into the rational. After enough empirical tests of the logic are confirmed, the logic will be considered rational.

 

In defense of the empirical: not all rational argument is created equal, which is why the empirical is often the safer path when dealing with life and with humans. Reason is supposed to be cold, without emotion; Mr. Spock. Once you add emotion to reason, the premises can become fuzzy and the lines of reason curved, allowing rational special effects. One class of rational-emotional special effect has to do with the x,y axes (the cause-and-effect axes) of reason distorting from perpendicular because of emotion, allowing one to make what appear to be square boxes (but on curved axes). If we straighten the axes (with no emotion), the square looks distorted. The trick is to keep the emotions induced so the axes stay distorted.

 

For example, political rhetoric is there to induce emotion. "It would be nice if all people lived in peace." That makes you feel warm and fuzzy. Once that emotion is induced, one can reason using the emotional-rational special effects. "To assure peace we will plant a chip in everyone's brain to monitor behavior, then we will have peace." It may sound reasonable to anyone under the emotional ambiance.

 

This emotional logic will often need the unbiased nature of an empirical study to show that the conclusions are not real cause and effect. Some people will avoid the emotional buzz of the rhetoric so they can begin with un-curved axes for cause and effect. They may come to the cold conclusion that will then be called hindsight bias stemming from an empirical study.

 

Don't get me wrong, there is a place for emotional logic, such as in dating, where emotions help inflate cause and effect. It is good in marketing, using emotion to distort cause and effect and make that burger seem better. But this should not be part of science, which is often better off using the empirical when emotional logic is the only other choice. This is often the case when dealing with humans and the unknowns of life.

Edited by pioneer

"Wearing heels causes sore feet!". It's not hindsight bias, it's a study that simply confirms what 90%+ of either men or women would have told the researchers if they had asked.

 

This is not to say that hindsight bias doesn't exist, but to lump everything into it is, I think, a problem.

 

I'm not saying that people can't have correct ideas in hindsight. The bias is that people tend to think of themselves as always having been correct, in hindsight, even when they weren't.

 

For example, suppose every day the woman takes off her shoes and thinks to herself, "Wearing heels causes sore feet." One day, a scientist does a study and finds that it's not the height of the heel that causes soreness, but the narrow shape that most heels tend to have. The woman thinks to herself, "Exactly what I thought... most heeled shoes have poorly designed shapes."

 

The bias is people's tendency to think that they were right all along, even if they weren't. We pat ourselves on the back when we do happen to be right, and pretend we were right all along even when we weren't.

 

 

To me, "Obvious" science falls into 3 areas.

 

1. The results are obvious to most people. These are going to be sociological studies (like the heels one) rather than "straight" science. They get their results by asking people questions. It's basically a poll: "Do your feet hurt more after wearing A, B, or C?"

 

Since it is the experience of every woman and every man who has dated a woman that heels cause sore feet, why should the results be a surprise?

Again, the hindsight bias does not say that the results of studies and surveys shouldn't seem intuitive. In fact, the hallmark of a good model, IMO, is that it provides clarity and common sense when you look at the evidence. A good model should look obvious in hindsight.

 

What the hindsight bias says is that people often fool themselves (through no fault of their own) into thinking that they could have predicted the observed evidence. But since they rarely state a prediction out loud before observing the evidence, how can they be so sure their prediction would have been correct?

 

2. The results are obvious if you think about it. Obese people and seatbelts. As I said earlier, most people just don't think about it. However, when asked they would reply "Yes". Again, not hindsight bias, simply an extension of a general rule. In this case the general form of the question is "Is it hard to fit a large object into a medium sized container?"

 

Since the answer to the general question is "Yes", then all the study is doing is looking at a specific case of the general rule.

 

It's hindsight bias if your prediction fails (assuming you bothered to make one) but you accept the actual evidence as fitting your model anyway.

 

In most simple cases, people do have fairly good models. An elephant most likely won't fit into a bread box. But if that elephant does happen to fit: "well of course it was going to fit, I knew all along that we were dealing with a bread box with extra dimensional space."

 

 

 

An easy way to work this out would be to get a list of sociological etc studies that are about to be done and get a set of predictions of what the studies will find.

This would be a great study for cognitive psychology.

 

A thought that has recently occurred. Is the concept of "hindsight bias" a rationalization for some researchers to avoid facing the fact that their research is actually useless or obvious?

 

It can be, I suppose, but it's not meant to be.

 

It allows them to classify their own work into one of two possible outcomes.

 

1. My research is new, useful and valuable and is accepted as such.

 

2. My research is new, useful and valuable but is not accepted as such because of hindsight bias.

 

Note that both outcomes reinforce the researcher's attitude towards his own work (who doesn't want to feel that their work is valuable?) while transferring any negative connotations onto the person commenting on the work.

 

The second point can fail, however, when people do happen to be well calibrated in that particular subject. This is why active prediction-making is so important. Clearly, despite the hindsight bias, there are probably a lot of studies whose cost we couldn't justify because the conclusions just wouldn't be worth knowing.

 

"But you'd only know that the conclusions weren't worth knowing only in hindsight"

 

Well that's not strictly true! In many cases, we do have good predictive power.

 

 

We can test statement 2, which is what I tried to do in the first post. If real results and fake results that feel equally intuitive produce the same level of confidence when presented to a test group, then you know the hindsight bias is playing a role.
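
As a rough sketch of how that comparison might be scored (the ratings below are invented; assume each subject rated how "expected" a result felt, on a 0-100 scale):

# Compare how intuitive subjects found real findings vs. inverted ones.
real_ratings = [72, 80, 65, 70, 78]   # invented scores for true results
fake_ratings = [74, 69, 77, 71, 75]   # invented scores for inverted results

mean_real = sum(real_ratings) / len(real_ratings)
mean_fake = sum(fake_ratings) / len(fake_ratings)
print(f"real: {mean_real:.1f}  fake: {mean_fake:.1f}  "
      f"gap: {abs(mean_real - mean_fake):.1f}")

# If the gap is near zero, subjects found the inverted results just as
# "obvious" as the true ones - the signature of hindsight bias.

(A real study would of course want many more subjects and a proper significance test.)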

 

This can be done on a meta-level as well, for the researchers invoking the hindsight bias to justify their 'obvious research.' Ask them how large a role they expect the hindsight bias to play when people hear their results. If hindsight playing or not playing a role doesn't surprise them either way, they're letting their own biases justify their research.

 

 

There are still mundane cases of people showing hindsight bias when observing a particular result, but that still might not justify the research effort spent on that result. People will always have incorrect models about things. The trick is getting them to stop seeming so damn certain all the time.

 

Calibration tests help, I think.



 

For example, political rhetoric is there to induce emotion. "It would be nice if all people lived in peace." That makes you feel warm and fuzzy. Once that emotion is induced, one can reason using the emotional-rational special effects. "To assure peace we will plant a chip in everyone's brain to monitor behavior, then we will have peace." It may sound reasonable to anyone under the emotional ambiance.

 

There are two parts to rationality: epistemic and instrumental. Epistemic rationality is having correct beliefs about the world (what we use science to discover), and instrumental rationality is knowing how to effectively achieve your goals.

 

"It would be nice if all people lived in peace" can be correct epistemologically, but planting a chip in the brain is irrational in terms of instrumental procedure in order to obtain this correct goal.

 

As it happens, Spock is a terrible rationalist who is not well calibrated to his universe. He thinks he knows the odds to impossible precision and, aboard the USS Enterprise, he is always wrong. Someone better calibrated would realize that different rules apply aboard the Enterprise, and adjust the odds accordingly.

 

Feeling emotions is logical, because emotions help us to be instrumentally rational. Knowing how to pursue the emotional preferences that make us happy is rational, as long as we do it effectively. These preferences don't have to be conscious or obvious (such as pursuing a girl), but that doesn't make them any less rational to pursue. I.e., feeling good about ourselves, emotionally, is rational; stumbling around blindly in the dark trying to find happiness is not.



 

This emotional logic will often need the unbiased nature of an empirical study to show that the conclusions are not real cause and effect. Some people will avoid the emotional buzz of the rhetoric so they can begin with un-curved axes for cause and effect. They may come to the cold conclusion that will then be called hindsight bias stemming from an empirical study.

 

Can you expand this point a bit? I don't think I follow you.

Edited by ecoli

Whilst I agree completely with the OP about hindsight bias and that it should be avoided in obtaining scientific evidence, I would like to open up the debate to the use of hypotheses in science. During my pre-doctoral and post-doctoral work, I don't recall making a single unbiased, non-value-laden hypothesis.

 

In short, all my guesses on what was going on depended on what other people were doing in related fields of study, not completely independently and objectively.

 

Now, if other researchers are also using value-laden hypotheses and then hindsight bias, then scientific research becomes less objective, more subjective, but... more human than we realise. And perhaps that is not a bad thing.


Of course there are plenty of obvious results from science. Oh, and a few counter-intuitive ones too. It's always been traditional for scientists to occasionally test the obvious things to make sure they aren't one of the counter-intuitive ones.


Of course there are plenty of obvious results from science. Oh, and a few counter-intuitive ones too. It's always been traditional for scientists to occasionally test the obvious things to make sure they aren't one of the counter-intuitive ones.

The point of the hindsight bias is that people will claim the results are intuitive no matter what they are.


Whilst I agree completely with the OP about hindsight bias and that it should be avoided in obtaining scientific evidence, I would like to open up the debate to the use of hypotheses in science. During my pre-doctoral and post-doctoral work, I don't recall making a single unbiased, non-value-laden hypothesis.

 

The point is not that unbiased observations or models are possible (they don't seem to be), but to make sure we correctly update our probability estimates about our hypotheses based on the evidence.

 

In Bayesian terms: the probability that the hypothesis is correct given the evidence, and the probability that the evidence would be predicted by the hypothesis.
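
Spelled out, that's just Bayes' theorem, with H the hypothesis and E the evidence:

P(H | E) = P(E | H) * P(H) / P(E)

Evidence the hypothesis strongly predicts (high P(E|H)) should raise our confidence in it, and evidence it predicts poorly should lower it.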

