Fraud, Selective Reporting


ltlredwagon


In its 2016 survey on problems with reproducibility, 70% of scientists surveyed by the journal Nature (https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970) cited "selective reporting" as a significant problem, while about 40% cited "fraud". My definition of "selective reporting" is as given here: "when results from scientific research are deliberately not fully or accurately reported in order to suppress negative or undesirable findings" (https://authorservices.taylorandfrancis.com/an-introduction-to-research-integrity-and-selective-reporting-bias/).

Under this definition, is it not fair to state that selective reporting is fraud? 

 

 


3 hours ago, ltlredwagon said:


It can be, if the full reporting would substantially change the outcome. There are also borderline cases which can fall on either side of the issue. For example, some data sets are selective by their nature. Examples include microscopic images, which are qualitative in nature (showing co-localization, as a random example). If you have taken hundreds or thousands of pictures, you are generally unable to provide all of them (and likely no reviewer would want to go through all of them). So as a consequence you provide images that are supposedly representative. But that criterion can be highly subjective and biased.
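
Just to illustrate how far "representative" can drift from the overall picture, here is a purely hypothetical sketch (Python with numpy; the number of images and the co-localization fractions are invented, not from any real study):

```python
# Hypothetical example: the true average co-localization across ~500 images
# is about 30%, but showing only the handful of most striking images gives
# a very different impression. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Per-image co-localization fraction, drawn from a Beta(3, 7) distribution
# whose mean is 0.30.
per_image = rng.beta(3, 7, size=500)

# The five "nicest-looking" images an author might pick as representative.
representative = np.sort(per_image)[-5:]

print(f"mean over all 500 images:      {per_image.mean():.2f}")
print(f"mean over the 5 shown images:  {representative.mean():.2f}")
```

The point is not that anyone computes it this way, only that "representative" is doing a lot of unexamined work in such figures.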

In other cases it is not uncommon that difficult experiments need to be repeated fairly often until the assay works (remember that most research is actually done by trainees). So here the issue becomes whether a particular data set that might just be botched should be added to the final analysis or not. There is a big push to have all data sets published, including supposedly bad data, which in principle makes sense. However, it has a lot of practical limitations.


Thank you, Charon. Then perhaps "selective reporting", as you describe it, is too broad a term, simply the wrong phrase. What, then, might be a better term for when researchers "suppress negative or undesirable findings"? Is there another term for this specifically, or, in clear cases, should we avoid any hint of sophistry or euphemism and simply call it "fraud" in its generally understood sense? (The American Heritage Dictionary on my phone says it is "a deception deliberately practiced in order to secure unfair or unlawful gain.")

But to your point, if I’m following you, there may be instances of, if you will, honest “selective reporting” done with the intention of not unnecessarily muddying an otherwise clear finding. I’m not familiar with scientists or ethicists using the phrase in this manner, but I’ve not read widely on this at all, so I really don’t know.


I wonder if part of this is how most funding works. To get your next lot of funding you need to show progress. That normally means you need to have shown positive results. 

If you have tried two different methods, and one showed positive results while the other showed negative, both are valid and useful for the community. But if you as a team only have the time or resources to write one paper, do you concentrate on the paper that will help you get funding or on the one that won't, perhaps with the intention of writing that one in the future?


Good point, Klaynos. Money: a reward and a temptation forever, it seems. John Ioannidis, in his now-famous article (infamous for some), and many others have cited this problem (https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124).

Two different "methods"? Then show both, even if only commenting briefly on the failed method: glue made this way was sticky enough only for joining paper and wood, but made that way, with the same ingredients, it was very sticky for paper, plastic, and metal.

But this does not go to the heart of the selective reporting issue, which in my view is essentially this: did you deceive others, did you lie, using the oldest and hardest-to-discover method around, one we all learned as children and long the stock-in-trade of journalists and politicians: hide information, because what is not there "doesn't exist"? I believe that would be fraud. In other words, "selective reporting", as I understand it, is simply fraud. Or my understanding is wrong.


15 hours ago, ltlredwagon said:


I think the broadest term would be bias of some sort; the intention might not be willful misrepresentation. To be a bit clearer with an example: if a graduate student performs an experiment, e.g. looking at growth differences between bacterial strains, there is often huge variation in the data at the beginning. This is often caused by mistakes, such as inoculating varying amounts of cells at the start, contaminating a sample, or making mistakes in media composition. With practice the variance typically narrows, and then one might detect significant differences. If one reports all the growth data, what we would consider the "better" runs will be drowned out by the rest. Even if you just dump the data without highlighting it much in the paper, the reviewer would have a hard time going through all of it, only to come to the conclusion that, yes, the trainee probably did not do a good job at first. Given the time constraints we operate under, it would make the process really cumbersome.
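
To put very rough numbers on that (again a hypothetical sketch, this time Python with numpy and scipy; the strains, growth rates and noise levels are all invented): a real difference between two strains can be statistically invisible if the error-prone early replicates are pooled with the later, reliable ones.

```python
# Hypothetical example: strain B really grows ~10% faster than strain A,
# but the first attempts carry large technical noise. All numbers invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

early_a = rng.normal(1.0, 0.5, 10)   # first attempts: large technical scatter
early_b = rng.normal(1.1, 0.5, 10)
late_a  = rng.normal(1.0, 0.05, 10)  # after practice: small technical scatter
late_b  = rng.normal(1.1, 0.05, 10)

# Reporting everything: the noisy early runs tend to swamp the real difference.
_, p_all = stats.ttest_ind(np.concatenate([early_a, late_a]),
                           np.concatenate([early_b, late_b]))

# Reporting only the practised runs: the real difference usually stands out.
_, p_late = stats.ttest_ind(late_a, late_b)

print(f"all replicates pooled:  p = {p_all:.3f}")
print(f"late replicates only:   p = {p_late:.3g}")
```

Of course, deciding which runs count as "botched" is exactly the judgement call where bias, or worse, can creep in.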

But I do understand the larger point: data scientists are more comfortable dealing with big data sets, and at least in theory, if everything is on the table, those folks could extract the data, perhaps not during the peer-review process but at some later point, and maybe see other patterns in there. But the really big issue from an experimental viewpoint is that the largest share will simply be low-quality data. If you ask any grad student, they will tell you that the most significant data are usually generated toward the end of their degree, when they have a) built up the skill to perform the experiments reliably and b) figured out all the ways they should not run the experiment.

As you can probably tell, I am a bit torn regarding the best way to report complex data. However, I do think that to make a case of fraud, the person committing it has to know that the form of reporting distorts the findings, i.e. deliberate deception has to be involved. Traditionally we use controls to account for bias (rather than offering all data sets, including those we deem failed experiments); of course, this is not foolproof either. From an outside view it is difficult to tell, of course. There is also the issue that much of it ultimately depends on trust: I trust that my students are reporting data the way they collected it, for example.

12 hours ago, Klaynos said:


This is certainly an issue, and the high level of competition makes it worse. I am not sure whether there are short-term solutions for it. However, one should keep in mind that ultimately the system is (slowly) self-correcting. Obviously, if you publish something interesting but biased, others will have difficulties building upon that data. Eventually newer findings will indicate that what was published before is probably not accurate or is missing some key criteria. Cases of outright fraud are often so far off base that they trigger retractions or even more serious sanctions; biased data, on the other hand, are often more borderline.


Thank you, Charon. You're being very circumspect, and I appreciate that. 40% of the respondents in the Nature survey above cited "fraud", while 70% cited "selective reporting". It would be difficult to ascertain how much of that selective reporting was "willful misrepresentation", which, as you point out, is difficult to tell from an outside view. I suspect that where the money is big (in pharmaceuticals, for example: https://www.nejm.org/doi/full/10.1056/nejmsa065779), selective reporting fraud is not that uncommon. You note that "ultimately the system is (slowly) self-correcting"; well, I suppose that's another thread entirely. Sorry if I've belabored the point. I'll end off. I appreciate your comments and others'.
