
Hmm...Fake Peer-Reviewer?


DrmDoc

Recommended Posts

According to this Washington Post article, Scientific World Journal has uncovered an impostor among its reviewers :ph34r:. Posing as French engineering professor Xavier Delorme, the impostor managed to review or edit three papers before the caper was uncovered. This, according to the article, comes on the heels of a 2014 exposure of peer-review fraud involving some 60 papers. Shocking!

Edited by DrmDoc

I know how important the peer-review process is to our scientific endeavors and I agree with it; however, I don't necessarily accept the results of this process without a critical analysis of my own. I'm pretty sure most of us here engage in a similar analysis when reviewing research papers of consequence in our particular field of interest. I think the peer-review status only gives an article some initial credibility that we should later verify through our independent efforts where possible.


Illuminating; I was not aware of the Sokal Affair. Although Sokal's experiment confirmed a presumption, I think it argues for the rigor of an independent peer-review process, which the targeted journal, Social Text, did not practice at the time. Unfortunately, I think his experiment also reveals one illicit path to getting published: tailoring one's submission to the ego and philosophies of the publisher.



 

I think very few researchers accept papers in their area of expertise at face value. It is generally accepted that peer review is a relatively low bar, one that indicates a paper has roughly the qualities that make it worth scrutinizing and using further.

 

Edit: I should add that even in the hard sciences, and even in reputable journals, papers occasionally get through that clearly shouldn't. This includes at least one molecular biology paper that promoted creationism in a very weird way. It was subsequently retracted after other researchers pointed out how stupid it was, but it somehow still passed peer review.

Edited by CharonY

 


 

Indeed: a low bar held, it seems, in ever-diminishing esteem by everyone except those publishers whose revenue depends on access to reputable articles.


I think the peer-review status only gives an article some initial credibility that we should later verify through our independent efforts where possible.

This is exactly how one should view journal articles. Over the years, many articles that are not so good get published. I take publication to mean only that a paper is original and of a reasonable standard, not that it is right about everything!



 

I agree. Time and again I've seen the peer-reviewed status of a paper's results held sacrosanct against verifiable, commonsense analysis and contrary evidence.


  • 3 months later...

In addition to fake peer-reviewers, it seems we have false research to worry about. This recent Veritasium video explores the possibility of most published research being wrong. No wonder I had never heard of the pentaquark, and I knew eating chocolate was never a good weight-loss regimen. Enjoy!

Edited by DrmDoc


 

I wonder if it would be a good idea to "integrity test" scientists the way police officers are tested.

 

RANDOM INTEGRITY TESTING:

Random integrity tests are designed to observe and evaluate an officer’s conduct in situations in which a specific set of circumstances has been created that requires police intervention. Several major police departments, including the Los Angeles Police Department (LAPD), the New York City Police Department (NYPD), and the New Orleans Police Department (NOPD), routinely conduct random integrity tests of their officers to determine if their conduct in handling their official duties is appropriate.

There exist two schools of thought on this type of orchestrated integrity test. Many consider such action distasteful and unnecessary. Others argue that this type of testing is necessary to ensure that law enforcement officers do not abuse their powers and that the random testing of officers is a legitimate and necessary safeguard in maintaining integrity in a police organization. The purpose of this article is not to resolve the tension between these competing points of view, but instead to provoke discussion of the issues surrounding this process.

In the 1970s, ABC News conducted an integrity test in Miami, where 31 wallets containing money and identification were turned over by role players to 31 police officers. Nine of the officers kept the money and were subsequently fired and/or prosecuted.

 

http://llrmi.com/articles/legal_update/le_integrity_tests.shtml

 

Speaking as a former research scientist (Biochemistry, Immunology, Molecular Genetics), I was under pressure to publish in order to put myself in a better position for a new short-term contract. I would, under normal conditions, prefer those experiments that supported my hypotheses rather than those that negated them. Of course, my emphasis was on repeatability and reliability, and I would explore and then mention anomalies whenever I saw them. I was a young idealist and honestly reported my results, like the vast majority of scientists with whom I collaborated or communicated. Only one of my papers negated a hypothesis rather than supporting it, but I did feel that more experiments should aim to disprove hypotheses.

 

Not all research needs statistical proof, which undercuts one of the premises of the video. However, there is a need for researchers to rigorously test the experiments of others in their field. Confirming the findings of another researcher in your small area (as a PhD student or post-doc) is an excellent form of peer review. If the findings are not reproducible, then more papers should be retracted as a consequence. That is just ethical behaviour, in my opinion.
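The video's "most published research is wrong" claim rests on a simple bit of Bayesian arithmetic: even when everyone is honest, the fraction of statistically significant findings that are actually true depends on how plausible the tested hypotheses were to begin with, the significance threshold, and statistical power. A minimal sketch of that calculation (the parameter values are illustrative assumptions, not data from any study):

```python
def positive_predictive_value(prior, alpha=0.05, power=0.8):
    """Fraction of statistically significant ("positive") results that are true.

    prior -- assumed probability that a tested hypothesis is actually true
    alpha -- significance threshold (false-positive rate)
    power -- probability of detecting a true effect
    """
    true_positives = prior * power          # true hypotheses that test positive
    false_positives = (1 - prior) * alpha   # false hypotheses that test positive
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is actually true, roughly a third
# of the "significant" findings in the literature are false positives:
print(round(positive_predictive_value(prior=0.1), 2))  # 0.64
```

So speculative fields (low prior) fill the literature with false positives even before any misconduct, which is the statistical side of the case for independent replication made above.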


Integrity tests are an interesting idea, but who would administer them, and how might they be implemented across the various disciplines? Most certainly not, I think, the institutions funding and pursuing the research: they are invested in a process in which negative results could affect future funding and institutional prestige. I agree that the most effective solution is independent verification by reproduction, but I also acknowledge that there are few incentives to do so. Perhaps the only solution here is to incentivize independent verification that excludes the sponsoring institution.

Edited by DrmDoc
