Peer review flawed?


Recommended Posts

There has to be more to this but I am not familiar enough to figure it out. Does anyone have info?

 

http://www.sciencemag.org/content/342/6154/60.full

 

Who's Afraid of Peer Review?

A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals.

On 4 July, good news arrived in the inbox of Ocorrafoo Cobange, a biologist at the Wassee Institute of Medicine in Asmara. It was the official letter of acceptance for a paper he had submitted 2 months earlier to the Journal of Natural Pharmaceuticals, describing the anticancer properties of a chemical that Cobange had extracted from a lichen.

In fact, it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless.

 


It's not that peer review is flawed; rather, peer review wasn't even performed at most of the journals that Bohannon tested.

 

 

As other commenters have pointed out, this isn't a problem specific to open-access journals, even though they were the only ones tested - no data are given on closed journals. There is clearly a problem with some journals not doing peer review at all, but it is probably safer to conclude that open-access journals make it easier to submit under fake credentials than that the failure to do peer review is specific to open access.


But this is a serious issue that needs to be addressed more fully. It ties directly into discussions like this one http://www.scienceforums.net/topic/78832-how-does-an-ordinary-person-know-whats-mainstream/ about how any lay person is supposed to know what is truly peer reviewed and what isn't. Not only that, but the average lay person probably has no idea what peer review even really means.

 

Another valid question is: is the above a root cause or a symptom of a greater problem? "Publish or perish" has been an overriding truth for many academics for quite some time now, and it is being taken to extremes in some parts of the world, such as China, where a fake-paper industry is burgeoning. http://www.americanscientist.org/science/pub/-472

 

That is, the pressure to increase one's publication count is so high that people are willing to buy fake or plagiarized papers, apparently written well enough to actually get published.

 

So the question is: are journals' review processes weak because the papers they receive are so weak, or are they receiving weak papers because their review processes are known to be weak? Some combination of both, I suspect, and it is really the entire scientific community that gets hurt. The more of these issues are found, the more credence it lends to the people who decry science as a conspiracy to defend evolution, global warming, relativity, etc.


In my opinion, peer review is still the best tool science has for vetting papers for factual content and sound conclusions. The problem isn't peer review itself; it's applying it in a rigorous and unbiased enough fashion, which clearly wasn't done in the above example. For now, at least, there is no substitute.


I think the issue with fake journals is less a problem within scientific communities, since a bad reputation spreads quickly through the community, but for laymen and journalists it may pose a problem. However, there are numerous crank websites and fake journals that are partisan, funded by think tanks, or run by outright crooks. The only thing one can do as a scientist is to be outspoken, and over time it usually becomes quite clear when a journal (even one that seems reputable) fails scientific standards. At least in the natural sciences, data will eventually trump any agenda one might have.

That being said, publication of substandard papers is on the rise. Publish or perish, together with fewer faculty positions and more graduates, is certainly one of the most important contributing factors, if not the most important one. Considering how cut-throat and competitive academia is, I am not sure whether there are easy fixes.

Some argue that the editorial process has to be more rigorous and only let high-quality work pass, but that has several drawbacks. First, it would require significantly more work, and for cutting-edge experimental research it is often not feasible. One thing some are calling for is the release of raw data, with reviewers asked to validate the calculations. Yet, considering that reviewing is a free community service and that as a researcher you are strapped for time as it is, I do not see that happening except in rare cases. In addition, this could also result in otherwise interesting papers being rejected.

 

Another school of thought is that the review process should, similar to PLOS ONE, just check whether the conclusions are sound and let citations take care of relevance. Even if referees miss things, over time the citations for a flawed conclusion will degrade.

 

In the end, as has already been mentioned, there is really no viable alternative.

Edited by CharonY

Sounds like a vanity press issue. Universities will probably start asking for quality assurance reviews of the journals. You'll have to be published within that subset for your paper to be recognized.

Edited by Endy0816

Remember also that peer review does not end once a paper has been accepted by a journal. Once published, the entire scientific community in the relevant field is free to, and usually does, attempt to replicate and build on the findings of the paper, if the data are sufficiently interesting to warrant further investigation. False positives will quickly come to light in this manner and may be openly challenged in other papers.

This second fail-safe mechanism incurs a cost, both in time and, possibly, in funding if a researcher bases a grant application on the flawed data. Presumably, though, if a grant rests absolutely on a premise that is not well established, then (s)he will first attempt to replicate the previous data before proceeding to study it any further.

Peer review is not perfect, but what is the alternative? Perhaps a better question to ask is: how can we educate researchers to think more critically? Maybe modify degree course content to include less rote learning of scientific facts and more critical thinking skills? (Actually, this modification ought to be made at all levels of education.)

Edited by Tridimity

It is quite shocking to read the extent of these fake peer-reviewed journals. In many cases an educated scientist will be able to tell whether an article is flawed, but not always. It is worrying that mainstream publishers (for me the most important is Elsevier) actually own some of these useless journals - which means that those rubbish results will also show up in their search engines!

 

I think that publishers of scientific articles should be non-profit institutes: not state-owned, but independent institutes without the profit motive.

 

Unfortunately, in addition to these journals that will publish practically anything, there is a second trend that is just as worrying (in my opinion). Renowned institutes set publication targets for professors and students. So even when a research project has essentially failed to yield publishable results (which happens quite often), the research group is pressured to produce something that can be published. This means that sometimes conclusions are extrapolated, or a subsection of the research is inflated into the main issue.

 

And if the research was successful, it is not uncommon for research groups to attempt to cut up the research into smaller chunks, just to increase the number of publications.


I think the result is driven by three main issues in academic science:

 

1) Publish or perish. Even in my relatively short time in academia, the number of papers a scientist is expected to produce per year has increased significantly - and as I moved from mid- to top-tier institutions, the level of output expected increased too. The sentiment of "you don't need to get it perfect, you need to get it done" is definitely a factor in scientific publishing, and it affects quality.

 

2) Journals competing on turnaround times. Journals compete to be the fastest to turn papers around. While some consequences, such as the removal of "major revisions" from a reviewer's options and its replacement with "reject, resubmit", are simply annoyances, I believe that editors are under pressure to sign up reviewers faster, which can lead to papers being sent to reviewers who aren't working in relevant fields. Reviewers, particularly those in their early careers like me, are under pressure to accept, both to stay in good favour with editors who may be reviewing our grants/papers/job applications and to boost our CVs with scientific service. I've certainly been sent papers outside my area of expertise to review, and had to say no.

 

3) Unscrupulous open-access publishers. While open-access science has many benefits, it also forces scientists to "pay to play" - however, you only pay when your article is accepted. A less-than-virtuous publisher can take advantage of the pressure scientists are under, make a tokenistic attempt (or no attempt at all) at peer review, and take the scientist's money when the work is inevitably accepted.

 

This may sound elitist, but I also think the problem is being substantially exacerbated by the huge groundswell of scientists coming from developing economies where there isn't the same culture of professional integrity, where practices like writing your own recommendation letters are commonplace, and by the intense competition created by so many people trying to break out of domestic circles into the global scientific community.


There is a difference of opinion concerning problems with science journals and the peer review process. On one hand, no matter how good a paper is or how much real discovery or insight it provides, it will very often be summarily rejected with a form letter, without review, if its subject and theme run contrary to mainstream theory within the journal's area of focus. For such non-mainstream proposals, lesser-known or non-mainstream journals are needed and should be used. This also means far fewer readers for the paper, and any real discovery in it may go unnoticed by the mainstream. On the other hand, if one's paper is a minor variation on, or proposed confirmation of, a mainstream proposal and is properly documented, it may be accepted for publication but not well read, because the implications of the variation may be too speculative or trivial. The referee may regard the paper as neither controversial nor objectionable to journal readers and let it through without serious review or consideration.

 

The present system is not good, but I think it is difficult to produce a better one.


It's funny, I used to think a few years back that the peer review process involved the reviewers attempting to replicate, in their own labs, the data presented in each and every manuscript submitted to them for review. How naive.


Actually, in some areas they do, to some extent. More commonly, papers that rely on modeling and/or statistics get replicated in silico, e.g. using different test or validation sets.
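For anyone unfamiliar with what "replicating in silico with a different validation set" means, here is a minimal sketch in Python. All names and data are made up for illustration: a paper reports a fitted slope, and a reviewer re-fits and re-checks the model on a different split of the (here, synthetic) data rather than redoing the wet-lab work.

```python
import random

random.seed(0)

# Synthetic data standing in for a paper's dataset: y = 2*x + noise
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(100)]
random.shuffle(data)

def fit_slope(points):
    # Least-squares slope through the origin: sum(x*y) / sum(x*x)
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, y in points)
    return sxy / sxx

# The "original" split used in the paper vs. a reviewer's independent split
original_train = data[:70]
reviewer_train = data[30:]

slope_orig = fit_slope(original_train)
slope_rev = fit_slope(reviewer_train)

# If the result is robust, both estimates land near the true slope of 2.0;
# a large disagreement between splits would be a red flag for the reviewer.
print(slope_orig, slope_rev)
```

The point is only that this kind of check is cheap: it needs the authors' raw data and model, not their lab.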



 

Yeah, I know they do sometimes. I used to think, though, that they would attempt to replicate all of the experiments from scratch (imagine how difficult, expensive and time-consuming this would be for, say, work involving genetically modified mice!) and would then reject the paper if they were unable to replicate the findings.

 

This does happen, in effect, but obviously not as part of the initial peer-review quality-control step, which is effectively a crude sifting mechanism. The main refutations come in the form of subsequent publications by other groups who may be unable to replicate the findings of the original paper. The citation index will generally act as a post-publication acid test of relevance, with papers whose findings are replicable but not so important to the field being stillborn. However, even this is not an infallible test of relevance - think, for example, of the revolutionaries throughout the history of science whose groundbreaking results were largely ignored by their peers and only received their due attention long after the scientist (natural philosopher) had died, all because their work was too revolutionary for the times.


Sadly, a large portion of replications never get published because they're not as "interesting". Though I'm not sure if that's due to peer review.

