
"How to become good at peer review: A guide for young scientists"


hypervalent_iodine


I came across this earlier today and thought it might be interesting to others here.

 

http://violentmetaphors.com/2013/12/13/how-to-become-good-at-peer-review-a-guide-for-young-scientists/

 

I've not had the opportunity to peer review anything or deal with the peer review process at all, so I am unsure if all of these points are really necessary. I'd be interested to hear what others who are more experienced with peer review have to say about it.

 

In one section of the article they bring up the anonymity of peer review, and I'm wondering if those who have been through the process consider this to be a drawback? It's not something I had considered, but I agree with the idea that it probably increases the tendency for bullying.


I'm currently waiting on reviews on a paper I submitted, and I want to emphasize the "timeliness" point. I submitted back in April, replied to reviews in August, and I'm still waiting on a reply from the reviewers. It's embarrassingly slow. This is from an IEEE technical journal.

 

PLOS ONE and a bunch of medical journals, on the other hand, say they'll take about a month or less. BMJ says the first decision will only take two or three weeks.

 

Another interesting point is the ineffectiveness of peer review. The Cochrane Collaboration says

 

At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research.

There have been a number of amusing studies where manuscripts are submitted with deliberate errors, which the reviewers usually miss. But I've never seen a viable alternative presented.


I'm currently waiting on reviews on a paper I submitted, and I want to emphasize the "timeliness" point. I submitted back in April, replied to reviews in August, and I'm still waiting on a reply from the reviewers. It's embarrassingly slow. This is from an IEEE technical journal.

 

PLOS ONE and a bunch of medical journals, on the other hand, say they'll take about a month or less. BMJ says the first decision will only take two or three weeks.

 

I suppose that comes down to the deadlines imposed by individual journals. In some chemistry journals I know of, the wait is a few months from submission to publication (all things going well). There are some reviewers who will keep sending back comments for correction for months on end until they are happy with the result. A friend in mathematics is currently experiencing something very similar to this:

 

[Image: Overly Honest Methods screenshot]

 

(For anyone who hasn't already seen it, Overly Honest Methods is quite amusing. This is a small collection of some of the best.)

 

On a slightly unrelated note, a friend of mine recently got his PhD thesis back from his reviewers for correction after a 3- to 4-month wait, and had his graduation date delayed by 6 months as a result.

 

 

There have been a number of amusing studies where manuscripts are submitted with deliberate errors, which the reviewers usually miss. But I've never seen a viable alternative presented.

 

Yes, I recall the one by the Science correspondent about lichen natural products. It's worrying, to say the least. There are people, such as the author of this blog, who do a stellar job of investigating academic fraud, but one person can only look at so many papers. I suspect a lot of it is that the people reviewing manuscripts simply do not wish to invest the time in doing a thorough job, and I can certainly empathise with that after having marked a stack of 3rd-year lab reports, but it is a problem. Perhaps one alternative would be to set up independent bodies whose sole job is to review papers submitted to journals?

 

Very recently, at my current university and also at the one I studied at, there were instances of two very high-ranking professors (one of them the former head of the School of Health Sciences) having major publications retracted because the experiments simply didn't exist and the data in the paper was incredibly inconsistent. Both should have been picked up at the peer review stage, but the problems weren't discovered until someone within the groups contacted officials at their respective universities and the universities began their own internal investigations. In at least one case, the NHRC grants that had been issued to the group on the basis of the phoney paper also had to be repaid.


In my experience in mathematics, reviewers tend to look for originality, novelty and relevance to the field. The higher-impact journals tend to be fussier about the importance and general interest of a paper than the not-so-high-impact journals. Finding mistakes, unless they are obvious, tends not to be the job of the reviewer.

 

Anonymity of peer review is a good thing; however, in smaller areas of science it may be obvious who the reviewers are!


John Cuthber, on 15 Dec 2013 - 10:10 AM, said:

There are many criticisms of peer review and many problems in its implementation.

There's only one real point in its favour: it works better than anything else that has been tried.

I would think the ultimate peer review is peers conducting the same experiment in a research paper and seeing if the results come out in agreement.


I would think the ultimate peer review is peers conducting the same experiment in a research paper and seeing if the results come out in agreement.

Replication studies are very difficult, since many papers do not provide details of materials used and methods descriptions are often incomplete. Journals usually won't publish replications either, since they're not novel.

 

Yes, I recall the one by the Science correspondent about lichen natural products. It's worrying, to say the least. There are people, such as the author of this blog, who do a stellar job of investigating academic fraud, but one person can only look at so many papers. I suspect a lot of it is that the people reviewing manuscripts simply do not wish to invest the time in doing a thorough job, and I can certainly empathise with that after having marked a stack of 3rd-year lab reports, but it is a problem. Perhaps one alternative would be to set up independent bodies whose sole job is to review papers submitted to journals?

 

Very recently, at my current university and also at the one I studied at, there were instances of two very high-ranking professors (one of them the former head of the School of Health Sciences) having major publications retracted because the experiments simply didn't exist and the data in the paper was incredibly inconsistent. Both should have been picked up at the peer review stage, but the problems weren't discovered until someone within the groups contacted officials at their respective universities and the universities began their own internal investigations. In at least one case, the NHRC grants that had been issued to the group on the basis of the phoney paper also had to be repaid.

I don't think it's just intentional fraud or sloppy reviewers. This study of reviewing quality was amusing, for instance, since it tried to train reviewers to spot errors. Reviewers, knowing they were being tested, then looked for errors; they missed most of them. They weren't errors that were signs of deliberate fraud but just sloppy research.

 

On the other hand, if you take papers already published by researchers from prestigious departments and relabel them as being from the "Tri-Valley Center for Human Potential," they will be rejected for "serious methodological flaws" by the same journals that published them.


Replication studies are very difficult, since many papers do not provide details of materials used and methods descriptions are often incomplete. Journals usually won't publish replications either, since they're not novel.

Organic synthesis often requires that you replicate procedures from journals to achieve whatever synthetic goal you have in mind. Protocols are usually sufficiently detailed (although they tend to skimp on specific technical/equipment information); however, all too often the results cannot be reproduced (this is more problematic in lower-impact journals) and the corresponding authors are rarely helpful.

 

This is a fairly niche example of where repetition of other works is commonplace. I doubt, for example, that many groups would spend the time or money repeating a complex natural product total synthesis simply to see if they could.

 

I don't think it's just intentional fraud or sloppy reviewers. This study of reviewing quality was amusing, for instance, since it tried to train reviewers to spot errors. Reviewers, knowing they were being tested, then looked for errors; they missed most of them. They weren't errors that were signs of deliberate fraud but just sloppy research.

 

On the other hand, if you take papers already published by researchers from prestigious departments and relabel them as being from the "Tri-Valley Center for Human Potential," they will be rejected for "serious methodological flaws" by the same journals that published them.

I suspect that the lack of difference between the two groups in the study is a result of the training interventions being short, and that something longer and more intensive would produce different results. Again, this leads me to think that having dedicated, fully trained staff reviewing papers would be a better method than relying on academics with very little spare time or motivation (in general).

 

I am unsurprised by the second point.


I suspect that the lack of difference between the two groups in the study is a result of the training interventions being short, and that something longer and more intensive would produce different results. Again, this leads me to think that having dedicated, fully trained staff reviewing papers would be a better method than relying on academics with very little spare time or motivation (in general).

Fair enough. I'd be interested to see a comparison between a professional methodological or statistical reviewer and the average harried and overwhelmed academic reviewer.


I have been reviewing quite a bit and I think I would agree with most points.

 

Some comments though:

Timeliness is an issue, but it is very tricky to resolve. One has the editorial process to consider (including finding reviewers in the first place, and then reading the reviews and rendering a verdict), but as a reviewer the problem is often finding the time to do it properly. Ideally, you want a few hours to read the whole thing, read up on the recent literature in case you (or the authors) have missed something, etc.

Practically, reviewers (and often editors as well) are providing a free service but are busy themselves (between classes, organizing the lab, doing research, trying to get funding, mentoring students/postdocs, department work, etc.).

Finding the hours to do a proper review can be challenging at times.

Reviews are especially tricky in multidisciplinary areas, or in areas with a more applied direction but without a strong theoretical framework. In the biomedical sciences, for example (which tend to be high-risk, high-reward research), precise mechanisms are often not known, or are extrapolated in order to find associations of diagnostic value or to define new treatments (to give some random examples). Even if a study is statistically sound, it is not trivial to assess whether it will ultimately be validated.

An ideal panel that would be able to assess every aspect of a study (analytical methods, cell types, statistical methods, experts on specific pathways, etc.) would be very unwieldy, and would need additional experts who have specialized in multidisciplinary studies to pull all the individual aspects together into a bigger picture. Obviously, that amount of effort would not be sustainable and could delay further research significantly. In some cases it is easier to throw out all the good and bad stuff and let the community figure out which is which. Admittedly, with the ever-increasing number of publications, that is getting harder and harder.

 

Replicate findings:

This is basically impossible in most of the bio field, but not necessarily due to lack of info. Conducting the experiments tends to take on the order of months, more for complex studies (e.g. establishing a cell line alone can take up to half a year). In addition, no one would pay for the work in the first place (molecular biology is notoriously expensive). In the end, the validity of a study in these areas has to be assessed over time, and demonstrated by subsequent work and citations.

 

Professional reviewers:

It sounds like an interesting proposal on the surface. However, without doing active research it will be hard to do more than the most superficial review. Of course, things like statistical validity can be assessed relatively easily (although without knowing the analytical approach, a reviewer may have a hard time figuring out, for example, whether the data sets are dependent or not). But if you are not an expert in the area, you will miss the fact that, say, the method only has a certain dynamic range and hence the presented data look just too neat, or that a particular cell line has the propensity to behave a certain way under the given experimental conditions, etc.

These kinds of things are widely known within the given communities, but a professional reviewer who is not up to date with current lab practices and techniques will have a hard time doing a proper review. And in my assessment, the longer they are away from active research, the harder it gets. It will depend on the area, of course: the stronger the theoretical background, the easier I presume it is. On the flip side, the more experimental/empirical the area is, the more it will rely on experience.

