
The Future of Scholarly Peer Review – A Road to Mediocrity?


Will9135

Recommended Posts

“Peer review is the evaluation of work by one or more people with similar competences to the producers of the work (peers). It [aims to] function as a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication.” [Wikipedia, May 2019]

I am an electrical engineer with a PhD in semiconductor physics who has been working in industry for 25 years. Publications in my field are typically subject to a single-blind peer review, i.e. the names of the reviewers are hidden from the authors, while the names in all other relations between authors, reviewers and editor are disclosed. Unfortunately, I have learned that many publications in my field are superficial, inconsistent, incomprehensible or even incorrect. Minor progress that could easily be summarised on 3-4 pages is unnecessarily inflated to 9-10 pages by some authors, and reported data, results and conclusions often contradict one another.

At the same time, I have learned as an author that the quality of peer review itself has become rather poor.

  • Unfortunately, there are many reviewers who do not carefully read the papers they assess. Some are so biased by their own work or by mainstream topics that they are simply unable to comprehend papers addressing new or special topics. As a result, they complain about missing information that is actually included, and even extensively explained, in the submitted papers.

  • While some reviewers find a submitted paper too long, others recommend extending the very same paper.

  • Some reviewers criticise the vocabulary used by authors without having looked up the criticised terms in an English dictionary themselves.

  • Some reviewers demand an extensive collection of references even when novelties are reported for the first time, i.e. when the corresponding topic has not yet been covered by other publications for whatever reason. At the same time, these reviewers claim that the novelty disclosed to them in the submitted paper is “not new” without providing a single reference as proof. They completely ignore how difficult it is to prove a novelty on the one hand and how easy it is to prove the well-known on the other. – Now, a reader could object that “novelty” often depends on the individual perspective. So let me give you an example: a paper based on a recently granted patent was submitted successively to two journals (Elsevier and IEEE) and was reviewed by 7 reviewers in total. The patent was not listed as a reference in that paper. 5 reviewers considered the subject of the paper “not new”, and none of those 5 provided any reference as proof.

  • Some reviewers indiscriminately demand device simulations or special measurements, although the presented results have been obtained by other well-established methodologies and are absolutely plausible and consistent.

  • Some well-known journals apply unequal criteria where the documentation of the instruments and methodologies used is concerned. In some published papers reporting results obtained by circuit simulation, the simulator used is not even mentioned, whereas in other submitted papers the results obtained by a specified state-of-the-art circuit simulator with production-level models are considered by the reviewer to be "crude".

  • Some reviewers try to influence the topic or character of a paper. For example, through the requirements they impose, they urge the author(s) of an original research paper to revise it into a tutorial.

  • Some reviewers make far-fetched speculations and fabricate reasons in order to deliberately reject an unwanted paper.

  • To some reviewers and editors, the reputation or (assumed) ethnic background of the author(s) appears to be more important for their assessment than the contents of the submitted paper.

Similar experiences were summarised a few years ago by Richard Smith (“The peer review drugs don’t work”, 28 May 2015).

All renowned journals and conferences point out the ethical guidelines that authors have to comply with. As far as reviewers are concerned, however, it seems to me that they can act just as they please. While such journals and conferences are eager to point out the consequences for authors who do not adhere to their ethical guidelines, it is very difficult for an author to submit a complaint about an unqualified review and have it independently and objectively assessed and (if needed) corrected. And if an author does succeed in submitting such a complaint, the chances are good that it will not be read carefully and that the reviewer in question will even be defended by the editor or the TPC chair.

As a consequence, I doubt that peer review nowadays still helps “to maintain quality standards, improve performance, and provide credibility”. Quite the contrary: driven by the longing for recognition and the prestige of a high publication count on the one hand, and protected by the anonymity of the peer review process and the lack of transparency and checks on the other, the original purpose of peer review is more and more undermined by conceited and overconfident reviewers, busy and uncritical editors, and authority-biased and profit-driven publishers. This is no road to excellence, but to mediocrity.

What can be done to improve the quality of peer review? Here are a couple of proposals that you may want to comment on or extend:

  • Single-blind reviews are unbalanced and unfair. They should be replaced at least by double-blind or triple-blind reviews or, even better, by open reviews.

  • For transparency reasons, reviewer guidelines should be added to the author’s kit of journals and conferences.

  • As authors are required to provide references or conclusive proof for their statements and conclusions, reviewers should also be required to provide references or conclusive proof when they disagree. This is not only fair, but also helps resolve possible misunderstandings on both sides. If reviewers do not provide references or conclusive proof, their comments should be disregarded by the responsible editor or TPC chair.

  • In order to guard against authority bias and personality cults, the number of extended abstracts a reviewer is allowed to review per conference should be limited to 10, and the number of full-sized papers a reviewer is allowed to review per journal or conference should be limited to 1 per month. Furthermore, scientists and academics should not be allowed to participate as reviewers in more than 4, or as editors or TPC chairs in more than 2, journals or conferences per year. (A code sketch of these caps follows after this list.)

  • In order to guard journals and conferences against ethnic bias, editors and TPC chairs should select reviewers with diverse ethnic backgrounds to review submitted papers and extended abstracts.
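To make the caps in the fourth proposal concrete, here is a minimal sketch in Python of how a submission system might check them. The numbers are the ones proposed above; the function name, field names and data layout are hypothetical illustrations, not any real system's API.

```python
from collections import Counter

# Caps taken from the proposal above; everything else here is illustrative.
MAX_ABSTRACTS_PER_CONFERENCE = 10
MAX_PAPERS_PER_VENUE_PER_MONTH = 1
MAX_REVIEWER_VENUES_PER_YEAR = 4
MAX_EDITOR_VENUES_PER_YEAR = 2

def within_proposed_limits(assignments):
    """Check one person's review load for a year against the proposed caps.

    Each assignment is a dict such as
    {"venue": "ConfX", "kind": "abstract", "month": 5, "role": "reviewer"},
    with kind in {"abstract", "paper"} and role in {"reviewer", "editor"}.
    """
    abstracts = Counter(a["venue"] for a in assignments if a["kind"] == "abstract")
    papers = Counter((a["venue"], a["month"]) for a in assignments if a["kind"] == "paper")
    reviewer_venues = {a["venue"] for a in assignments if a["role"] == "reviewer"}
    editor_venues = {a["venue"] for a in assignments if a["role"] == "editor"}

    return (all(n <= MAX_ABSTRACTS_PER_CONFERENCE for n in abstracts.values())
            and all(n <= MAX_PAPERS_PER_VENUE_PER_MONTH for n in papers.values())
            and len(reviewer_venues) <= MAX_REVIEWER_VENUES_PER_YEAR
            and len(editor_venues) <= MAX_EDITOR_VENUES_PER_YEAR)
```

A conference or journal system could run such a check before offering a reviewer another assignment, enforcing the limits mechanically instead of relying on self-restraint.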

Regards

Will

 


1 hour ago, Will9135 said:

 

  • Single-blind reviews are unbalanced and unfair. They should be replaced at least by double-blind or triple-blind reviews or, even better, by open reviews.

 

Triple blind - so even the editors don't know who is reviewing what? How would we ensure the reviewers aren't mortal enemies or best buddies with the authors?

Maybe add preprint publication as a formal part of the publication process, to gain an informal peer review from people interested in the field. I went to a workshop recently at which an ex-Nature editor was arguing for this.


With a properly developed and nurtured ethic, one that appears to have withered somewhat in the last few decades, I consider that blind reviewing should be unnecessary except in a minority of cases.

One practice I have noticed with papers is the great increase in statistical methods employed by writers with insufficient statistical maturity to discuss or defend the results so obtained.
I think this is because many places now have 'works statisticians' who do the actual analysis or supervise the entry of the data into computer programs that the writers cannot understand by themselves.
This is then compounded by peer reviewers who may be the world's experts in the subject matter of the paper, but are still not savvy statisticians.
We can't, after all, be good at everything.


1 minute ago, studiot said:

With a properly developed and nurtured ethic, one that appears to have withered somewhat in the last few decades, I consider that blind reviewing should be unnecessary except in a minority of cases.

One practice I have noticed with papers is the great increase in statistical methods employed by writers with insufficient statistical maturity to discuss or defend the results so obtained.
I think this is because many places now have 'works statisticians' who do the actual analysis or supervise the entry of the data into computer programs that the writers cannot understand by themselves.
This is then compounded by peer reviewers who may be the world's experts in the subject matter of the paper, but are still not savvy statisticians.
We can't, after all, be good at everything.

Do papers like that only get sent to subject matter experts, or do they also get sent to people who do have expertise in statistical analysis? Because I assume both aspects are in need of peer review.


3 minutes ago, Strange said:

Do papers like that only get sent to subject matter experts, or do they also get sent to people who do have expertise in statistical analysis? Because I assume both aspects are in need of peer review.

Statistical analysis was only one example.

In my view the review panel should have as wide an experience as possible.

Perhaps that way a proper trial of Thalidomide might have been conducted and all those tragedies avoided.


12 minutes ago, studiot said:

Statistical analysis was only one example.

In my view the review panel should have as wide an experience as possible.

Well, yes. Which is why I asked about that example. Does this answer mean that your understanding is that currently peer review is limited to the core subject of the paper and not statistical analysis (or whatever other expertise could usefully be applied)?

 


Perhaps the way to improve peer review is to not make it blind. Circumstances are admittedly different, but I once worked in the nuclear power industry, where every engineering document was required to be verified by a separate engineer. This was a form of peer review. The difference was that the name and signature of the reviewing engineer were placed on the publication along with the name and signature of the originating engineer. If the document turned out to be wrong, the reputations of both the originator and the reviewer were on the line.


10 minutes ago, OldChemE said:

Perhaps the way to improve peer review is to not make it blind. Circumstances are admittedly different, but I once worked in the nuclear power industry, where every engineering document was required to be verified by a separate engineer. This was a form of peer review. The difference was that the name and signature of the reviewing engineer were placed on the publication along with the name and signature of the originating engineer. If the document turned out to be wrong, the reputations of both the originator and the reviewer were on the line.

Interesting idea, but isn’t scientific peer review more like getting an engineer from a competitor to review your document? (I don’t think this contradicts your suggestion but it does add a twist)


The OP makes good points. I'd like to add that for grant reviews the situation is even worse. During grant season, panels often have to work through dozens of applications or more. They have to evaluate work that is only proposed. There is generally no means of discussion or rebuttal between applicant and reviewer. Reviewers can be from the wrong field or, in some cases, even direct competitors for the same pool of money.


9 hours ago, Strange said:

Does this answer mean that your understanding is that currently peer review is limited to the core subject of the paper and not statistical analysis (or whatever other expertise could usefully be applied)?

I know some of the big publishing houses have in-house statisticians to look over the stats (but not if it's a paper primarily about stats).


The problems with "peer review" are even more obvious in present-day Egyptology. Egyptologists claim to be doing science despite their lack of systematic application of modern knowledge and science to the study of the pyramids. They normally just refuse testing and have even chided scientists in the last couple of years for proposing it (one offered to fly a balloon drone into the pyramid to see what lay beyond). Across the board there are tests and measurements they refuse to gather. Believe it or not, neither stratigraphic archaeology nor forensic testing has ever been performed in any pyramid!

But now there is a far worse problem that shows the utter failure of peer review, and this applies to every discipline. Infrared data was finally gathered starting in October 2015, and this information has NEVER been supplied to peers. They are withholding it from the public because it apparently does not agree with what Egyptologists believe (the powers that be have said no data that disagree with the paradigm will ever be released). There is no mechanism for distributing it to peers, so their review of the data is impossible. If they did distribute the data to all the Egyptologists they could identify, the information would almost immediately hit the internet.

"Peer review" is an irrelevancy.  Reality is seen principally in experiment and the opinion of fools and scholars alike has no effect on experiment and no causation of reality.  This is likely the cause of the failure in education and the cause of soup of the day science which is training the general public to ignore sound and flaky science as well.  We are rushing headlong into a dangerous future where the roadsigns are determined not by the road ahead but by peers describing the road behind.  

We are in serious trouble but it is not well seen.  


While the peers have not even seen the data, it is apparent that the powers that be are investigating the void discovered 160' south of the NE corner of the Great Pyramid.

 

[Image: DSC00050.jpg]

 

In the center right of this picture can be seen what is most probably an endoscope guide used to discover the void they never publicly admitted was causing the heat anomaly, and which led to the proposal to insert a drone balloon. One of the Egyptologists even announced there was a natural fissure in live rock behind here, yet no such observation can be made from the exterior of the pyramid.

So there is a stalemate where even peers aren't allowed the data.   I suppose next we'll have loyalty oaths and non-disclosure agreements.  Peerhood will be stripped from those who dare to cite facts and evidence.   

These are dangerous times largely because of the perpetuation of "scientific" beliefs that date to the 19th century.  

On 5/19/2019 at 3:42 AM, Will9135 said:

 

  • Single-blind reviews are unbalanced and unfair. They should be replaced at least by double-blind or triple-blind reviews or, even better, by open reviews.

  • For transparency reasons, reviewer guidelines should be added to the author’s kit of journals and conferences.

  • As authors are required to provide references or conclusive proof for their statements and conclusions, reviewers should also be required to provide references or conclusive proof when they disagree. This is not only fair, but also helps resolve possible misunderstandings on both sides. If reviewers do not provide references or conclusive proof, their comments should be disregarded by the responsible editor or TPC chair.

  • In order to guard against authority bias and personality cults, the number of extended abstracts a reviewer is allowed to review per conference should be limited to 10, and the number of full-sized papers a reviewer is allowed to review per journal or conference should be limited to 1 per month. Furthermore, scientists and academics should not be allowed to participate as reviewers in more than 4, or as editors or TPC chairs in more than 2, journals or conferences per year.

  • In order to guard journals and conferences against ethnic bias, editors and TPC chairs should select reviewers with diverse ethnic backgrounds to review submitted papers and extended abstracts.

 

 

 The only way to fix "peer review" is to eliminate it.

If anyone wants to know whether some scientist is real in the eyes of his peers, he can just google it. Or the same conferences and meetings can be used to rate the peers instead of their studies or experiments. Obviously, experts in every field are the individuals we need to seek out for opinions about the interpretation of experiment, experiment quality, and the meaning of experiment.

 

I would propose that peer review be scrapped and a new step added: "Metaphysical Implications".


2 hours ago, swansont said:

One should note (and perhaps be concerned) that we have people who are ostensibly scientists/professionals who are providing anecdotes rather than data in their critique of the system.

That is a fair point, too. The big issue here is that there are not a lot of good ways to properly test the impact of peer review, and perhaps as a consequence there are a lot of different models out there in the publishing world. So far, single-blind is the most prevalent one, but others have been around, and there is no conclusive evidence (to my knowledge) that any is superior to the others. Editorial preferences have a big impact on how peer review is evaluated, and there are also a lot of field-specific differences.

That being said, the OP mentioned a lot of points that quite a lot of researchers will find themselves in agreement with, either as authors or as reviewers. A couple of points, perhaps, on the suggestions, as these are more difficult than the criticisms.

On 5/19/2019 at 2:42 AM, Will9135 said:

Single-blind reviews are unbalanced and unfair. They should be replaced at least by double-blind or triple-blind reviews or, even better, by open reviews.

It has not been shown that open reviews are inherently better. It does seem that in some cases referees are a bit "nicer", which does not inherently lead to better-quality papers. Ideally, reviewers should help improve the quality of publications rather than make handwavy suggestions or only demand more work. However, that often takes more time commitment than one has as a reviewer.

 

On 5/19/2019 at 2:42 AM, Will9135 said:

For transparency reasons, reviewer guidelines should be added to the author’s kit of journals and conferences

A reviewer's guide is provided by pretty much all the journals I have published in (in different disciplines).

 

On 5/19/2019 at 2:42 AM, Will9135 said:

As authors are required to provide references or conclusive proof for their statements and conclusions, reviewers should also be required to provide references or conclusive proof when they disagree. This is not only fair, but also helps resolve possible misunderstandings on both sides. If reviewers do not provide references or conclusive proof, their comments should be disregarded by the responsible editor or TPC chair.

Most reviewers provide references; if not, one can attack that in a rebuttal (which, for example, is usually not available for grant reviews).

On 5/19/2019 at 2:42 AM, Will9135 said:

In order to guard against authority bias and personality cults, the number of extended abstracts a reviewer is allowed to review per conference should be limited to 10, and the number of full-sized papers a reviewer is allowed to review per journal or conference should be limited to 1 per month. Furthermore, scientists and academics should not be allowed to participate as reviewers in more than 4, or as editors or TPC chairs in more than 2, journals or conferences per year.

Well, I doubt authority bias plays a role here. However, quite a few folks I know accept all review requests to bolster their CV but let a postdoc (or even just a grad student) have a first read. In my mind this should not happen.

 

On 5/19/2019 at 2:42 AM, Will9135 said:

In order to guard journals and conferences against ethnic bias, editors and TPC chairs should select reviewers with diverse ethnic backgrounds to review submitted papers and extended abstracts.

I do not necessarily disagree; however, depending on the topic, finding suitable reviewers can be difficult enough.


8 hours ago, cladking said:

While the peers have not even seen the data, it is apparent that the powers that be are investigating the void discovered 160' south of the NE corner of the Great Pyramid.

 

[Image: DSC00050.jpg]

 

In the center right of this picture can be seen what is most probably an endoscope guide used to discover the void they never publicly admitted was causing the heat anomaly, and which led to the proposal to insert a drone balloon. One of the Egyptologists even announced there was a natural fissure in live rock behind here, yet no such observation can be made from the exterior of the pyramid.

So there is a stalemate where even peers aren't allowed the data.   I suppose next we'll have loyalty oaths and non-disclosure agreements.  Peerhood will be stripped from those who dare to cite facts and evidence.   

These are dangerous times largely because of the perpetuation of "scientific" beliefs that date to the 19th century.  

 The only way to fix "peer review" is to eliminate it.

If anyone wants to know whether some scientist is real in the eyes of his peers, he can just google it. Or the same conferences and meetings can be used to rate the peers instead of their studies or experiments. Obviously, experts in every field are the individuals we need to seek out for opinions about the interpretation of experiment, experiment quality, and the meaning of experiment.

 

I would propose that peer review be scrapped and a new step added: "Metaphysical Implications".

 


Moderator Note

Just a gentle reminder that we don't want to get bogged down in the specifics of one paper / discovery / etc., lest the thread get derailed.

 

On 5/19/2019 at 11:49 AM, Prometheus said:

Triple blind - so even the editors don't know who is reviewing what? How would we ensure the reviewers aren't mortal enemies or best buddies with the authors?

In the triple-blind review I envision, editors, reviewers and authors must not be with the same company or organization, and the identities of authors, reviewers and editors are hidden from each other. The responsible editor selects peers to review a submitted paper from a pool of reviewers solely on the basis of their qualifications, not their names or affiliations. Since the identities of the authors are not disclosed to the reviewers, reviewers can only speculate about the identity of the author(s) based on the contents of the submitted paper. And since reviewers do not know the identity of the responsible editor, they cannot expect to be protected by the editor if they deliver a superficial review. Likewise, the responsible editor has no interest in preferring one review over another. If, in turn, authors are less self-centred and do not add references predominantly to their own papers, there is little chance for a reviewer to identify a “mortal enemy” or a “best buddy” among the authors of a submitted paper. In this way (I hope), the imbalance and unfairness of the single-blind review can largely be avoided. A code sketch of the assignment step follows below.
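To make the assignment step concrete, here is a minimal sketch in Python of blinded, qualification-based reviewer selection as I imagine it. Everything here (the function, the field names, the topic-overlap heuristic) is a hypothetical illustration, not a description of any real submission system.

```python
import random

def assign_reviewers(paper_topics, paper_affiliation, pool, n=3):
    """Pick n reviewers from the pool by qualification overlap, never by name.

    Each pool entry is a dict such as
    {"id": "R17", "topics": {"semiconductors", "esd"}, "affiliation": "OrgA"}.
    Only opaque reviewer ids are returned, so neither the editor nor the
    authors learn who the reviewers are, and reviewers from the authors'
    own organization are excluded up front.
    """
    topics = set(paper_topics)
    eligible = [r for r in pool
                if r["affiliation"] != paper_affiliation  # conflict-of-interest filter
                and r["topics"] & topics]                 # qualification match
    random.shuffle(eligible)  # break ties randomly, not by name or seniority
    # stable sort keeps the random tie-breaking while ranking by topic overlap
    eligible.sort(key=lambda r: len(r["topics"] & topics), reverse=True)
    return [r["id"] for r in eligible[:n]]
```

The point of the sketch is merely that the matching criterion (topic overlap) and the conflict filter (affiliation) never touch the reviewers' names, so the selection itself stays blind.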

Regards

Will

On 5/19/2019 at 11:36 PM, beecee said:

Nothing is perfect, but the general scientific review system is the best we have and, in the majority of cases throughout history, has been of great benefit.

I am sorry, but this argument is too simple to be taken seriously. If this reasoning were valid, there would be no continuous improvement and no advances in science at all.

Regards

Will

On 5/20/2019 at 2:11 AM, OldChemE said:

Perhaps the way to improve peer review is to not make it blind. Circumstances are admittedly different, but I once worked in the nuclear power industry, where every engineering document was required to be verified by a separate engineer. This was a form of peer review. The difference was that the name and signature of the reviewing engineer were placed on the publication along with the name and signature of the originating engineer. If the document turned out to be wrong, the reputations of both the originator and the reviewer were on the line.

 

On 5/20/2019 at 2:23 AM, Strange said:

Interesting idea, but isn’t scientific peer review more like getting an engineer from a competitor to review your document? (I don’t think this contradicts your suggestion but it does add a twist)

I think the example given by OldChemE speaks in favour of an open review. Of course, even in an open review, an author's paper should not be reviewed by a peer from the same company or organization.

Regards

Will

On 5/20/2019 at 3:31 AM, CharonY said:

I'd like to add that for grant reviews the situation is even worse. During grant season, panels often have to work through dozens of applications or more. They have to evaluate work that is only proposed. There is generally no means of discussion or rebuttal between applicant and reviewer. Reviewers can be from the wrong field or, in some cases, even direct competitors for the same pool of money.

The same applies to the extended abstracts required by (some) conferences. In order to reduce the overall workload for the reviewers, many conferences ask authors to submit only an extended abstract of their paper. If the final paper has already been worked out and is complete, the authors have to spend extra effort preparing an extended abstract. If the authors have not even begun to work on the final paper (which happens very often), the abstract is likely to be superficial and overloaded with exaggerated promises about the contents of the final paper. In any case, extended abstracts cannot cover the complete contents of the final paper. Consequently, a number of open questions almost always remain after the review. There is no means of discussing and answering these open questions between reviewer and author. As a result, the successful submission of a conference paper depends not only on its scientific contents but also on the advertising skills of its authors and on the reputation of the authors (“authority bias”).

Regards

Will

On 5/20/2019 at 6:11 PM, cladking said:

But now there is a far worse problem that shows the utter failure of peer review, and this applies to every discipline. Infrared data was finally gathered starting in October 2015, and this information has NEVER been supplied to peers. They are withholding it from the public because it apparently does not agree with what Egyptologists believe (the powers that be have said no data that disagree with the paradigm will ever be released). There is no mechanism for distributing it to peers, so their review of the data is impossible.

"Peer review" is an irrelevancy.  Reality is seen principally in experiment and the opinion of fools and scholars alike has no effect on experiment and no causation of reality.

While it is fascinating to see how much we can learn from the history of mankind, I am not really familiar with the challenges of archaeology and Egyptology. However, I have learned that there are communities that do not like to be criticised, regardless of whether the critique included in a submitted paper is proven and justified or not. As a result, a paper or presentation is sometimes also rejected for political reasons. So far, I have experienced such a case only once.

Experiments are definitely important in order to prove hypotheses and theories. However, I would not make them mandatory in all scientific disciplines. For example, had experimental proof been mandatory, Albert Einstein’s papers on general relativity and the predicted gravitational waves could not have been published about 100 years ago. Also, in the engineering sciences many simulation tools have been developed and are used to predict the response of real-world systems to a certain physical stimulus. As long as the results of these tools have been proven to match physical effects, the results reported in a submitted paper are physically plausible, and the tools used are disclosed in that paper, I have no problem accepting such results as proof.

Regards

Will

On 5/20/2019 at 6:22 PM, swansont said:

One should note (and perhaps be concerned) that we have people who are ostensibly scientists/professionals who are providing anecdotes rather than data in their critique of the system.

Well, I do not have much experience with this forum. After all, this is my third post. However, I am aware that the matter I invited you to discuss is a little bit delicate. On the one hand, it is important to analyse the details of today’s peer review processes and substantiate the pros and cons of these processes. On the other hand, if we disclose too many details of individual reviews, the identity of persons may be revealed and their careers might be damaged.

Regards

Will

On 5/20/2019 at 9:09 PM, CharonY said:

It has not been shown that open reviews are inherently better. It does seem that in some cases referees are a bit "nicer", which does not inherently lead to better-quality papers. Ideally, reviewers should help improve the quality of publications rather than make handwavy suggestions or only demand more work. However, that often takes more time commitment than one has as a reviewer.

I am curious: what open reviews are you referring to? Are there any references you can share? Besides, I second your third and fourth sentences. In fact, to me the last sentence addresses one of the root causes of the problem, i.e. the time commitment. If I decide to assume the responsibility of a reviewer, I have to spend the necessary time to review submitted papers carefully. If I cannot spend the time for a careful review, I cannot become a reviewer.

On 5/20/2019 at 9:09 PM, CharonY said:

A reviewer's guide is provided by pretty much all the journals I have published in (in different disciplines).

Lucky you :). I remember searching the home pages of a couple of well-known journals for more than an hour for reviewer guidelines. And when I finally found them, not a single word was said about the ethical responsibility of a reviewer. However, the same journals devote an entire section of their author guidelines to the consequences of unethical behaviour by an author.

On 5/20/2019 at 9:09 PM, CharonY said:

Most reviewers provide references; if not, one can attack that in a rebuttal (which, for example, is usually not available for grant reviews).

Maybe most of the reviewers you have been dealing with :). None of the reviewers I have dealt with provided a single reference. And yes, I did attack them in a rebuttal and requested references for their claims. Not a single reviewer provided these references, and the responsible editor let them get away with it. And the journal in question was no insignificant journal, but an IEEE Transactions journal.

On 5/20/2019 at 9:09 PM, CharonY said:

Well, I doubt authority bias plays a role here. However, quite a few folks I know accept all review requests to bolster their CV but let a postdoc (or even just a grad student) have a first read. In my mind this should not happen.

It really depends on the cultural background and education of the person you are talking to. While authority bias is rarely found in Europe, it is much more common in the US and in Asia. E.g. after I had submitted my detailed rebuttal and complained to the editor of the aforementioned IEEE journal about the poor quality of the reviews, he did not refer to the contents of my paper or my rebuttal but defended the reviews by referring to the “Senior Member” status of the reviewers. This is what I call “authority bias”.

Regards

Will


2 hours ago, Will9135 said:

Well, I do not have much experience with this forum. After all, this is my third post. However, I am aware that the matter I invited you to discuss is a little bit delicate. On the one hand, it is important to analyse the details of today’s peer review processes and substantiate the pros and cons of these processes. On the other hand, if we disclose too many details of individual reviews, the identity of persons may be revealed and their careers might be damaged.

An analysis of peer review would be fine, but anecdotes are not data. No analysis has been presented. 

A complaint about a review means nothing without an independent assessment - how do we know the bad review was of a quality paper?

 


4 hours ago, Will9135 said:

I am sorry, but this argument is too simple to be taken seriously. If this reasoning were valid, there would be no continuous improvement and no advances in science at all.

Regards

Will

Different disciplines may require some variations in certain aspects, and I have not said that improvements may not or cannot be made. But the basis behind peer review is and remains the best form of review we have and, contrary to what you have suggested, is the prime reason why science is a discipline in continued progress and advancement, as well, of course, as a means of sorting the wheat from the chaff.


The issues under discussion are two-fold: the publication of low-quality papers that pass peer review, and publications of sufficient quality that do not pass. Both have different causes and require different mechanisms. I would argue that the latter is generally not a fundamental issue: if one is rejected by one journal, one simply chooses another. At least in my area there are so many choices that it usually gets out somewhere. I will say that in high-stakes journals (e.g. Nature and Science) things are a bit different and there is a lot of struggle involved. I am not entirely sure I like the process surrounding those high-prestige publications.

 

On 5/26/2019 at 8:45 AM, Will9135 said:

I am curious, what open reviews are you referring to? Are there any references that you can share? Besides, I second your third and fourth sentence. In fact, to me the last sentence addresses one of the root causes of the problem, i.e. the time commitment. If I decide to assume the responsibility of a reviewer, I have to spend the necessary time to carefully review submitted papers. If I cannot spend the time for a careful review, I cannot become a reviewer.

There are a number of journals and publishers (MDPI, BMC and some others I cannot recall off the top of my head). BMC also offers double-blind for some. With regard to the weaknesses of single-blind and open review, it is generally known (and I am pretty sure there is data for it somewhere... I think I saw something recently in PNAS) that famous authors may benefit from them. But this is pretty much an open secret, and double-blind would address it to some degree, though not in all fields (as it is often trivial to identify the big shots).

 

On 5/26/2019 at 8:45 AM, Will9135 said:

None of the reviewers I have dealt with provided a single reference. And yes, I did attack them in a rebuttal and requested references for their claims. Not a single reviewer provided these references, and the responsible editor let them get away with it. And the journal in question was no insignificant journal, but an IEEE Transactions journal.

I got rejected from an IEEE journal once but do not recall any significant issues aside from nitpicking that I found unnecessary to address (the work-to-outcome ratio was way off), so I moved to a related journal with a higher impact factor and got in there. Some fights are just not worth fighting.

 

On 5/26/2019 at 8:45 AM, Will9135 said:

It really depends on the cultural background and education of the person you are talking to. While authority bias is rarely found in Europe, it is much more common in the US and in Asia. E.g. after I had submitted my detailed rebuttal and complained to the editor of the aforementioned IEEE journal about the poor quality of the reviews, he did not refer to the contents of my paper or my rebuttal but defended the reviews by referring to the “Senior Member” status of the reviewers. This is what I call “authority bias”.

My experience is quite the reverse. In Europe I found that if a big shot states something, it is excruciatingly difficult to counter it. I had a far easier time with US editors. While feuds are a known thing everywhere, I found them much easier to navigate in the US. In Europe it is much more... vicious.


I think we all agree that at the end of the day we want high-quality papers published and high-quality presentations given. Unfortunately, such publications cannot be ensured simply by blind trust in the statements of their authors. To ensure high-quality publications, they have to be reviewed by people with expertise and experience on the given subject; in other words, they have to be reviewed by peers.

However, high-quality publications require high-quality reviews. Following the same logic, such reviews cannot be ensured simply by blind trust in the assessments of reviewers and/or editors. It would be naïve to expect reviewers and editors (unlike authors) to be infallible and immune to human weaknesses (“Quis custodiet ipsos custodes?”). Unfortunately, the majority of peer reviews seem to be single-blind reviews, which more or less exercise this blind trust.

So single-blind reviews suffer from a systemic weakness independent of the subject or quality of a submitted paper or presentation. If we are really interested in high-quality publications, we should be equally interested in high-quality reviews and strive to correct the systemic weakness of the single-blind peer review.

Regards

Will

