
Posts posted by Will9135

  1. I think we all agree that, at the end of the day, we want high-quality papers published and presentations given. Unfortunately, such quality cannot be ensured simply by blind trust in the statements of the authors. To ensure high-quality publications, they have to be reviewed by people with expertise and experience in the given subject, in other words, by peers.

    However, high-quality publications require high-quality reviews. By the same logic, such reviews cannot be ensured simply by blind trust in the assessments of reviewers and/or editors. It would be naïve to expect reviewers and editors (unlike authors) to be infallible and immune to human weaknesses (“Quis custodiet ipsos custodes?”). Unfortunately, the majority of peer reviews seem to be single-blind, a format that more or less institutionalizes this blind trust.

    So single-blind reviews suffer from a systemic weakness, independent of the subject or quality of a submitted paper or presentation. If we are really interested in high-quality publications, we should be equally interested in high-quality reviews and strive to correct this systemic weakness of the single-blind peer review.

    Regards

    Will

  2. On 5/19/2019 at 11:49 AM, Prometheus said:

    Triple blind - so even the editors don't know who is reviewing what? How would we ensure the reviewers aren't mortal enemies or best buddies with the authors?

    In the triple-blind review I envision, editors, reviewers and authors must not belong to the same company or organization, and the identities of authors, reviewers and editors are hidden from each other. The responsible editor selects peers to review a submitted paper from a pool of reviewers solely on the basis of their qualifications, not their names or affiliations. Since the identities of the authors are not disclosed to the reviewers, reviewers can only speculate about the identity of the author(s) based on the contents of a submitted paper. And since reviewers do not know the identity of the responsible editor, they cannot expect the editor to protect them if they deliver a superficial review. Likewise, the responsible editor has no interest in preferring one review over another. If, in turn, authors are less self-centred and do not cite predominantly their own papers, there is little chance for a reviewer to identify a “mortal enemy” or a “best buddy” among the authors of a submitted paper. In this way (I hope), the imbalance and unfairness of the single-blind review can largely be avoided.
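    To make this concrete, here is a minimal sketch in Python of how such an assignment could work. Everything in it (`Reviewer`, `Submission`, `assign_reviewers`, the matching rule) is my own invention for illustration, not any existing editorial system:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Reviewer:
        pseudonym: str       # the only identifier the editor ever sees
        organization: str    # used by the system solely for conflict screening
        expertise: set       # e.g. {"semiconductor physics", "analog design"}

    @dataclass
    class Submission:
        submission_id: str
        author_organizations: set   # visible only to the system, never to people
        topics: set

    def assign_reviewers(submission, pool, editor_organization, needed=3):
        """Select reviewers by topical fit alone, enforcing the rule that
        authors, reviewers and the editor must all come from different
        companies or organizations."""
        eligible = [
            r for r in pool
            if r.organization not in submission.author_organizations
            and r.organization != editor_organization
            and r.expertise & submission.topics   # some topical overlap required
        ]
        # Rank candidates by how many of the submission's topics they cover.
        eligible.sort(key=lambda r: len(r.expertise & submission.topics),
                      reverse=True)
        return [r.pseudonym for r in eligible[:needed]]
    ```

    The editor would only ever see pseudonyms and topical fit; affiliations are used by the system exclusively for conflict screening.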

    Regards

    Will

    On 5/19/2019 at 11:36 PM, beecee said:

    Nothing is perfect but the general scientific review system is the best we have and in the majority of cases through history, has been of great benefit.

    I am sorry, but this argument is too simple to be taken seriously. If this reasoning were valid, there would never have been any continuous improvement or advances in science at all.

    Regards

    Will

    On 5/20/2019 at 2:11 AM, OldChemE said:

    Perhaps the way to improve peer review is to not make it blind.  Circumstances are admittedly different, but I once worked in the Nuclear power industry, and every engineering document was required to be verified by a separate engineer.  This was a form of peer review.  The difference was that the name and signature of the review engineer was placed on the publication along with the name and signature of the originating engineer.  If the document turned out to be wrong, the reputations of both the originator and the reviewer were on the line.

     

    On 5/20/2019 at 2:23 AM, Strange said:

    Interesting idea, but isn’t scientific peer review more like getting an engineer from a competitor to review your document? (I don’t think this contradicts your suggestion but it does add a twist)

    I think the example given by OldChemE is an argument in favour of an open review. Of course, even in an open review an author's paper should not be reviewed by a peer from the same company or organization.

    Regards

    Will

    On 5/20/2019 at 3:31 AM, CharonY said:

    I'd like to add that for grant reviews the situation is even worse. During grant season panels have often to look easily through dozens or more applications. They have to evaluate things that are only proposed. There is generally no means for a discussion or rebuttal between applicant and reviewer. Reviewers can be from the wrong field or in some cases even direct competitors for the same pool of money.

    The same applies to the extended abstracts required by (some) conferences. In order to reduce the overall workload for the reviewers, many conferences ask authors to submit only an extended abstract of their paper. If the final paper has already been written and is complete, the authors have to spend extra effort preparing an extended abstract. If the authors have not even begun work on the final paper (which happens very often), the abstract is likely to be superficial and overloaded with exaggerated promises about the contents of the final paper. In any case, extended abstracts cannot cover the complete contents of the final paper, so a number of open questions almost always remain after the review. There is no mechanism for the reviewer and the author to discuss and resolve these open questions. As a result, the successful submission of a conference paper depends not only on its scientific contents but also on the advertising skills of its authors and on their reputation (“authority bias”).

    Regards

    Will

    On 5/20/2019 at 6:11 PM, cladking said:

    But now there is a far worse problem that shows the utter failure of peer review and this applies to every discipline.  Infrared data was finally gathered starting in October of 2015 and this information has NEVER been supplied to peers.   They are withholding from the public because it apparently does not agree with what Egyptologists  believe (the powers that be have said no data that disagree with the paradigm will  ever be released).  There is no mechanism for distributing it to peers so their review of the data is impossible.

    "Peer review" is an irrelevancy.  Reality is seen principally in experiment and the opinion of fools and scholars alike has no effect on experiment and no causation of reality.

    While it is fascinating to see how much we can learn from the history of mankind, I am not really familiar with the challenges of archaeology and Egyptology. However, I have learned that there are communities that do not like to be criticised, regardless of whether the critique included in a submitted paper is proven and justified. As a result, a paper or presentation is sometimes also rejected for political reasons. So far, I have experienced such a case only once.

    Experiments are definitely important for proving hypotheses and theories. However, I would not make them mandatory in all scientific disciplines. For example, if experimental proof had been mandatory, Albert Einstein's papers on general relativity and the predicted gravitational waves could not have been published about 100 years ago. In the engineering sciences, too, many simulation tools have been developed and are used to predict the response of real-world systems to a given physical stimulus. As long as these tools have been shown to match physical effects, the results reported in a submitted paper are physically plausible, and the tools used are disclosed in that paper, I have no problem accepting such results as proof.

    Regards

    Will

    On 5/20/2019 at 6:22 PM, swansont said:

    One should note (and perhaps be concerned) that we have people who are ostensibly scientists/professionals who are providing anecdotes rather than data in their critique of the system.

    Well, I do not have much experience with this forum; after all, this is my third post. However, I am aware that the matter I invited you to discuss is a little delicate. On the one hand, it is important to analyse the details of today's peer review processes and substantiate the pros and cons of these processes. On the other hand, if we disclose too many details of individual reviews, the persons involved may be identified and their careers might be damaged.

    Regards

    Will

    On 5/20/2019 at 9:09 PM, CharonY said:

    It has not been shown that open reviews are inherently better. It does seem that in some cases referees are a bit "nicer", which does not inherently lead to better quality of papers. Ideally, reviewers should help improving quality of publications rather than doing handwavy suggestions or only demand more work. However, that often takes more time commitment than one has as a reviewer.

    I am curious: what open reviews are you referring to? Are there any references you can share? Besides, I second your third and fourth sentences. In fact, to me the last sentence addresses one of the root causes of the problem, i.e. the time commitment. If I decide to assume the responsibility of a reviewer, I have to spend the time needed to review submitted papers carefully. If I cannot spend that time, I should not become a reviewer.

    On 5/20/2019 at 9:09 PM, CharonY said:

    A reviewers guide is provided for pretty much all journals I have published in (in different disciplines). 

    Lucky you :). I remember searching the home pages of a couple of well-known journals for reviewer guidelines for more than an hour. And when I finally found them, they did not mention a single word about the ethical responsibility of a reviewer. Yet the same journals devoted an entire section of their author guidelines to the consequences of unethical behaviour by an author.

    On 5/20/2019 at 9:09 PM, CharonY said:

    Most reviewers provide references, if not one can attack that in a rebuttal (which e.g. is usually not available for grant reviews).

    Maybe most reviewers you have been dealing with :). None of the reviewers I have dealt with provided a single reference. And yes, I did attack that in a rebuttal and requested references for their claims. Not a single reviewer provided them, and the responsible editor let them get away with it. And the journal in question was no insignificant journal, but an IEEE Transactions journal.

    On 5/20/2019 at 9:09 PM, CharonY said:

    Well, I doubt the authority bias plays a role here. However, quite a few folks I know accept all review request to bolster their CV but let a postdoc (or even just a grad student) have a first read. In my mind this should not happen.

    It really depends on the cultural background and education of the person you are talking to. In my experience, authority bias is rarely found in Europe but much more common in the US and in Asia. For example, after I had submitted my detailed rebuttal and complained to the editor of the aforementioned IEEE journal about the poor quality of the reviews, he did not refer to the contents of my paper or my rebuttal but defended the reviews by pointing to the “Senior Member” status of the reviewers. This is what I call “authority bias”.

    Regards

    Will

  3. “Peer review is the evaluation of work by one or more people with similar competences as the producers of the work (peers). It [aims to] function as a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication.” [Wikipedia, May 2019]

    I am an electrical engineer with a PhD in semiconductor physics who has been working in industry for 25 years. Publications in my field are typically subject to a single-blind peer review, i.e. the names of the reviewers are hidden from the authors, while in all other relations between the authors, reviewers and the editor the names are disclosed. Unfortunately, I have learned that many publications in my field are superficial, inconsistent, incomprehensible and even incorrect. Minor progress that could easily be summarized on 3-4 pages is unnecessarily inflated to 9-10 pages by some authors, and reported data, results and conclusions are often contradictory.

    At the same time, as an author, I have learned that the quality of the peer review itself has become rather poor:

    • There are, unfortunately, many reviewers who do not carefully read the papers they assess. Some of them are so biased by their own work or by mainstream topics that they are simply unable to comprehend papers addressing new or special topics. As a result, they complain about missing information that is actually included and even extensively explained in the submitted papers.

    • While some reviewers find a submitted paper too long, others recommend extending the very same paper.

    • Some reviewers criticise the vocabulary used by authors without having looked up the criticised terms in an English dictionary themselves.

    • Some reviewers demand an extensive collection of references even when novelties are reported for the first time, i.e. when the corresponding topic has not yet been covered by other publications for whatever reason. At the same time, these reviewers claim that the novelty disclosed in the submitted paper is “not new” without providing a single reference as proof. These reviewers completely ignore how difficult it is to prove a novelty on the one hand and how easy it is to prove the well-known on the other. Now, a reader could object that “novelty” often depends on the individual perspective, so let me give you an example: a paper based on a recently granted patent was submitted successively to two journals (Elsevier and IEEE), where it was reviewed by a total of 7 reviewers. The patent was not listed as a reference in that paper. 5 reviewers considered the subject of the paper to be “not new”, and none of those 5 provided a single reference as proof.

    • Some reviewers indiscriminately demand device simulations or special measurements, although the presented results have been obtained by other well-established methodologies and are absolutely plausible and consistent.

    • Some well-known journals apply unequal criteria where the documentation of the instruments and methodologies used is concerned. In some published papers reporting results obtained by circuit simulation, the simulator used is not even mentioned, whereas in other submitted papers the results obtained by a specified state-of-the-art circuit simulator with production-level models are considered “crude” by the reviewer.

    • Some reviewers try to influence the topic or character of a paper; e.g. through the requirements they impose, they urge the author(s) of an original research paper to turn it into a tutorial.

    • Some reviewers make far-fetched speculations and fabricate reasons in order to deliberately reject an unwanted paper.

    • To some reviewers and editors, the reputation or (assumed) ethnic background of the author(s) appears to be more important for their assessment than the contents of the submitted paper.

    Similar experiences were summarised a few years ago by Richard Smith (“The peer review drugs don’t work”, 28 May 2015).

    All renowned journals and conferences point out the ethical guidelines authors have to comply with. As far as reviewers are concerned, however, it seems to me that they can act just as they please. While such journals and conferences are eager to point out the consequences for authors who do not adhere to their ethical guidelines, it is very difficult for an author to submit a complaint about an unqualified review and have it independently and objectively assessed and, if needed, corrected. And if an author does succeed in submitting such a complaint, the chances are good that it is not carefully read and that the corresponding reviewer is even defended by the editor or the TPC chair.

    As a consequence, I doubt that peer review nowadays still helps “to maintain quality standards, improve performance, and provide credibility”. Quite the contrary: driven by the longing for recognition and the prestige of a high number of publications on the one hand, and protected by the anonymity of the peer review process and the lack of transparency and checks on the other, the original purpose of peer review is more and more undermined by conceited and overconfident reviewers, busy and uncritical editors, and authority-biased and profit-driven publishers. This is no road to excellence, but to mediocrity.

    What can be done to improve the quality of the peer review? Here are a couple of proposals that you may want to comment on or extend:

    • Single-blind reviews are unbalanced and unfair. They should be replaced at least by double-blind or triple-blind reviews, or better still by open reviews.

    • For transparency reasons, reviewer guidelines should be added to the author’s kit of journals and conferences.

    • As authors are required to provide references or conclusive proof for their statements and conclusions, reviewers should also be required to provide references or conclusive proof when they disagree. This is not only fair, but also allows possible misunderstandings on both sides to be resolved. If reviewers do not provide references or conclusive proof, their comments should be disregarded by the responsible editor or TPC chair.

    • In order to guard against authority bias and personality cults, the number of extended abstracts a reviewer is allowed to review per conference should be limited to 10, and the number of full-sized papers a reviewer is allowed to review per journal or conference should be limited to 1 per month. Furthermore, scientists and academics should not be allowed to participate as reviewers in more than 4, or as editors or TPC chairs in more than 2, journals or conferences per year. (A sketch of how such limits could be checked follows this list.)

    • In order to protect journals and conferences from ethnic bias, editors and TPC chairs should select reviewers with diverse ethnic backgrounds to review submitted papers and extended abstracts.
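
    To illustrate how the workload limits above could be enforced, here is a minimal sketch in Python; the function and the `load` record are hypothetical, not part of any existing submission system:

    ```python
    # Proposed limits from the list above (values under discussion, not a standard).
    MAX_ABSTRACTS_PER_CONFERENCE = 10
    MAX_FULL_PAPERS_PER_MONTH = 1
    MAX_REVIEWER_VENUES_PER_YEAR = 4
    MAX_EDITOR_VENUES_PER_YEAR = 2

    def check_reviewer_load(load):
        """Return a list of limit violations for a reviewer's current load.

        `load` is a plain dict, e.g.:
        {"abstracts_this_conference": 7, "full_papers_this_month": 0,
         "reviewer_venues_this_year": 3, "editor_venues_this_year": 1}
        """
        violations = []
        if load["abstracts_this_conference"] > MAX_ABSTRACTS_PER_CONFERENCE:
            violations.append("too many extended abstracts for this conference")
        if load["full_papers_this_month"] > MAX_FULL_PAPERS_PER_MONTH:
            violations.append("too many full-sized papers this month")
        if load["reviewer_venues_this_year"] > MAX_REVIEWER_VENUES_PER_YEAR:
            violations.append("reviewer at too many venues this year")
        if load["editor_venues_this_year"] > MAX_EDITOR_VENUES_PER_YEAR:
            violations.append("editor/TPC chair at too many venues this year")
        return violations

    # Example: one extended abstract over the proposed conference limit.
    print(check_reviewer_load({"abstracts_this_conference": 11,
                               "full_papers_this_month": 1,
                               "reviewer_venues_this_year": 3,
                               "editor_venues_this_year": 2}))
    ```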

    Regards

    Will

     

  4. On 9/11/2018 at 7:13 PM, eggman2 said:

    If you are, I got some questions for you?

    I will start off by asking what kind of laws/principles/theorems do you deal with in real life circuit analysis?

    What kind of software do you use?

    Does the computer do most of the work?

    Your questions seem to address the work of circuit designers. If not, please provide more details. If so, here are some answers:

    Analog circuit design:

    1. Experienced circuit designers are familiar with the general characteristics of the components and elementary building blocks being used and (first) analyse circuits on a qualitative/functional basis. This kind of analysis is also, but only indirectly, based on fundamental laws such as Ohm's law, Kirchhoff's laws, etc.
    2. Variants of SPICE are often used to design circuits. For the design of integrated circuits, SPECTRE and SABER are also used.
    3. The engineer develops and adjusts the circuit topology. The computer does the time-consuming quantitative computations in order to help the designer size the circuit components to meet the given specifications (a small worked example follows this list). Due to the complexity of analog designs, there are still no automated expert systems that allow general analog circuits to be synthesized completely by computer software.
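
    To give a feel for the quantitative work the computer takes over in item 3, here is a minimal sketch in Python/NumPy of nodal analysis for a resistive voltage divider; the component values are made up for illustration. Real SPICE-class simulators assemble the same kind of conductance matrix for thousands of nodes and nonlinear device models, which is exactly the tedious part the designer hands off:

    ```python
    import numpy as np

    # Two-resistor voltage divider: 10 V source -> R1 -> node 1 -> R2 -> ground.
    # Nodal analysis: Kirchhoff's current law at node 1 plus Ohm's law for each
    # resistor yields a linear system G @ v = i in the unknown node voltage(s).
    V_SRC = 10.0        # source voltage in volts
    R1, R2 = 1e3, 2e3   # resistances in ohms

    # KCL at node 1: (v1 - V_SRC)/R1 + v1/R2 = 0  =>  (1/R1 + 1/R2)*v1 = V_SRC/R1
    G = np.array([[1.0 / R1 + 1.0 / R2]])   # conductance matrix
    i = np.array([V_SRC / R1])              # source current vector

    v = np.linalg.solve(G, i)
    print(f"Node voltage: {v[0]:.3f} V")    # 6.667 V, matching R2/(R1+R2) * V_SRC
    ```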

    In digital circuit design, however, the circuit designer describes the logical functions and limiting constraints, which are then synthesized by computer software. As a result, the share of the work done by the computer is significantly larger in digital designs than in analog designs.

    Mixed-mode circuits use both analog and digital circuits. Hence, their design methodology ranges between those of analog and digital circuits.

  5. On 10/10/2018 at 12:56 AM, ScienceNostalgia101 said:

    So it sounds to me like computer programming is the more crucial of engineering-related skills for the modern workplace than engineering physics or engineering chemistry. Why the high school emphasis on physics and chemistry, then?

    Computer programming is very useful in science and engineering because it helps scientists better understand the world we live in and helps engineers construct things (whatever they are) to make our world a better place (hopefully :rolleyes:). Without scientists and engineers, there would be no computers to program.
