Citations and Impact Factors


ajb


In science it generally seems that an accepted measure of one's work is the number of citations to your papers. Couple this with the impact factor of the journals the papers appear in and you get an idea of the scientific worth of the researcher. Or do you?

 

In mathematics it can take a while for a paper to get picked up and accumulate a good number of citations, which in turn affects the journal's impact factor. Also, one can write papers that are subtly incorrect so that people will find the holes and plug them, citing your work every time. Review papers also get cited quite a lot. Meanwhile, papers by Einstein, Dirac and Feynman that have been very influential in the long term no longer get cited; their work is now "common knowledge".
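For reference, the standard two-year impact factor is just a ratio: citations a journal receives in a given year to items it published in the previous two years, divided by the number of citable items in those two years. A toy calculation (all numbers invented; the real computation by the index providers involves more bookkeeping about what counts as "citable"):

```python
def impact_factor(citations_to_recent_items, citable_items):
    """Two-year journal impact factor: citations received this year to
    items published in the previous two years, divided by the number of
    citable items published in those two years. (Toy sketch only.)"""
    return citations_to_recent_items / citable_items

# Invented numbers: 300 citations this year to the journal's articles
# from the previous two years, 150 citable articles in those years:
print(impact_factor(300, 150))  # prints 2.0
```

A slow-citing field like mathematics lowers the numerator in exactly this window, which is why mathematics journals tend to have low impact factors regardless of quality.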

 

So, if the number of recent citations in high-impact journals is not a good measure of scientific productivity, what is? Can one's work really be judged like that at all?

 

I do not claim to have a better measure, but I am interested in what you all think. So, please let us know.


As a system it is open to manipulation, enforces cliques, and equates being in the majority with being correct; all of which are dangerous. But as a rating process it is objective, systematic, and relies on the actions of the scientific community rather than on arbitrary judgement. Just as democracy is a terrible system and a benevolent dictatorship would be preferable, so it is in this situation. An honest panel of the great and the good, wisely distributing honours, promotions, grants and funding without fear or favour on the sole basis of academic merit, would be marvellous; but we know this could never happen in the real world. So we accept a halfway house, a muddy compromise with both objective and subjective measures that satisfies no one, yet is the best of a bad bunch.


So, if the number of recent citations in high-impact journals is not a good measure of scientific productivity, what is? Can one's work really be judged like that at all?

My diploma thesis advisor once told me that in experimental high-energy physics, the relevant measure is usually the number of talks given (invited ones probably being the most relevant), not the number of publications (remember that some hep papers have O(100) authors). I can well imagine the number of invited talks being a reasonable measure. "Awards" may be another, but the value of awards certainly fluctuates more than the value of publications in standard journals. Plus, you'd probably be defining a measure that is non-zero only for those who are already famous and influential to a degree that makes a measure unnecessary/useless. In practice, the most important criterion in my experience is the opinion that colleagues have of the respective scientist - which is of course not a measure in the mathematical sense.

 

In science it generally seems that an accepted measure of one's work is the number of citations to your papers. Couple this with the impact factor of the journals the papers appear in and you get an idea of the scientific worth of the researcher. Or do you?

 

In mathematics it can take a while for a paper to get picked up and accumulate a good number of citations. This affects the journal's impact factor.

I don't think that the impact factor of the journals you publish in matters that much. Sure, if you apply for some EU grant, then you'd better have a Nature publication somewhere on your list. But other than that, I don't think impact factor counts as long as you publish in the standard journals for your field, and not some esoteric "we publish everything" ones.

 

Also, one can write papers that are subtly incorrect so that people will find the holes and plug them, citing your work every time.
Why would that be more than one cite? I don't know the working habits in math, but I never bothered writing a paper just to point out an obvious error in another publication; pointing it out via a mail to the author seems like the more sensible option. I find it hard to imagine that someone would knowingly publish erroneous material and want to attract a lot of attention to this error.

 

Review papers also get cited quite a lot.

Being asked to write a review paper is quite an honor; no-names are not invited to this.

 

Then, papers by Einstein, Dirac and Feynman that have been very influential in the long term no longer get cited. Their work is now "common knowledge".

All three are beyond the level where they'd be judged by their citation count. In my field, some things are common knowledge within the field (but not necessarily outside of it), and the respective publications (from the seventies) still get cited, despite the authors already having won prestigious prizes and being the big names in the field.

 

My personal opinion is that citation count isn't that bad as a measure of impact. I also consider the related h-index a reasonable measure of a scientist's relative influence on the field (note that this statement implies that by these measures someone with only a single super-well-cited publication has more impact on the field than someone with several average-cited ones, but that the latter is more influential; I see no contradiction there). Of course, mathematicians may not like these measures much because they are pulling the short straw. But one shouldn't compare citation counts between different fields, and I don't think that anyone really does.
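To make that comparison concrete, here is a quick sketch of the h-index (the largest h such that h of your papers have at least h citations each); the function and variable names are my own, and real indexing services differ in which citations they count:

```python
def h_index(citations):
    """h-index: the largest h such that the author has at least h
    papers with at least h citations each. (Illustrative sketch.)"""
    h = 0
    # Walk the citation counts from highest to lowest; the rank i of
    # the last paper with at least i citations is the h-index.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# One super-well-cited paper vs. several average-cited ones:
print(h_index([100]))            # prints 1
print(h_index([8, 7, 6, 5, 4]))  # prints 4
```

The single-paper author wins on total citations (100 vs. 30), while the steady author wins on h-index - exactly the impact-versus-influence distinction above.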

Edited by timo

Why would that be more than one cite? I don't know the working habits in math, but I never bothered writing a paper just to point out an obvious error in another publication; pointing it out via a mail to the author seems like the more sensible option. I find it hard to imagine that someone would knowingly publish erroneous material and want to attract a lot of attention to this error.

 

I have been told this can happen. For example, leaving one critical lemma not fully proved could lead to a citation or two. I too doubt that a very obvious mistake or omission would be very wise.

 

My personal opinion is that citation count isn't that bad as a measure of impact.

 

Still time for my work so far to get cited, I hope!


Ah, how to quantify scientific productivity, the eternal question. Short answer: there is no accepted means.

It is extremely field-dependent. In engineering and bioinformatics more publications are expected than in, e.g., the biological and medical sciences. Publishing in high-impact (for your field) journals is especially relevant for review panels that are otherwise not familiar with your work but want to get an idea of whether the stuff you do is of interest to a given scientific community. As a rule of thumb, original work outclasses reviews, especially as not all reviews are invited. There are very notable exceptions, of course.

In the end, even within a field, productivity is not easily compared. Often the h-index is a guideline, but ultimately the evaluation of individual scientists depends heavily on context.

 

For many evaluations of scientific merit, the number of publications and citations may not be that relevant unless you are clearly above or below average. It depends a lot on the context of the evaluation. In hiring processes, for instance, candidates may be ranked according to h-indices, but the ideal candidate is often not the one with the highest citation count.

 

From what I have heard, mathematical papers take a long time to get cited. They tend to get a torrent of citations once someone finds an application, for instance, but the turnover is often counted in decades. In other fields you can expect citations in the first few years after publication and then (almost) never again. But it can be very fickle. I published something like 5 years back that was never cited until new groups started to work on it about a year ago. Since then the citations have come rolling in...

Edited by CharonY

This could be a stupid idea, but has a journal ever considered installing a rating system? I was thinking of something along the lines of having groups of registered users consisting of people with some level of credibility, and these users could use a simple rating system - like a 5-star system - to rate a paper they have read. I know this would by no means be a great system, but it might provide administrators with a better understanding of what your peers think of your work. This would subvert the current method of using citations as a measure of quality.


There are things like that out there, too. Faculty of 1000, for instance, consists of well-established scientists who pick out papers they find remarkable and rate them. Also, many journals have a news or top-picks section in which they highlight remarkable papers. However, only a small fraction of the scientific body will be reviewed this way. Similar to, say, a Nature publication, this will highlight individual papers but will not be informative for the vast majority of work that is being done.


From what I have heard, mathematical papers take a long time to get cited. They tend to get a torrent of citations once someone finds an application, for instance, but the turnover is often counted in decades. In other fields you can expect citations in the first few years after publication and then (almost) never again. But it can be very fickle. I published something like 5 years back that was never cited until new groups started to work on it about a year ago. Since then the citations have come rolling in...

 

I have one preprint that has been cited in one other preprint (I am not counting any self-citations here). People are looking at my work, but the citations are not forthcoming. So, I would not want to try to rate my work by using citations, and I hope that at this relatively early stage of my career no one else would.

 

I take just getting work published as a measure.

 

Thanks for your input.


Perhaps I should clarify that when considering it a good measure of impact on the scientific community I had the average mid- to end-career scientist in mind (hey, you even gave examples of people who are long dead). I was also not considering cases where grants or positions are given out (where human evaluation is indicated), but rather sketching a definition of a measure (in the sense of a rule that assigns a number to an object) that is halfway appropriate.

Edited by timo

Perhaps I should clarify that when considering it a good measure of impact on the scientific community I had the average mid- to end-career scientist in mind (hey, you even gave examples of people who are long dead). I was also not considering cases where grants or positions are given out (where human evaluation is indicated), but rather sketching a definition of a measure (in the sense of a rule that assigns a number to an object) that is halfway appropriate.

 

 

Ok, thanks for the clarification.

 

I gave examples of old names just to point out that the number of direct citations is not always proportional to the impact. However, as you state, it gives an indication for most "mortals" somewhere in the middle of their careers. Thanks for making that clearer.


For junior scientists (i.e. at the postdoctoral level) the number of citations is usually not a good measure. The h-index mostly only comes in when hiring faculty. Sometimes, as a rule of thumb, the number of papers in very similar fields can be compared. However, for most intents and purposes (e.g. getting a postdoctoral position or postdoc grants), even the absolute number usually has (relatively) little relevance.

 

Depending on your field, often technical abilities are more sought after, or the topic of your PhD. Essentially, postdocs are not ranked per se; prospective employers are more likely to ask the advisor whether someone could get something done (within a project).

Theoretical fields may follow slightly different rules than experimental ones, though.


Depending on your field, often technical abilities are more sought after, or the topic of your PhD.

 

Most of the jobs I have looked at and applied for are quite specific about the areas of knowledge they require. Sometimes this can be very broad and not much help to me in deciding whether I stand a chance of getting the position.

 

They often ask for publications, but the number is judged by the stage of the applicant's career. I guess straight from a PhD there may be no publications, but after a postdoc or two people will expect a few papers.

 

I have papers on original ideas that I worked on independently. How these will be judged I have no idea.

Edited by ajb

Well, there is a field-dependent minimum that most expect. In most experimental fields one or two publications during your PhD are expected, more in engineering and computational fields, for instance. But even a single publication on the right topic can be sufficient, while no publications could be read as a warning sign. Postdoc positions are often tied to existing or planned grants, hence (in contrast to faculty positions) most are tied to a certain topic/technique. Thus one has to demonstrate abilities in that area, which is usually done by having a publication in that area; the numbers are secondary. Demonstrating the ability to work independently is a plus; however, most are more interested in whether one can contribute meaningfully to a project or, even better, provide knowledge in a field into which the PI wants to expand.

 

But again, this is mostly for postdoc levels. Service or faculty positions follow different rules, for instance.

