My institution, like many others, uses impact factors instead of a holistic evaluation of individual researchers. Here are a few reasons why I think this is ultimately anti-scientific:

  1. Impact factors encourage journals to accept articles that will be highly cited in the short term.
  2. They encourage journals to reject unconventional, original, or still unknown topics of research that are unlikely to be cited quickly.
  3. They discourage the publication of negative findings (Littner 2005).
  4. They lead journals to favour publishing industry-led research (Jefferson 2009; Goldacre 2009).
  5. They distort the work of researchers (Alberts 2013): as a non-permanent member of staff, I am forced to choose where to publish based on impact factors rather than on more relevant criteria, such as discipline, manuscript length, readership, or open access…

In 2012, researchers met in San Francisco to condemn the use of the impact factor in evaluating individual researchers. Other researchers can now sign up to the resulting recommendations, the San Francisco Declaration on Research Assessment (DORA).

San Francisco Declaration on Research Assessment

I encourage you to read the editorial by Bruce Alberts, editor-in-chief of Science (Alberts 2013), which gives a longer and more articulate explanation of why impact factors are bad for science.

By Michelle Kelly-Irving


Littner, Y., F. B. Mimouni, et al. (2005). “Negative results and impact factor: a lesson from neonatology.” Archives of Pediatrics and Adolescent Medicine 159(11): 1036-1037.

Jefferson, T., C. D. Pietrantonj, et al. (2009). “Relation of study quality, concordance, take home message, funding, and impact in studies of influenza vaccines: systematic review.” BMJ (Clinical Research Ed.) 338.

Goldacre, B. (2009). Article on impact factors, The Guardian, February 2009.

Alberts, B. (2013). “Impact Factor Distortions.” Science 340(6134): 787.
