On Sun, Oct 19, 2008 at 8:16 PM, Heather Morrison <heath...@eln.bc.ca> wrote:

> Biology - species.  There will always, of necessity, be a limited
> pool of scientists studying any one species in danger of extinction.
> Do articles and journals in these areas receive fewer citations?  If
> so, what happens if we reward scholars and journals on the basis of
> metrics?  Will these researchers lose their funding?  Will journals
> that publish articles in this area lose their status?

These are non-problems: compare like with like, and use multiple
metrics. Research on an endangered species should be ranked against
other research in the same small specialty, not against fields with
far larger pools of researchers.

> Literature - authors.  There are many researchers studying
> Shakespeare.  A lesser-known local author will be lucky to receive
> the attention of even one researcher.  In a metrics-based system, it
> seems reasonable to hypothesize that this bias will increase and the
> odds that local culture will be studied will decrease.

What bias? If research on a lesser-known author is good work, it will
be used, and that use will be reflected in the metrics.

Compare like with like, and use multiple metrics.

> History - the local versus the global.  A reasonable hypothesis is
> that historical articles and journals with broader potential
> readership are likely to attract more citations than locally-based
> historical studies.  If this is correct, then local studies would
> suffer under a metrics-based system.

Compare like with like, and use multiple metrics.

> Medicine - temporary importance:  AIDS, bird flu, and SARS are all
> horrible viral diseases, and pandemics or potential pandemics.  Of
> course, our research communities must prioritize these threats in the
> short term.  This means many articles on these topics, and new
> journals, receiving many citations.  Great stuff: this advances our
> knowledge and may already have prevented more than one pandemic.  But
> what about other, less-pressing issues, such as the resistance of
> bacteria to antibiotics, and about basic research?  In the short term, a
> focus on research usage metrics helps us to prioritize and focus on
> the immediate danger.  In the long term, if usage metrics lead us to
> undervalue basic research, we could end up with more pressing dangers
> to deal with, such as rampant and totally untreatable bacterial
> illnesses, and less basic knowledge to help us figure out what to do.

Compare like with like, and use multiple metrics: basic research with
basic research, applied with applied, theme-driven with theme-driven.

And there are other metrics besides usage metrics.
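
For concreteness, here is a minimal sketch of what "compare like with
like, and use multiple metrics" could mean in practice: each paper's
metrics are divided by the averages for its own field, and the
normalized metrics are then combined. All field names, papers, and
numbers below are invented for illustration; real baselines would come
from a large reference dataset.

    # Sketch: field-normalized, multi-metric scoring (Python).
    from statistics import mean

    # (field, citations, downloads) -- hypothetical records
    papers = {
        "endangered-newt study": ("conservation biology", 4, 120),
        "cancer genomics study": ("molecular biology", 40, 900),
        "local-history study": ("regional history", 2, 60),
    }

    # Hypothetical field baselines: average citations and downloads
    # per paper in each field.
    field_avg = {
        "conservation biology": (3.0, 100.0),
        "molecular biology": (35.0, 800.0),
        "regional history": (1.5, 50.0),
    }

    def normalized_score(field, citations, downloads):
        # Each metric divided by its field's average, then averaged;
        # a score of 1.0 means "typical for this field".
        avg_c, avg_d = field_avg[field]
        return mean([citations / avg_c, downloads / avg_d])

    for title, (field, c, d) in papers.items():
        print(f"{title}: {normalized_score(field, c, d):.2f}")

On these invented numbers, the conservation and local-history papers
score at least as well as the molecular-biology paper (about 1.3, 1.3,
and 1.1 respectively): within-field normalization removes the advantage
of simply working in a crowded area.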

> A cost-efficiency metric, such as average cost per article, is a tool
> that can be used to examine the relative cost-effectiveness of
> journals.  In the print world, the per-article cost for the small,
> not-for-profit society publishers has often been a small fraction of
> the cost of the larger commercial for-profit publishers, often with
> equal or better quality.  If university administrators are going to
> look at metrics, why not give thought to rewarding researchers for
> seeking publishing venues that combine high-quality peer review and
> editing with affordable costs?

The big issue is not the evaluation of journals' cost-effectiveness but
the evaluation and cost-effectiveness of research and researchers.
(Forget about the JIF -- the journal impact factor -- and journal
ratings generally: they are just one -- extremely blunt -- tool among
many.)
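
(As an aside on the quoted cost-efficiency metric: it is simply total
publishing cost divided by the number of articles published. A minimal
sketch, with journal names and figures invented for illustration:

    # Sketch: average cost per article (Python).
    journals = {
        "Small Society Journal": {"annual_cost": 30000, "articles": 120},
        "Large Commercial Journal": {"annual_cost": 400000, "articles": 500},
    }

    for name, j in journals.items():
        per_article = j["annual_cost"] / j["articles"]
        print(f"{name}: ${per_article:,.0f} per article")

On these invented figures, the small society journal costs $250 per
article against the commercial journal's $800 -- the kind of comparison
the quoted paragraph proposes.)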

Stevan Harnad
