An earlier version of this appeared on this list.

http://www.the-scientist.com/?articles.view/articleNo/36575/title/Opinion--Reviewing-Reviewers/

Opinion: Reviewing Reviewers

Science needs a standard way to evaluate and reward journal reviewers.

By David Cameron Duffy | July 19, 2013

WIKIMEDIA, AREYN
<http://commons.wikimedia.org/wiki/File:Board-Meeting.png>

Refereeing or reviewing manuscripts for scientific journals is at the
heart of science, despite its occasional imperfections. Reviewing is a
check of quality, originality, impact, and even honesty for papers
submitted to scientific journals. Unfortunately, referees are sort of
like sperm donors: they are anonymous and their pleasure, if any, is in
the process, not the result. No one acknowledges their contributions,
except perhaps in small print at the back of a journal at the end of
the year. Who in their right mind would want to referee? It takes lots
of time to do well and gets no credit. The result is a refereeing
crisis<http://onlinelibrary.wiley.com/doi/10.1111/j.1461-0248.2008.01276.x/abstract>.

Various methods have been suggested to improve the situation. Some take
a punitive approach, penalizing reviewers for poor performance rather
than rewarding them for their hard work. Others involve complicated
systems of payment or reciprocal altruism, in which reviewers are
rewarded with access to journals they may or may not want to submit
papers to.

I believe being asked to referee reflects one’s true standing in a field.
Journal editors will always try to get the most knowledgeable and competent
referees possible. I would suggest we build on existing impact measurements
to encourage enlightened self-interest.  For authors, we have a measure of
impact, such as the commonly used
h-index<http://www.pnas.org/content/102/46/16569.full.pdf+html>,
a reflection of publications and citations. For journals, we have impact
factors. While such measures for both journals and individual scientists
can be misused and in any event should be taken with a grain of
salt<http://am.ascb.org/dora/>,
I suggest a similar approach could provide a more benevolent system of
referee metrics.

Journals could produce an annual list of reviewers and the number of times
each reviewed. The sum of the number of reviews by individual referees,
multiplied by the impact factor of the journals for which they reviewed,
should reflect their standing in the field. Reviewing in a top journal like
*Science* or *Nature* would earn the highest scores, but such
opportunities are necessarily less common. Reviewing manuscripts
submitted to mid-rank but more focused journals would therefore be more
likely to drive individual scores. Finally, reviewing for low-ranking
journals would not boost scores much but, as at present, could be
considered a moral obligation to the scientific community. Additional
indices could correct for performance through the years or for the
proportion of reviews done for top journals.
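
As a rough sketch of the arithmetic (a minimal illustration only; the
journal names and impact factors below are hypothetical, and the
weighting is simply the sum described above), a referee's annual score
could be computed like this in Python:

    # Proposed referee score: sum over journals of
    # (reviews completed) x (journal impact factor).
    def referee_score(reviews_by_journal, impact_factors):
        return sum(count * impact_factors[journal]
                   for journal, count in reviews_by_journal.items())

    # Hypothetical impact factors and review counts for one year.
    impact_factors = {"Journal A": 30.0, "Journal B": 4.5, "Journal C": 1.2}
    reviews = {"Journal A": 1, "Journal B": 5, "Journal C": 3}
    print(referee_score(reviews, impact_factors))  # 1*30.0 + 5*4.5 + 3*1.2 = 56.1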

Assigning a specific score to evaluate a scientist’s contribution as a
manuscript reviewer should encourage scientists to improve their
standings, which they can do by reviewing more often or by being asked
to review for higher-impact journals. Editors can then exploit this to
improve their stable of
referees. Academic deans and other administrators, obsessed with the
quantitative, will latch on like flies onto road kill. The result would be
competition for opportunities to review rather than competition among
editors for a limited number of able and willing reviewers.

Of course, such a system could be gamed, but editors could choose not to
count reviews unless they reach a certain standard of excellence, while at
the same time taking care not to discourage researchers from reviewing at
all for their journal.

We can continue to bemoan the state of reviewing, and dream up sticks with
which to beat reviewers into helping, or we can come up with carrots. The
system I suggest here is cheap and appeals to both our better and worse
angels, motivating researchers with the carrot that matters most:
recognition for their standing and for their contributions.

*David Cameron Duffy is a frequent referee for a variety of journals and
is the former editor of Waterbirds. He is an ecologist who works on
seabirds and on perturbations in natural ecosystems. An earlier version
of this commentary appears on
Ecolog-L<http://www.mail-archive.com/ecolog-l@listserv.umd.edu/msg24801.html>.*
-- 

Pacific Cooperative Studies Unit
Botany
University of Hawaii
3190 Maile Way
Honolulu Hawaii 96822 USA
1-808-956-8218
