Re: UK Research Assessment Exercise (RAE) review

2005-03-19 Thread Stevan Harnad
Michael Day, Institutional repositories and research assessment.
http://www.rdn.ac.uk/projects/eprints-uk/docs/studies/rae/rae-study.pdf

ABSTRACT: This study concerns the potential role of institutional
repositories in supporting research assessment in universities
with specific reference to the Research Assessment Exercises in
the UK. After a brief look at research evaluation methods, it
introduces the UK Research Assessment Exercise (RAE), focusing
on its role in determining the distribution of research funding,
the assessment process itself, and some concerns that have been
raised by participants and observers. The study will then introduce
institutional repositories and consider the ways in which they might
be used to enhance the research assessment process in the UK. It will
first consider the role of repositories in providing institutional
support for the submission and review process. Secondly, the paper
will consider the ways in which citation linking between papers in
repositories might be used as the basis for generating quantitative
data on research impact that could be used for assessment. Thirdly,
this study will consider other ways in which repositories might
be able to provide quantitative data, e.g. usage statistics or
Webometric link data, which may be able to be used - together with
other indicators - to support the evaluation of research.
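
A minimal Python sketch of the citation-linking idea summarised above (the
record format and figures are invented for illustration and are not taken
from the study):

    # Toy sketch: turning reference links between repository records into
    # per-paper and per-institution citation counts.
    from collections import Counter

    records = {
        "paperA": {"institution": "Univ X", "references": ["paperB", "paperC"]},
        "paperB": {"institution": "Univ Y", "references": ["paperC"]},
        "paperC": {"institution": "Univ X", "references": []},
    }

    # Count how often each deposited paper is cited by the other deposits.
    cites = Counter(ref for rec in records.values()
                    for ref in rec["references"] if ref in records)

    # Aggregate by institution, as a crude impact indicator.
    by_institution = Counter()
    for paper_id, n in cites.items():
        by_institution[records[paper_id]["institution"]] += n

    print(cites)           # Counter({'paperC': 2, 'paperB': 1})
    print(by_institution)  # Counter({'Univ X': 2, 'Univ Y': 1})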

Prior AmSci Threads on the Topic:

"UK 'RAE' Evaluations" (2000)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1016.html

"UK Research Assessment Exercise (RAE) review" (2002)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2323.html

"Written evidence for UK Select Committee's Inquiry into Scientific
Publications" (2003)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/3263.html


Re: UK Research Assessment Exercise (RAE) review

2003-01-07 Thread Adrian Smith
http://education.guardian.co.uk/higher/books/story/0,10595,869621,00.html
The "Education" section of The Guardian 7th Jan 2003 includes Public Library
of Science [page 9] and RAE Review [pp.12-13]
see also http://www.rareview.ac.uk
http://education.guardian.co.uk/higher/comment/story/0,9828,869578,00.html

Adrian Smith


-Original Message-
From: Stevan Harnad [mailto:har...@ecs.soton.ac.uk]
Sent: 12 December 2002 20:43
To: jisc-developm...@jiscmail.ac.uk
Subject: Re: UK Research Assessment Exercise (RAE) review


Below are excerpts from some articles that appeared in the Times Higher
Education Supplement on December 6 2002 about potential changes in the
UK Research Assessment Exercise. The articles are not online, and I have
only excerpted the passages pertinent to my own proposal for making
the RAE simpler and cheaper, and at the same time more explicit and
accurate. [The article doesn't mention an essential component of my
proposal -- the online standardized RAE Curriculum Vitae for every
researcher http://www.eprints.org/self-faq/#research-funders-do which
would include other measurable indicators of research impact, such as
numbers of research students and grants (which are themselves correlated
with citation impact): The scientometric research assessment tools are
described at: http://opcit.eprints.org/ ]

Excerpts from:
RAE review by Mandy Garbner
Times Higher Education Supplement 06 December 2002

The research assessment exercise review panel wants to see radical
change...  Besides funding, the RAE has been criticised for the way
assessment is carried out...  Sir Gareth Roberts, who is chairing
the Joint Funding Bodies' review of the RAE, says... that few people
contributing to the consultation on its future have come up with
radical ways of changing it... [T]he review wants something more
radical than tinkering [such as changing the way panel members are
appointed]. It focuses on creating a "less burdensome assessment
method", rather than on funding.

One of the more radical proponents of change is Southampton
University's Stevan Harnad. He would like to see UK research made
"accessible and assessable continuously online" rather than in
a four-yearly process, as happens now. He argues that research in
every discipline can continue to be refereed, but can be made freely
accessible to all academics if they archive their own refereed
research online, bypassing the access tolls charged by research
publications. He says software can also determine research impact,
that is, how much other researchers cite research and build on it...

Harnad adds that the software to do this is available. All it
needs is for the RAE to encourage the process "by mandating that
all UK universities self archive all their annual refereed research
in their own e-print archives". The benefits, he says, include "a
far more effective and sensitive measure of research productivity
and impact at far less cost" to both universities and the RAE and a
strengthening of the uptake and impact of UK research by increasing
its accessibility. "The UK is uniquely placed to move ahead with
this and lead the world," Harnad says, "because the RAE is already
in place."

The public consultation on the future of the RAE closed last week. The
review panel is assessing the contributions and will report shortly.

David Clark: "The time involved in the process is
mind-boggling. Universities set up committees between each RAE to test
scenarios, model possible outcomes and consider reorganisations to
maximise potential income from the next exercise. The preparation of
submissions dominates universities in the run-up to the RAE. Yet the
outcome is so strongly correlated with research income from external
funders that the grading could be achieved at the press of a button."

---

Prior articles on this topic (2001)

Harnad, S. (2001) "Research access, impact and assessment." Times Higher
Education Supplement 1487: p. 16.
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/83/

Lawrence, S. (2001b) Free online availability substantially increases a
paper's impact. Nature Web Debates.
http://www.nature.com/nature/debates/e-access/Articles/lawrence.html


Re: UK Research Assessment Exercise (RAE) review

2002-12-12 Thread Stevan Harnad
Below are excerpts from some articles that appeared in the Times Higher
Education Supplement on December 6 2002 about potential changes in the
UK Research Assessment Exercise. The articles are not online, and I have
only excerpted the passages pertinent to my own proposal for making
the RAE simpler and cheaper, and at the same time more explicit and
accurate. [The article doesn't mention an essential component of my
proposal -- the online standardized RAE Curriculum Vitae for every
researcher http://www.eprints.org/self-faq/#research-funders-do which
would include other measurable indicators of research impact, such as
numbers of research students and grants (which are themselves correlated
with citation impact): The scientometric research assessment tools are
described at: http://opcit.eprints.org/ ]

Excerpts from:
RAE review by Mandy Garbner
Times Higher Education Supplement 06 December 2002

The research assessment exercise review panel wants to see radical
change...  Besides funding, the RAE has been criticised for the way
assessment is carried out...  Sir Gareth Roberts, who is chairing
the Joint Funding Bodies' review of the RAE, says... that few people
contributing to the consultation on its future have come up with
radical ways of changing it... [T]he review wants something more
radical than tinkering... It focuses on creating a "less burdensome
assessment method", rather than on funding.

One of the more radical proponents of change is Southampton
University's Stevan Harnad. He would like to see UK research made
"accessible and assessable continuously online" rather than in
a four-yearly process, as happens now. He argues that research in
every discipline can continue to be refereed, but can be made freely
accessible to all academics if they archive their own refereed
research online, bypassing the access tolls charged by research
publications. He says software can also determine research impact,
that is, how much other researchers cite research and build on it...

Harnad adds that the software to do this is available. All it
needs is for the RAE to encourage the process "by mandating that
all UK universities self archive all their annual refereed research
in their own e-print archives". The benefits, he says, include "a
far more effective and sensitive measure of research productivity
and impact at far less cost" to both universities and the RAE and a
strengthening of the uptake and impact of UK research by increasing
its accessibility. "The UK is uniquely placed to move ahead with
this and lead the world," Harnad says, "because the RAE is already
in place."

The public consultation on the future of the RAE closed last week. The
review panel is assessing the contributions and will report shortly.

David Clark: "The time involved in the process is
mind-boggling. Universities set up committees between each RAE to test
scenarios, model possible outcomes and consider reorganisations to
maximise potential income from the next exercise. The preparation of
submissions dominates universities in the run-up to the RAE. Yet the
outcome is so strongly correlated with research income from external
funders that the grading could be achieved at the press of a button."
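
Purely as an illustration of that "press of a button" claim (all figures
below are invented), a one-variable least-squares fit of grade against
external income already reproduces the grade banding roughly:

    # Toy sketch: if RAE grades closely track external research income, a
    # crude grading is just a simple fit of grade on income.
    incomes = [0.4, 1.1, 2.3, 4.0, 7.5]   # external income, GBP millions (invented)
    grades  = [2,   3,   4,   5,   5]     # grade actually awarded (invented)

    n = len(incomes)
    mean_x, mean_y = sum(incomes) / n, sum(grades) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(incomes, grades))
             / sum((x - mean_x) ** 2 for x in incomes))
    intercept = mean_y - slope * mean_x

    predicted = [round(intercept + slope * x) for x in incomes]
    print(predicted)   # [3, 3, 4, 4, 6] -- roughly tracking the awarded grades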

---

Prior articles on this topic (2001)

Harnad, S. (2001) "Research access, impact and assessment." Times Higher
Education Supplement 1487: p. 16.
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/83/

Holmes, Alison & Oppenheim, Charles (2001) Use of citation analysis to
predict the outcome of the 2001 Research Assessment Exercise for Unit of
Assessment (UoA) 61: Library and Information Management
http://informationr.net/ir/6-2/paper103a.html

Jaffe, Sam (2002) Citing UK Science Quality: The next Research
Assessment Exercise will probably include citation analysis.
The Scientist 16(22): 54, Nov. 11, 2002
http://www.the-scientist.com/yr2002/nov/prof1_02.html

Lawrence, S. (2001) Free online availability substantially increases a
paper's impact. Nature Web Debates.
http://www.nature.com/nature/debates/e-access/Articles/lawrence.html

Oppenheim, C., (1997) The Correlation Between Citation Counts and
the 1992 Research Assessment Exercise Ratings for British Research in
Genetics, Anatomy and Archaeology, Journal of Documentation , 53(5),
1997, pp 477-487

Oppenheim, Charles (1996)  "Do citations count? Citation indexing and
the research assessment exercise," Serials, 9:155-61, 1996.

Smith, A. & M. Eysenck, "The correlation between RAE ratings and citation
counts in psychology," June 2002, available online at
http://psyserver.pc.rhbnc.ac.uk/citations.pdf.




Re: UK Research Assessment Exercise (RAE) review

2002-12-03 Thread Stevan Harnad
On Fri, 29 Nov 2002, Linda Humphreys wrote:

> At the University of Bath, academic staff are well aware of
> the costs and barriers to access of traditional journals,
> and I think the same would be true of most similar
> institutions. Budgets for journals have been tight for
> years, and we librarians liaise very closely with academic
> staff over cancellations, and the purchase of electronic
> subscriptions.

Researchers know their libraries have serials budget problems.
But they definitely do not know the causal connection between
access and impact and what can be done about it, otherwise they
would most definitely be doing that something.

On the contrary, your survey confirms how under-informed and
ill-informed researchers really are -- hence how important the BOAI's
efforts will be to inform them about the benefits of open access, the
means of attaining them, and the costs of not attaining them.

> We carried out a survey earlier this year of academic staff
> views of e-prints.  Only 74 replied - perhaps that is par
> for the course, or perhaps it backs up Jan's assertion that
> they are largely ignorant of open access/e-print issues?

I would say that 74 responses from all of the University of Bath
is evidence that researchers don't know what is at issue (nor why
they should be filling out surveys about it!).

(Here's another survey: http://www.eprints.org/results/m ).

> The majority of replies were from scientists.  You might be
> interested in a few results?
>
> Out of the 74 respondents:
> - 11 had posted articles on personal or departmental web
> pages
> - 3 had posted to an e-print server
> - 11 had used an e-print server for research and/or teaching

It would be interesting to do the test in reverse. For example,
http://citeseer.nj.nec.com/cs found 53 computer science papers
on the Web from University of Bath...

> They appeared to be generally well-informed of the problems
> surrounding self-archiving:
> - 62 expressed concern that if they posted a pre-print they
> would not be able to get the work published in their chosen
> journal

Well-informed? But the above is precisely the uninformed concern
that most researchers have been reflexively voicing without even having
thought about it, let alone having sought the actual data, for at least
a decade now:
http://www.lboro.ac.uk/departments/ls/disresearch/romeo/index.html
http://www.eprints.org/self-faq/#publishers-do

> - 60 said that copyright issues would be an important
> factor in any decision not to self-archive post-prints

Again, in weighing whether this is evidence of being well-informed or
ill-informed, it might be a good idea to note that the above two worries
have been voiced uninterruptedly for more than 10 years -- often enough
to have made their way into the self-archiving FAQ -- but with no sign
of those who expressed the worry having any awareness of the replies
to the worry, or of the fact that things have been changing, rapidly,
across the years: http://www.eprints.org/self-faq/#10.Copyright

> - 59 were concerned about quality and peer review issues
http://www.eprints.org/self-faq/#7.Peer

I regret that these surveys of uninformed opinion lead only to the blind
leading the blind!

> - 60 were concerned about plagiarism

Another popular item on the longstanding worry list (there are
at least 23 more!):
http://www.eprints.org/self-faq/#11.Plagiarism

> There seem to be several issues surrounding quality and
> peer review, including the common misconception about
> self-archiving being an alternative to self-publishing,

Indeed. But is the fact that so many of those who were surveyed
give evidence of subscribing to this common misconception again
evidence that they are well-informed?
http://www.ecs.soton.ac.uk/~harnad/Tp/resolution.htm#1.4

Is it not rather evidence of the fact that despite all that has been
said and written by those who are somewhat better informed about
such matters, an open-access information campaign still has its
work cut out for it, and might be the one thing we need most right
now, if open access is to be ushered in while we are still compos
mentis and in a position to benefit from it?

> and impact factors (which have been discussed at length on this
> list).

That is the most ironic symptom of well-informedness of all! For
maximizing research impact is precisely what open access is about.
Yet the usual self-archiving/self-publishing conflation allows these
respondents to keep believing that open access means giving up
publishing in their high impact-factor journals -- rather than what
it really means: keeping their (high-impact-journal) cake and eating it too
(by topping up that impact with perhaps an order of magnitude more
from opening access to the very same article).
http://www.nature.com/nature/debates/e-access/Articles/lawrence.html

> Also, a number of staff have commented that they
> would not wish to include their work in an archive which
> contained non-refereed material (pre-prints), the
> perception being that any inclusion of poor-quality papers
> reflects badly on the whole Institution.

Re: UK Research Assessment Exercise (RAE) review

2002-12-01 Thread Jan Velterop
Linda,

Much as I would like to agree with you that scientists do understand the
issues, I'm afraid that the survey results you offer are rather in support of
my suspicion that they don't, really.  For those who see the benefits and
want open access for their articles, the concerns listed are either
irrelevant, having little to do with open access, or relatively easily
surmountable.

As for the Springer Publishers copyright form in relation to US government
research, this is not a question of the US retaining copyright, but of US
government  research results not being copyrightable at all (effectively
making them public domain and therefore easily depositable - if that's a
word - in self- or institutional archives).

Best,

Jan Velterop

-Original Message-
From: Linda Humphreys
To: american-scientist-open-access-fo...@listserver.sigmaxi.org
Sent: 11/29/02 3:44 PM
Subject: Re: UK Research Assessment Exercise (RAE) review

On Fri, 29 Nov 2002 14:33:46 + Jan Velterop wrote:
>
> I'm not so sure that they do understand the
> concepts and benefits of open access. That is simply
> because they haven't really been exposed to them. The
> librarians have been very good in making it seem to many scientists as if
> access to their desired journal titles is free and easy. The researchers
> don't feel the pain. To them, as readers, it may often seem as if large
> parts of the literature are open access.


At the University of Bath, academic staff are well aware of
the costs and barriers to access of traditional journals,
and I think the same would be true of most similar
institutions.  Budgets for journals have been tight for
years, and we librarians liaise very closely with academic
staff over cancellations, and the purchase of electronic
subscriptions.

We carried out a survey earlier this year of academic staff
views of e-prints.  Only 74 replied - perhaps that is par
for the course, or perhaps it backs up Jan's assertion that
they are largely ignorant of open access/e-print issues?
The majority of replies were from scientists.  You might be
interested in a few results?

Out of the 74 respondents:
- 11 had posted articles on personal or departmental web
pages
- 3 had posted to an e-print server
- 11 had used an e-print server for research and/or teaching

They appeared to be generally well-informed of the problems
surrounding self-archiving:
- 62 expressed concern that if they posted a pre-print they
would not be able to get the work published in their chosen
journal
- 60 said that copyright issues would be an important
factor in any decision not to self-archive post-prints
- 59 were concerned about quality and peer review issues
- 60 were concerned about plagiarism

There seem to be several issues surrounding quality and
peer review, including the common misconception about
self-archiving being an alternative to self-publishing, and
impact factors (which have been discussed at length on this
list). Also, a number of staff have commented that they
would not wish to include their work in an archive which
contained non-refereed material (pre-prints), the
perception being that any inclusion of poor-quality papers
reflects badly on the whole Institution.

The concerns about plagiarism baffle me somewhat -
presumably it is just as easy to plagiarise an electronic
article on a toll-access publisher site as an e-print!  I
wonder if this is really about who will protect the
author's rights in the event of plagiarism from a pre-print
on an e-print server?

Regarding copyright, I was interested to note the Springer
Verlag copyright transfer form, which begins:
"The copyright to the contribution identified above is
transferred to Springer-Verlag ..(for U.S. government
employees: to the extent transferable)."
Presumably the U.S. government is retaining at
least some degree of copyright in work which is funded by
the taxpayer - does anyone have more information about
that? Is anyone (JISC? SCONUL?) lobbying the British
government and/or Universities to do likewise?

Linda

--
Linda Humphreys
Science Faculty Librarian
University of Bath
Claverton Down
Bath BA2 7AY
l.j.humphr...@bath.ac.uk
01225 385248


Re: UK Research Assessment Exercise (RAE) review

2002-11-30 Thread Linda Humphreys
On Fri, 29 Nov 2002 14:33:46 + Jan Velterop wrote:
>
> I'm not so sure that they  do understand the
> concepts and benefits of open access. That is simply
> because they haven't really been exposed to them. The
> librarians have been very good in making it seem to many scientists as if
> access to their desired journal titles is free and easy. The researchers
> don't feel the pain. To them, as readers, it may often seem as if large
> parts of the literature are open access.


At the University of Bath, academic staff are well aware of
the costs and barriers to access of traditional journals,
and I think the same would be true of most similar
institutions.  Budgets for journals have been tight for
years, and we librarians liaise very closely with academic
staff over cancellations, and the purchase of electronic
subscriptions.

We carried out a survey earlier this year of academic staff
views of e-prints.  Only 74 replied - perhaps that is par
for the course, or perhaps it backs up Jan's assertion that
they are largely ignorant of open access/e-print issues?
The majority of replies were from scientists.  You might be
interested in a few results?

Out of the 74 respondents:
- 11 had posted articles on personal or departmental web
pages
- 3 had posted to an e-print server
- 11 had used an e-print server for research and/or teaching

They appeared to be generally well-informed of the problems
surrounding self-archiving:
- 62 expressed concern that if they posted a pre-print they
would not be able to get the work published in their chosen
journal
- 60 said that copyright issues would be an important
factor in any decision not to self-archive post-prints
- 59 were concerned about quality and peer review issues
- 60 were concerned about plagiarism

There seem to be several issues surrounding quality and
peer review, including the common misconception about
self-archiving being an alternative to self-publishing, and
impact factors (which have been discussed at length on this
list). Also, a number of staff have commented that they
would not wish to include their work in an archive which
contained non-refereed material (pre-prints), the
perception being that any inclusion of poor-quality papers
reflects badly on the whole Institution.

The concerns about plagiarism baffle me somewhat -
presumably it is just as easy to plagiarise an electronic
article on a toll-access publisher site as an e-print!  I
wonder if this is really about who will protect the
author's rights in the event of plagiarism from a pre-print
on an e-print server?

Regarding copyright, I was interested to note the Springer
Verlag copyright transfer form, which begins:
"The copyright to the contribution identified above is
transferred to Springer-Verlag ..(for U.S. government
employees: to the extent transferable)."
Presumably the U.S. government is retaining at
least some degree of copyright in work which is funded by
the taxpayer - does anyone have more information about
that? Is anyone (JISC? SCONUL?) lobbying the British
government and/or Universities to do likewise?

Linda

--
Linda Humphreys
Science Faculty Librarian
University of Bath
Claverton Down
Bath BA2 7AY
l.j.humphr...@bath.ac.uk
01225 385248


Re: UK Research Assessment Exercise (RAE) review

2002-11-29 Thread Jan Velterop
On 28 November 2002 Barry Mahon  wrote:

> This whole argument (OA is better/cheaper/more efficient, etc., and
> misunderstood) runs the risk of becoming like politics and religion
> as subjects for argument, with ideology replacing reality.  Despite all
> the hype and noise, scientists still seem to prefer the well-known
> and well-understood paths to publishing - at the moment.

In the way that children prefer chocolate and sweets over vegetables,
perhaps? I am, however, convinced that this is so because they don't realise,
or at least don't fully realise, the benefits of open access publishing.

> I would wager that they understand quite well the concepts and
> advantages/disadvantages of OA but so far they consider the tried and
> tested to be as good if not better. The quote above about circularity
> is one of the measures of this.

I'm not so sure that they do understand the concepts and benefits of open
access. That is simply because they haven't really been exposed to them. The
librarians have been very good in making it seem to many scientists as if
access to their desired journal titles is free and easy. The researchers
don't feel the pain. To them, as readers, it may often seem as if large
parts of the literature are open access. The conventional science publishing
industry is like the cat food industry: they don't sell to the consumers,
they sell to the carers, the ones with the wallets. Little wonder that
scientists are often not aware of the issues of serials crises and open
access solutions. If they were, many would be likely to take an attitude to
publishing their research that is similar to their attitude towards
scientific problems: experiment and 'push the envelope'. The theory and the
hypotheses are clear. And experimental results are now, slowly but steadily,
becoming available, such as a generally higher rate of citation for articles
that are freely accessible to anyone.

> One of the possible problems of OA is the lack of simple (i.e. easy
> to access/available off a shelf) sources with well known titles and
> an inherent quality perception.

I'm sure that is one of the problems, just as it is a problem that there
aren't any healthy vegetables that taste like chocolate. Should that be a
reason not to try and move forward and work on the creation of sources,
titles and quality perception of open access?

> The same is true of RAE, in a way, it is perhaps crude but it is simple
> and it fits the understanding of present publication patterns by those
> who advise the government on such matters as RAE (we must not forget
> that these decisions are taken with the agreement of at least some of
> those who are so assessed).
>
> The newer ways of publishing have, like most new ideas, to overcome some
> 'not invented here' like reaction, some competitive jealousy from those
> economically affected and inertia. In addition OA has to prove that the
> writing will be seen by those who matter, including those performing RAE,
> and be easy to find when you are looking for citable material.

All true, and all the more reason to spread convincing arguments for open
access in order to overcome these hurdles.

> OA will become an accepted part of the research results dissemination
> process, it will be incorporated in whatever sorts of RAEs we will have
> and OA originated material will be identified and quoted like everything
> else. Do we have to agree that it will replace all the other methods?? In
> my opinion, no, we can discuss that as one scenario, if we wish, but
> let it not become the sine qua non of the discussions.

We don't have to agree beforehand that open access will replace conventional
publishing methods, as long as we can agree that there are clear, distinct
and worthwhile benefits in open access that are likely to contribute
materially to a significant increase of the pace of scientific discovery,
quite possibly at an appreciably lower aggregate cost than the conventional
system. Even then, open access may never replace conventional publishing
methods entirely (new methods never do: there is still a niche for
sailboats, horse and carriage, tophats, coattails, leather book covers, you
name it), but it deserves to be taken seriously and promoted for the sake of
scientific progress.

Jan Velterop


Re: UK Research Assessment Exercise (RAE) review

2002-11-29 Thread Stevan Harnad
On Thu, 28 Nov 2002, Jan Velterop wrote:

> The perception that I wanted to steer the discussion in the direction
> of peer-review reform is perhaps the reason why Stevan as moderator
> chose not to post my full contribution on the September98-list (fair
> enough, that's his prerogative) but only the bits to which he reacts

No, it was a mistake on my part! I think you posted it originally only to
BOAI, so I didn't re-post it to AmSci, but posted only my quote/comments.
I will go back now, and re-post it from the BOAI archives to AmSci.
I apologize..

> (I'll post the full contribution to the bmanifesto-list shortly, so
> that my open access friends can have a complete record of the
> discussion; the hiatuses are minor, but I just don't like censorship of
> any kind on discussion lists).

Jan, I take full responsibility for the times I invoke cloture, but I
always announce it openly. This was not such a case! It was an error.
(Did you post the original to AmSci too, or only to BOAI?)

> But my topic is not peer-review reform
> per se; the issue is and was the impediments that entrenched,
> traditional scientometric qualifyers are putting up for new open access
> journals. These impediments are presumably alright if one believes that
> open access to peer-reviewed literature is only ever realistically
> possible if articles published in entrenched, traditional journals are
> being mounted on open institutional or self-archives, but I don't, and
> I happen to know quite a few people who believe with me that there are
> other ways to the proverbial Rome as well, such as journals published
> with open access from the outset.

It is not my (undenied) preference for the BOAI-1 route (for the reasons
I have so often stated) that is responsible for the fact that any new
journal, regardless of medium or economic model, faces an uphill battle
until it manages to establish a track-record of some kind. "Track-record"
simply means reliable and persisting evidence (hence predictor)
of quality.

This fact about the need for a track-record is not (just) because of a
bias for "traditional" journals; nor is it because of the tyranny of the
ISI journal-impact factor (though it is certainly time for multivariate
scientometric flowers to be added to the orchard); nor is it in any way
because of my own support for BOAI-1!

The struggle to establish a credible track-record as soon as possible is
quite understandable for all new journals. They require that in order
to attract and sustain authors, readers, citations, and all the other
things a journal needs in order to have high quality and impact.

The BMC journals are meeting this challenge admirably, and the Faculty
of 1000 reviews are a valuable factor in this.

But if BMC is still facing challenges, please don't attribute
them to my own efforts to promote a parallel path to open access (and
one that may eventually make things easier for open-access journals too,
if it succeeds).

>jv>Of course one can subsequently quantify such qualitative
>jv>information. But what a known and acknowledged authority thinks
>jv>of an article is, to many, more interesting than what anonymous
>jv>peer-reviewers think. Any research assessment exercise should
>jv>seriously look at resources such as offered by Faculty of 1000.
>
>sh> Let 1000 flowers bloom. But it's rather mis-stating the options to
>sh> describe them as open-review vs. anonymous-review! Classical peer
>sh> review is one thing. Then there is post-hoc open-review thereafter.
>
> Your mis-readings; not my mis-statements. Calling F1000 'secondary
> review', as I did,  is clearly implying that it is complementary, and
> not an alternative to conventional journal peer-review. The reason why
> any research assessment exercise should look at such secondary
> resources is that they offer a) a second opinion by a known reviewer,
> and b) an opinion on individual papers rather than average track
> records of journals in which those papers were published.

I stand corrected. I was misled by the fact that you were advocating
nonanonymous primary peer review too (but we agree that that is
still just classical peer review, and essential).

>jv>  "All BMC's medical [journals] have open peer review which works most
>jv>  satisfactorily."

Stevan Harnad


Re: UK Research Assessment Exercise (RAE) review

2002-11-29 Thread Stevan Harnad
On Thu, 28 Nov 2002, Barry Mahon wrote:

> This whole argument (OA is better/cheaper/more efficient.etc and
> misunderstood) runs the risk of becoming like politics and religion
> as subjects for argument, ideology replace reality.  Despite all the
> hype, and noise scientists still seem to prefer the well known and well
> understood paths to publishing - at the moment.

I am not quite sure why you describe the promotion of open access as
hype and noise. The Budapest Open Access Initiative
http://www.soros.org/openaccess/ and other active proponents of open
access are trying to help research and researchers; they are not trying
to sell a product.
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2212.html

Perhaps you are thinking about the promotion of open-access journals?
But surely journals have the right to promote themselves, regardless
of their cost-recovery model?

> I would wager that they understand quite well the concepts and
> advantages/disadvantages of OA but so far they consider the tried and
> tested to be as good if not better.

My own experience over a number of years of rather active proselytizing
for open access http://www.ecs.soton.ac.uk/~harnad/talks.htm suggests
that you would lose that wager. Researchers are remarkably,
breathtakingly under-informed and confused on this issue. That is why
the BOAI's main mission is not to invent the means of attaining open
access (the means exist already, have been abundantly tested, and work)
but to inform the research community about the causal connection between
research access and research impact, and then how to go about maximizing
it (through open access). I don't think you will find many researchers
who will say: "I know and understand the causal connection between
maximizing access and maximizing impact, and I am not interested in
(or opposed to) maximizing the impact of my work." Even less will you
find university employers of researchers or government funders of
research who would endorse such conclusions.

So it's a far better bet that the problem is under-informedness and
ill-informedness, rather than rational judgment. Indeed, why the
research community is so slow to realize and act upon what is,
upon just a little reflection, optimal, reachable, and indeed inevitable
for research and researchers might better be dubbed a "koan" than an
instance of clear understanding!

Re: The "big koan"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2053.html

Harnad, S. (1999) Free at Last: The Future of Peer-Reviewed Journals.
D-Lib Magazine 5(12) December 1999
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/85/

Unfortunately, Barry, your own further comments indicate how widespread
the misunderstanding still is:

> One of the possible problems of OA is the lack of simple (i.e. easy
> to access/available off a shelf) sources with well known titles and
> an inherent quality perception.

Assuming that by OA you mean open access, it is rather hard to construe
what, exactly, you have in mind above. Open Access, as clearly stated,
for example, in the BOAI statement, means free online access to the
peer-reviewed research literature. Now what on earth can your statement
mean relative to that definition? There are currently 20,000
peer-reviewed journal titles, each with its own inherent quality, and
perceptions of it: 2,000,000 articles annually. Open access is about
making access to those 2,000,000 annual articles free online.

You speak of

"the lack of simple (i.e. easy to access/available off a shelf)
sources with well known titles"

How are we even to begin to construe this, so many are the internal
inconsistencies betraying a rather transparent nonunderstanding of what open
access is about -- the very same kinds of nonunderstanding that prevail
in the rest of the research community:

Lack of easy-to-access titles? But that's precisely the problem to which
open access is the proposed solution. Most titles are not easy to access:
They require toll-access.

"Off the shelf"? Apart from rather biassing the question toward paper
access, what can this possibly mean? A researcher has neither on-paper
nor on-line access to journals for which his institution cannot afford
the access-tolls, and that is what this is all about!

So unless you were suggesting that inaccessibility is evidence of the
absence of a need for accessibility here, I suspect that what you might
have had in mind is also the most common error in the research community's
construal of open access today: Open access is not an alternative to
peer-reviewed journals, it is an alternative means of accessing them.

Two such means have been proposed: BOAI-1 is the authors of those
2,000,000 articles providing open access to them by self-archiving
them, and BOAI-2 is the publishers of those 20,000 journals providing
open-access to them by conversion to open-access or the creation of
new open-access journals.

If is this very last subset of the several parall

Re: UK Research Assessment Exercise (RAE) review

2002-11-29 Thread David Goodman
Jan, I want to comment on one point only:

On Thu, 28 Nov 2002, Jan Velterop wrote:
>
> Close analysis of the track record of many journals shows an enormous
> variability in rates of citation for the articles published in them. If
> my journal publishes mostly 'landfill' science, but I manage to
> attract a few brilliant review articles (for instance by paying the
> review-author generously), I can secure a reasonable impact factor, the
> common measure of a journal's track record. This is not hypothesis, but
> widespread reality. Secondary evaluation brings this to the fore, and
> that's why secondary evaluation, in the manner of for instance Faculty
> of 1000, is so important.

The analysis of this, and the corrections for it, do not rely only on
non-quantitative reviews, but are accessible to current bibliometric
techniques -- the help pages for JCR even explain the technique for
removing review articles from the count.

Another interesting measure, which ISI currently supplies only on a custom
basis, but can certainly be calculated independently by anyone willing to
do the work, is the proportion of articles with 0, 1, 2, ..., n
citations.
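
A small illustrative sketch of both measures, with invented figures (this is
not ISI's actual procedure):

    # Toy sketch: mean citations per article with review articles excluded,
    # plus the share of articles receiving 0, 1, 2, ... citations.
    from collections import Counter

    articles = [  # (citations received, is_review) -- invented figures
        (0, False), (1, False), (0, False), (2, False),
        (45, True), (38, True), (1, False), (0, False),
    ]

    non_review = [c for c, is_review in articles if not is_review]
    mean_all = sum(c for c, _ in articles) / len(articles)
    mean_non_review = sum(non_review) / len(non_review)

    distribution = Counter(c for c, _ in articles)
    share = {c: n / len(articles) for c, n in sorted(distribution.items())}

    print(f"mean citations, all articles:     {mean_all:.2f}")         # 10.88
    print(f"mean citations, reviews excluded: {mean_non_review:.2f}")  # 0.67
    print("share of articles with k citations:", share)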

I also need to emphasize that I regard Jan's work highly, and agree that
human judgements are a useful check on quantitative techniques as well as
a supplement to them. And of course reviews are useful in their own
right--if I didn't think so I would not have been reviewing reference
books and databases for CHOICE for the last 15 years or so.

 Dr. David Goodman
Princeton University Library
and
Palmer School of Library & Information Science

dgood...@princeton.edu


Re: UK Research Assessment Exercise (RAE) review

2002-11-28 Thread David J. Solomon, Ph.D.
At 02:13 PM 11/28/2002 +, Barry Mahon ICSTI wrote:

>This whole argument (OA is better/cheaper/more efficient, etc., and
>misunderstood) runs the risk of becoming like politics and religion
>as subjects for argument, with ideology replacing reality.  Despite all the
>hype and noise, scientists still seem to prefer the well-known and
>well-understood paths to publishing - at the moment.
>
>I would wager that they understand quite well the concepts and
>advantages/disadvantages of OA but so far they consider the tried and
>tested to be as good if not better.

This has not been my experience; in fact, just the opposite. In my
experience most faculty are fairly ignorant of open archives, the serial
pricing crisis in scholarly publishing and the inherent problems of a "pay
for access" publishing model.  They are also pretty apathetic because they
are unaware of the problems and haven't thought through the real potential
of electronic publication.

I am a medical educator and have given several talks on the topic and had
numerous discussions among both educational professionals and academic
physicians, and the level of ignorance of these issues is always high. In my
experience, once the issues are explained, most faculty can see the problems
with the current system and the potential of electronic publication and open
archiving, but as noted by Barry Mahon, most remain skeptical and concerned
about the quality and maintenance issues of moving to an open access system.
The only group that seems to be well informed and concerned, not
surprisingly, is the research librarians.

Maybe I'm pessimistic, but I can't help thinking Max Planck was right.

"New scientific truth does not triumph by convincing its opponents and
making them see the light, but rather because its opponents eventually
die, and a new generation grows up that is familiar with it."

Max Planck, "Scientific Autobiography and Other Papers", Williams &
Norgate, London (1950), pages 33-34.

David Solomon, Ph.D.
A-202 E. Fee Hall
MSU
E. Lansing, Mi 48824

(517) 353-2037 Voice
(517) 432-1798
dsolo...@msu.edu


Re: UK Research Assessment Exercise (RAE) review

2002-11-28 Thread Jan Velterop
On Wednesday, November 27, 2002, at 01:06 PM, Stevan Harnad wrote:

> On Wed, 27 Nov 2002, Jan Velterop wrote:
>
>> I meant to give an example of a complement to quantification.
>
> Signed open secondary reviews are certainly a complement to both
> scientometric measures and primary (peer) reviews. All direct human
> judgments are. But they are also countable, content-analyzable, comparable
> against other data, including the track-record of the reviewer's name,
> hence amenable to scientometrics.

I never claimed that that wasn't the case.

> By the way, primary peer reviews are not usually signed by the  referees'
> names, but they are always signed by the journal-name. Hence the journal
> and its editor are openly accountable for the quality of the papers it
> accepts (and, indirectly, for those it rejects too!). That is why the
> journal-name and track-record are such important indicators, both for
> scientometric assessment and for navigation by the would-be user trying
> to decide what is worth reading and safe to try to build upon.

Close analysis of the track record of many journals shows an enormous
variability in rates of citation for the articles published in them. If
my journal publishes mostly 'landfill' science, but I manage to
attract a few brilliant review articles (for instance by paying the
review-author generously), I can secure a reasonable impact factor, the
common measure of a journal's track record. This is not hypothesis, but
widespread reality. Secondary evaluation brings this to the fore, and
that's why secondary evaluation, in the manner of for instance Faculty
of 1000, is so important.
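
As a toy numerical illustration of that variability (figures invented): the
journal-level mean reported by an impact factor can be dominated by a couple
of heavily cited reviews while the typical article remains essentially
uncited.

    # Toy sketch: impact-factor-style mean versus the median article.
    from statistics import mean, median

    citations = [0, 0, 1, 0, 2, 1, 0, 0, 52, 47]  # 8 ordinary papers + 2 reviews

    print(f"mean   (journal-level, impact-factor-style): {mean(citations):.1f}")  # 10.3
    print(f"median (the typical article):                {median(citations):.1f}")  # 0.5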

>> Much of the trouble is not quantification per se, but the lack of
>> information to enable weighting the votes.
>
> To a great extent scientometrics is about finding the proper weightings
> for those votes!
>
>> The journals (well, at least some of them) lend a certain weight to
>> their peer-review, but this peer-review is almost always anonymous.
>
> Journal quality varies, both within journals (owing to human
> fallibility) and between journals (owing to systematic differences in
> peer-review standards and hence quality). The journal, however, is never
> anonymous. Its reputation is answerable to the degree to which it
> improves article quality through peer review, and the quality
> selectivity it exercises.

This sounds like an 'ideal market' argument. Ideal markets don't exist
either.

> I will not rehearse here the long, old list of arguments for and  against
> referee anonymity. The primary argument against referee anonymity is
> answerability (to ensure qualifications, minimize bias, etc.). The
> primary argument for anonymity is freedom (to exercise judgment without
> risk of counter-bias, e.g., when a junior researcher is reviewing the
> work of a senior researcher). Referee anonymity is normally offered as
> an option which some referees choose to exercise and some do not,
> depending on the referee and the circumstances. But the real protection
> against bias is supposed to be the editor (to whom the referee certainly
is not anonymous) and the reputation of the journal. A biassed choice of
> referees will generate biassed referee reports and biassed journal
> contents. That is a matter of public record. The remedy is either to
> replace the editor or to switch to a rival journal.
>
> But this is all on the topic of peer review reform, which is not the
> focus of this Forum. This Forum is concerned with freeing the current
> peer-reviewed research literature (20,000 peer-reviewed journals) from
> access-tolls, not about freeing it from, or modifying, peer review.

The perception that I wanted to steer the discussion in the direction
of peer-review reform is perhaps the reason why Stevan as moderator
chose not to post my full contribution on the September98-list (fair
enough, that's his prerogative) but only the bits to which he reacts
(I'll post the full contribution to the bmanifesto-list shortly, so
that my open access friends can have a complete record of the
discussion; the hiatuses are minor, but I just don't like censorship of
any kind on discussion lists). But my topic is not peer-review reform
per se; the issue is and was the impediments that entrenched,
traditional scientometric qualifiers are putting up for new open access
journals. These impediments are presumably alright if one believes that
open access to peer-reviewed literature is only ever realistically
possible if articles published in entrenched, traditional journals are
being mounted on open institutional or self-archives, but I don't, and
I happen to know quite a few people who believe with me that there are
other ways to the proverbial Rome as well, such as journals published
with open access from the outset.

> That second agenda will first require some empirical testing and comparison,
> which has not yet been done, to my knowledge. To put it another way:
> the alternative to toll-access, namely, open-acce

Re: UK Research Assessment Exercise (RAE) review

2002-11-28 Thread Barry Mahon
>Date:Tue, 26 Nov 2002 08:39:22 +0100
>From:informa...@supanet.com
> Re: The circularity Stevan refers to is "You cannot cite what you
>haven't read, you tend not to read what is not stocked in your library (or
>readily available online), and your library tends not to stock what isn't
>cited".

and

> Re: "Just as the widespread *perception* that self-archiving is basically
> self-publishing, or otherwise dangerously close to breaking copyright
> law, is hampering progress with institutional repositories"

and

> Re: "Indeed. So forget about relying on your library (and the access
>tolls it may or may not be able to afford) and make your research openly
>accessible for free for all by self-archiving it. And if you are in a
>developing country and you need it, help in doing this is available
>from the Soros Foundation's Budapest Open Access Initiative:
>http://www.soros.org/openaccess/"

This whole argument (OA is better/cheaper/more efficient, etc., and
misunderstood) runs the risk of becoming like politics and religion
as subjects for argument, with ideology replacing reality.  Despite all the
hype and noise, scientists still seem to prefer the well-known and
well-understood paths to publishing - at the moment.

I would wager that they understand quite well the concepts and
advantages/disadvantages of OA but so far they consider the tried and
tested to be as good if not better. The quote above about circularity
is one of the measures of this.

One of the possible problems of OA is the lack of simple (i.e. easy
to access/available off a shelf) sources with well known titles and
an inherent quality perception.

The same is true of RAE, in a way, it is perhaps crude but it is simple
and it fits the understanding of present publication patterns by those
who advise the government on such matters as RAE (we must not forget
that these decisions are taken with the agreement of at least some of
those who are so assessed).

The newer ways of publishing have, like most new ideas, to overcome some
'not invented here' like reaction, some competitive jealousy from those
economically affected and inertia. In addition OA has to prove that the
writing will be seen by those who matter, including those performing RAE,
and be easy to find when you are looking for citable material.

OA will become an accepted part of the research results dissemination
process, it will be incorporated in whatever sorts of RAEs we will have
and OA originated material will be identified and quoted like everything
else. Do we have to agree that it will replace all the other methods?? In
my opinion, no, we can discuss that as one scenario, if we wish, but
let it not become the sine qua non of the discussions.

Barry Mahon ICSTI


Re: UK Research Assessment Exercise (RAE) review

2002-11-28 Thread David Goodman
The relatively trivial thing to see is to what extent it predicts
short-term and long-term use, as measured by standard techniques.
(There is obviously a circularity problem here: the very fact of
inclusion in F1000 will increase use.)

The practical value of F1000 is that if one trusts the reviewer, one can
use that person's guidance. The basic problem is the same as with book
reviews -- to what extent does one trust that reviewer? This is easy to
decide for a single individual: does the reviewer think important the same
things I do? It is not as applicable for a professional field as a whole.
Further, it is very difficult to state in objective and quantifiable terms.

Actually, based on most F1000 reviews I've seen, the reviewers tend to
emphasize immediate interest rather than long-term value, and to do so
deliberately. This may well be a good policy: they are reviewing what one
should read now. And in that sense it offers another dimension. The only
easy way I know of to check its validity for this purpose is
inter-reviewer consistency. Have you made any measurements of correlations
between your reviewers?  The more difficult way is consistency with the
judgements of the readers. This has traditionally been measured in the
publishing field by the number of subscribers/purchasers etc. Do you have
any usage figures?  In particular, do those people who try it keep using it?
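
One crude version of such an inter-reviewer consistency check, sketched here
with invented ratings (real F1000 data would be needed for any actual
measurement):

    # Toy sketch: correlation between two reviewers' scores for the same papers.
    def pearson(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    # Scores (e.g. 1 = "recommended" ... 3 = "exceptional") given by two
    # reviewers to the same six papers -- invented for illustration.
    reviewer_a = [3, 2, 1, 3, 2, 1]
    reviewer_b = [3, 1, 1, 2, 2, 1]

    print(f"inter-reviewer correlation: {pearson(reviewer_a, reviewer_b):.2f}")  # 0.82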

On Wed, 27 Nov 2002, Jan Velterop wrote:

> David,
>
> I'm not sure that 'accuracy' is a relevant notion in relation to Faculty of
> 1000. The faculty-members offer their opinions on papers they deem of
> interest. I quote from a response I sent earlier to one of Stevan Harnad's
> contributions to this list: The point of Faculty of 1000 is that an open,
> secondary review of published literature by acknowledged leaders in the
> field, signed by the reviewer, is seen by increasing numbers of researchers
> (measured by the fast-growing usage figures of F1000) as a very meaningful
> addition to quantitative data and a way to sort and rank articles in order
> of importance. Of course one can subsequently quantify such qualitative
> information. But what a known and acknowledged authority thinks of an
> article is, to many, more interesting than what anonymous peer-reviewers
> think.
>
> What would you have in mind with regard to accuracy in this regard?
>
> Jan Velterop
>
> > -Original Message-
> > From: David Goodman [mailto:dgood...@princeton.edu]
> > Sent: 26 November 2002 19:36
> > To: american-scientist-open-access-fo...@listserver.sigmaxi.org
> > Subject: Re: UK Research Assessment Exercise (RAE) review
> >
> >
> > Jan, do you have any data demonstrating the accuracy of the
> > evaluations in faculty of 1000?
> >
> > Dr. David Goodman
> > Princeton University Library
> > and
> > Palmer School of Library & Information Science, Long Island University
> > dgood...@princeton.edu
>

Dr. David Goodman
Biological Sciences Bibliographer
Princeton University Library
dgood...@princeton.edu


Re: UK Research Assessment Exercise (RAE) review

2002-11-27 Thread Stevan Harnad
On Wed, 27 Nov 2002, Jan Velterop wrote:

> I meant to give an example of a complement to quantification.

Signed open secondary reviews are certainly a complement to both
scientometric measures and primary (peer) reviews. All direct human
judgments are. But they are also countable, content-analyzable, comparable
against other data, including the track-record of the reviewer's name,
hence amenable to scientometrics.

By the way, primary peer reviews are not usually signed by the referees'
names, but they are always signed by the journal-name. Hence the journal
and its editor are openly accountable for the quality of the papers it
accepts (and, indirectly, for those it rejects too!). That is why the
journal-name and track-record are such important indicators, both for
scientometric assessment and for navigation by the would-be user trying
to decide what is worth reading and safe to try to build upon.

> Much of the trouble is not quantification per se, but the lack of
> information to enable weighting the votes.

To a great extent scientometrics is about finding the proper weightings
for those votes!

> The journals (well, at least some of them) lend a certain weight to
> their peer-review, but this peer-review is almost always anonymous.

Journal quality varies, both within journals (owing to human
fallibility) and between journals (owing to systematic differences in
peer-review standards and hence quality). The journal, however, is never
anonymous. Its reputation is answerable to the degree to which it
improves article quality through peer review, and the quality
selectivity it exercises.

I will not rehearse here the long, old list of arguments for and against
referee anonymity. The primary argument against referee anonymity is
answerability (to ensure qualifications, minimize bias, etc.). The
primary argument for anonymity is freedom (to exercise judgment without
risk of counter-bias, e.g., when a junior researcher is reviewing the
work of a senior researcher). Referee anonymity is normally offered as
an option which some referees choose to exercise and some do not,
depending on the referee and the circumstances. But the real protection
against bias is supposed to be the editor (to whom the referee certainly
is not anonymous) and the reputation of the journal. A biassed choice of
referees will generate biassed referee reports and biassed journal
contents. That is a matter of public record. The remedy is either to
replace the editor or to switch to a rival journal.

But this is all on the topic of peer review reform, which is not the
focus of this Forum. This Forum is concerned with freeing the current
peer-reviewed research literature (20,000 peer-reviewed journals) from
access-tolls, not about freeing it from, or modifying, peer review. That
second agenda will first require some empirical testing and comparison,
which has not yet been done, to my knowledge. To put it another way:
the alternative to toll-access, namely, open-access, has been tried,
tested, shown to work, and shown to be far more beneficial to research
and researchers. The alternatives to peer-review have not (yet) been.

The present RAE assessment/impact thread is about ways to accelerate the
transition to open access by SUPPLEMENTING classical peer review with
rich new scientometric measures of impact that are co-evolving with an
open-access database. It is not about substitutes or reforms for classical
peer review. Those are another (worthy) matter, for another forum.

"Peer Review Reform Hypothesis-Testing"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0479.html

> Reviewers may not even be proper 'peers' in some cases.

Yes, occasionally some conscientious editors err in their choice of
referees, or in their evaluation of their reports. Some human error is
inevitable (even by the most peerless of peers), but one hopes that when
the error is systematic (i.e., bias or incompetence) the open,
answerable dimension of the system -- namely, the journal's and editor's
names and reputations -- will help expose, control and correct such errors.

> Stevan speculates that "Perhaps reviewer-names could accrue some
> objective scientometric weight...". I would perhaps remove the 'perhaps'.

Note that I was speaking of secondary, open reviewers, in review journals
or in open peer commentary or in ratings, all appearing after the article
has been published. Those are all valuable supplements to the current
system. But I was certainly not recommending abandoning the option
of referee anonymity  in primary peer review (until the logic and
empirical consequences of such a change are analyzed and tested
thoroughly) -- although untested recommendations along those lines have
been made by others (including some bearing the same surname as myself!
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0303.html ).

> Maybe it has its own set of problems, but disclosing the peers' identity may
> be a great help in assessing the weight or significance of the review.

Re: UK Research Assessment Exercise (RAE) review

2002-11-27 Thread Jan Velterop
David,

I'm not sure that 'accuracy' is a relevant notion in relation to Faculty of
1000. The faculty-members offer their opinions on papers they deem of
interest. I quote from a response I sent earlier to one of Stevan Harnad's
contributions to this list: The point of Faculty of 1000 is that an open,
secondary review of published literature by acknowledged leaders in the
field, signed by the reviewer, is seen by increasing numbers of researchers
(measured by the fast-growing usage figures of F1000) as a very meaningful
addition to quantitative data and a way to sort and rank articles in order
of importance. Of course one can subsequently quantify such qualitative
information. But what a known and acknowledged authority thinks of an
article is, to many, more interesting than what anonymous peer-reviewers
think.

What would you have in mind with regard to accuracy in this regard?

Jan Velterop

> -Original Message-
> From: David Goodman [mailto:dgood...@princeton.edu]
> Sent: 26 November 2002 19:36
> To: american-scientist-open-access-fo...@listserver.sigmaxi.org
> Subject: Re: UK Research Assessment Exercise (RAE) review
>
>
> Jan, do you have any data demonstrating the accuracy of the
> evaluations in faculty of 1000?
>
> Dr. David Goodman
> Princeton University Library
> and
> Palmer School of Library & Information Science, Long Island University
> dgood...@princeton.edu


Re: UK Research Assessment Exercise (RAE) review

2002-11-27 Thread Sinisa Maricic
The outer (non-ISI) circle:

Hope the [linked] review paper of mine is not (already) superfluous. The
bottom line is, of course, (Stevan's) OAI.
http://dlist.sir.arizona.edu/archive/0087/

Siniša Maričić
HR-1 Zagreb, Poljička 12/D-419, Croatia
smari...@rocketmail.com


Re: UK Research Assessment Exercise (RAE) review

2002-11-27 Thread informania
Tim,

Thanks for the very interesting and suggestive data.

At the risk of crossing threads, I wonder if this citation pattern is
peculiar to physics or, even more narrowly, to arXiv? As arXiv grows to
encompass more/all of physics, the volume of submissions should make the
"inbox of research" approach unfeasible. I can't see this ever being
possible in the biomedical field - not even now.

In contemporary medicine, the watchword is "evidence" - practice should
follow the best evidence, seen as a collection of research testimonials to
the validity of a particular course of action, or of controlled clinical
trials, rather than a single "high impact" paper. This evidence may go back
in time some way (there has been much trawling through the back-catalogue
for HIV/AIDS case reports, for example, as a number of these pre-date the
definition of HIV/AIDS and offer clues to the genesis of the syndrome). The
reading and citation results in the field are thus likely to be strongly
influenced by this pattern of research and reading.

It would be interesting to see empirical data such as you have gathered for
arXiv/physics from other fields.

Chris

Chris Zielinski
Director, Information Waystations and Staging Posts Network
Currently External Relations Officer, HTP/WHO
Avenue Appia, CH-1211, Geneva, Switzerland
Tel: 004122-7914435 Mobile: 0044797-10-45354
e-mail: zielins...@who.int and informa...@supanet.com
web site: http://www.iwsp.org

- Original Message -
From: "Tim Brody" 
To: 
Sent: Tuesday, November 26, 2002 7:23 PM
Subject: Re: UK Research Assessment Exercise (RAE) review


> Chris Zielinski asks:
>
> > how many articles have been read but not cited?
>
> The following estimates are from Citebase's database
> (http://citebase.eprints.org/) -
>
> (but duly noting caveats on data-quality, scope, coverage, noisiness,
> potential for abuse etc, http://citebase.eprints.org/help/coverage.php
> http://citebase.eprints.org/help/#impactwarning )
>
> Looking at the 91,017 arXiv.org articles that have a "journal reference"
> (the author has said where the article was/will be published)
>
> 17628 (19.4%) have not been cited but have at least once been downloaded
> from uk.arXiv.org
>
> (of the remainder 73265 have both been cited and downloaded, 98 have been
> cited but not downloaded, and 26 were neither cited nor downloaded)
>
> I believe this is because physicists read all the new additions to the
> arXiv.org, as it forms a convenient "inbox" of research. However, over time
> downloads are more discerning between low impact and high impact (pink line
> is the top quartile of papers by citation impact):
> http://citebase.eprints.org/analysis/hitslatencybyquartile.png
>
> Correlation r between "hits" and citation impact for the top quartile is
> 0.3359 with an n of 25,532.
>
> Citations and downloads are mutually reinforcing. If an author has read an
> article they are more likely to cite it, conversely if an author sees a
> citation they are likely to read the article that has been cited.
>
> All the best,
> Tim.
>
> - Original Message -
> From: 
> To: 
> Sent: Tuesday, November 26, 2002 7:39 AM
>
> > In fact, Stevan mentions "other new online scientometric measures such as
> > online usage ["hits"], time-series analyses, co-citation analyses and
> > full-text-based semantic co-analyses, all placed in a weighted multiple
> > regression equation instead of just a univariate correlation". Indeed,
> > impact factors are very crude quasi-scientometric and subjective measures
> > compared even with such simple information (easy to obtain for online media)
> > as counts of usage - for example, how many articles have been read but not
> > cited?
> >
> > All these are indeed worth pursuing and, I would have thought, right on the
> > agenda of the OA movement.
> >
> > Chris Zielinski
> > Director, Information Waystations and Staging Posts Network
> > Currently External Relations Officer, HTP/WHO
> > Avenue Appia, CH-1211, Geneva, Switzerland
> > Tel: 004122-7914435 Mobile: 0044797-10-45354
> > e-mail: zielins...@who.int and informa...@supanet.com
> > web site: http://www.iwsp.org


Re: UK Research Assessment Exercise (RAE) review

2002-11-27 Thread informania
Stevan,

Thanks for the clarification of your definitions and position on impact
measurements (in particular, for "The ISI journal-impact factor's vicious
circularity is part of the toll-access circle, which is precisely what
open-access and self-archiving are designed to free us from!"). I am glad to
see that you endorse a far broader scientometric system than is offered by
ISI impact factors. Much of our discussion now seems to run along parallel
tracks.

This is to comment on a couple of your other points:

> Anything-metric simply means "measured." I
> assume that if we want research to have an impact, we'd also like to be
> able to measure that impact. Peer reviewers are the first line of defense,
> using direct human judgment and expertise as to the quality and hence
> the potential impact of research. But I assume we don't want
> to make full peer-review our only impact metric, and to just keep on
> repeating it (e.g., by the RAE assessors). (It takes long enough to get
> a paper refereed once!) So what are the other alternatives? And are any
> of them "non-scientometric"? If they are objective, quantified measures,
> I can't see that there is any other choice!

Scientometrics is the measurement of science ("quantitative aspects of the
science of science"). I was objecting to the implication that the
scientometrics of impact factors was a scientific discipline in the
Popperian falsifiable sense, when there is much that is subjective buried in
the ISI impact factor (as I attempted to illustrate in suggesting that there
are all kinds of subjective reasons why researchers choose to cite one
journal rather than another). What you are measuring with some objectivity
is the behaviour of citers rather than the quality of the literature. Hence
my later use of the word "quasi-scientometric" in this regard.

> As to the reason why a researcher might cite a paper in the British Medical
> Journal rather than the Bhutan Medical Journal: There are many reasons
> (and let's admit that sometimes some of them really do have to do with
> quality differences, if only because the Bhutan researchers, far more
> deprived of research-access than British ones, are unable to do more
> fully-informed research as a consequence).

In common with many developing country medical journals, the (fictional)
Bhutan Medical Journal would focus on public health systems and health
services research, rather than basic medical research (which would be
difficult, as you say) and as such would be rather more worthy of citation
in this domain than any northern journal. I have a catalogue of quotations
illustrating the fact that there is plenty of material in developing country
journals which is of unique value and importance, but which is being
neglected through a lack of access. There are a number of projects aiming to
open up access to this literature online (including the ExtraMED project I
am developing with BioMed Central, as well as SciELO, Bioline, African
Journals Online...).

Chris Zielinski
Director, Information Waystations and Staging Posts Network
Currently External Relations Officer, HTP/WHO
Avenue Appia, CH-1211, Geneva, Switzerland
Tel: 004122-7914435 Mobile: 0044797-10-45354
e-mail: zielins...@who.int and informa...@supanet.com
web site: http://www.iwsp.org


Re: UK Research Assessment Exercise (RAE) review

2002-11-27 Thread Jan Velterop
The semantic whip "what is scientometrics?" may lash, but doesn't quite
crack, in my opinion. If Stevan says "I don't think that in reminding us
[...], Jan is not giving us an alternative to scientometric
quantification.", does that mean that he *does* think I *do*?

Good. I didn't even mean to.

I meant to give an example of a complement to quantification.

Much of the trouble is not quantification per se, but the lack of
information to enable weighting the votes. The journals (well, at least some
of them) lend a certain weight to their peer-review, but this peer-review is
almost always anonymous. Reviewers may not even be proper 'peers' in some
cases. Stevan speculates that "Perhaps reviewer-names could accrue some
objective scientometric weight...". I would perhaps remove the 'perhaps'.
Maybe it has its own set of problems, but disclosing the peers' identity may
be a great help in assessing the weight or significance of the review.
Besides, it may disclose possible conflicts of interest. All BMC's medical
journals have open peer review which works most satisfactorily. All journals
also have a comments section enabling a public, open discussion.

The point of Faculty of 1000 is that an open, secondary review of published
literature by acknowledged leaders in the field, signed by the reviewer, is
seen by increasing numbers of researchers (measured by the fast-growing
usage figures of F1000) as a very meaningful addition to quantitative data
and a way to sort and rank articles in order of importance. Of course one
can subsequently quantify such qualitative information. But what a known and
acknowledged authority thinks of an article is, to many, more interesting than
what anonymous peer-reviewers think. Any research assessment exercise should
seriously look at resources such as those offered by Faculty of 1000.



Re: UK Research Assessment Exercise (RAE) review

2002-11-26 Thread David Goodman
Jan, do you have any data demonstrating the accuracy of the evaluations in 
faculty of 1000?

Dr. David Goodman
Princeton University Library
and
Palmer School of Library & Information Science, Long Island University
dgood...@princeton.edu

- Original Message -
From: Jan Velterop 
Date: Tuesday, November 26, 2002 1:33 pm
Subject: Re: UK Research Assessment Exercise (RAE) review

> As Einstein said, "Not everything that can be counted, counts; and not
> everything that counts, can be counted."
>
> Scientometrics and other metrics are about counting what can be counted.
> No doubt the actions of citing, using, browsing, teaching, et cetera,
> are real ones that can be counted and thus are 'objective'. So 'quantity'
> is dealt with. What about 'quality'? Quality is relative, and based on
> judgement. The (micro-)judgements that lead to citing, browsing, awarding
> Nobel prizes (OK, not so micro), et cetera, are utterly subjective,
> so what we count is 'votes'. Do more votes mean a higher 'quality'
> than fewer votes? Does it matter who does the voting?
>
> I think it does, at least in these matters, and therefore a review process
> is needed that ranks things like originality, fundamental new insights,
> and yes, contributions to wider dissemination and understanding as well,
> in order to base important decisions on more than just quasi-objective
> measurements.
>
> Fortunately, in biology such secondary review is beginning to take shape:
> Faculty of 1000 (www.facultyof1000.com). It often shows that the subjective
> importance of articles is often unconnected, or only very loosely connected,
> to established scientometrics. It constantly brings up 'hidden jewels',
> articles in pretty obscure journals that are nonetheless highly interesting
> or significant.
>
> I am sure that automated, more inclusive, counting of votes made possible by
> open and OAI-compliant online journals and repositories will help the
> visibility of those currently outside the ISI Impact Factory universe, such
> as the journals from Bhutan. But it can't replace judgement.
>
> Jan Velterop
>
> > -Original Message-
> > From: Stevan Harnad [har...@ecs.soton.ac.uk]
> > Sent: 26 November 2002 15:16
> > To: american-scientist-open-access-fo...@listserver.sigmaxi.org
> >
> > For the sake of communication and moving ahead, I would like to clarify
> > two points of definition (and methodology, and logic) about the terms
> > "research impact" and "scientometric measures":
> >
> > "Research impact" means the measurable effects of research, including
> > everything in the following range of measurable effects:
> >
> > (1) browsed
> > (2) read
> > (3) taught
> > (4) cited
> > (5) co-cited by authoritative sources
> > (6) used in other research
> > (7) applied in practical applications
> > (8) awarded the Nobel Prize
> >
> > All of these (and probably more) are objectively measurable indices of
> > research impact. Research impact is not, and never has been just (4),
> > i.e., not just citation counts, whether average journal citation ratios
> > (the ISI "journal impact factor") or individual paper total or annual
> > citation counts, or individual author total or average or annual
> > citation counts (though citations are certainly important, in this
> > family of impact measures).
> >
> > So when I speak of the multiple regression equation measuring research
> > impact I mean all of the above (at the very least).
> >
> > "Scientometric measures" are the above measures. Scientometric analyses
> > also include time-series analyses, looking for time-based patterns in
> > the individual curves and the interrelations among measures like the
> > above ones -- and much more, to be discovered and designed as the
> > scientometric database consisting of the full text papers, their
> > reference list and their raw data become available for analysis online.
>


Re: UK Research Assessment Exercise (RAE) review

2002-11-26 Thread Stevan Harnad
On Tue, 26 Nov 2002, Jan Velterop wrote:

> Scientometrics and other metrics are about counting what can be
> counted... So 'quantity' is dealt with. What about 'quality'?
> Quality is relative, and based on judgement...  utterly subjective,
> so what we count is 'votes'. Do more votes mean a higher 'quality'
> than fewer votes? Does it matter who does the voting?

All good scientometric questions, it seems to me (even the one about
how to identify and weight voting "authorities"). How to answer, if
not scientometrically? (Or do you think it should just be a matter of
individual opinion or taste?)

> I think it [matters who does the voting], at least in these matters,
> and therefore a review process is needed that ranks things like
> originality, fundamental new insights, and yes, contributions to
> wider dissemination and understanding as well, in order to base
> important decisions on more than just quasi-objective measurements.

Is this not among the things peer review is supposed to do? These are
almost literally the questions that appear in many referee evaluation
forms. Are you proposing a second round of review, a few years after
a paper appears? By all means, if you have the time and resources. And
certainly the RAE should include such secondary review data in its
scientometric equation too, if they are available in time.

But in what way is any of this an alternative to the quantitative,
scientometric assessment of research quality and impact? The only ones who
are not doing it scientometrically are the reviewers themselves (whether
in the primary peer review or in the second one Jan recommends). But their
judgments are just votes (i.e., scientometric data) too, just as the
journal-names are, in 1st-round peer review. Perhaps reviewer-names
could accrue some objective scientometric weight too, for the second
round.

But this is all speculation about what the future scientometric analyses
will yield, once we have these (open access) data available to do all
these analyses on.

For the RAE, unless Jan is recommending that the assessors do a 3rd round
of direct review of all their submission themselves, scientometrics
(yes, counting!) seems to be the only way they can do their ranking
(which is likewise counting).

> Fortunately, in biology such secondary review is beginning to take shape:
> Faculty of 1000 (www.facultyof1000.com). It shows that the subjective
> importance of articles is often unconnected, or only very loosely connected,
> to established scientometrics. It constantly brings up 'hidden jewels',
> articles in pretty obscure journals that are nonetheless highly interesting
> or significant.

I would certainly want to use Faculty of 1000 ratings and citations in
my multiple regression equation for impact, perhaps even giving them a
special weight (if analysis shows they earn it!). But what is the point?
This is just a further source of scientometric data!

> I am sure that automated, more inclusive, counting of votes made possible by
> open and OAI-compliant online journals and repositories will help the
> visibility of those currently outside the ISI Impact Factory universe, such
> as the journals from Bhutan. But it can't replace judgement.

No, it can't replace judgment. Like all other analyses, it can merely
quantify the outcomes of judgments, and weigh them, against one another
and against other measures. What else is there? Even the decision to
browse, read, and cite is just a set of human judgments we are counting
and trying to use to predict with. Predict what? Later human performance,
and findings, and judgments, i.e., research impact.

I don't think that in reminding us that all of this is based on human
judgment (and, of course, on empirical reality, in the case of science),
Jan is not giving us an alternative to scientometric quantification. He
is just reminding us of what it is that we are quantifying!

Stevan Harnad


Re: UK Research Assessment Exercise (RAE) review

2002-11-26 Thread Jan Velterop
As Einstein said, "Not everything that can be counted, counts; and not
everything that counts, can be counted."

Scientometrics and other metrics are about counting what can be counted.
No doubt the actions of citing, using, browsing, teaching, et cetera,
are real ones that can be counted and thus are 'objective'. So 'quantity'
is dealt with. What about 'quality'? Quality is relative, and based on
judgement. The (micro-)judgements that lead to citing, browsing, awarding
Nobel prizes (OK, not so micro), et cetera, are utterly subjective,
so what we count is 'votes'. Do more votes mean a higher 'quality'
than fewer votes? Does it matter who does the voting?

I think it does, at least in these matters, and therefore a review process
is needed that ranks things like originality, fundamental new insights,
and yes, contributions to wider dissemination and understanding as well,
in order to base important decisions on more than just quasi-objective
measurements.

Fortunately, in biology such secondary review is beginning to take shape:
Faculty of 1000 (www.facultyof1000.com). It often shows that the subjective
importance of articles is often unconnected, or only very loosely connected,
to established scientometrics. It constantly brings up 'hidden jewels',
articles in pretty obscure journals that are nonetheless highly interesting
or significant.

I am sure that automated, more inclusive, counting of votes made possible by
open and OAI-compliant online journals and repositories will help the
visibility of those currently outside the ISI Impact Factory universe, such
as the journals from Bhutan. But it can't replace judgement.

Jan Velterop

> -Original Message-
> From: Stevan Harnad [mailto:har...@ecs.soton.ac.uk]
> Sent: 26 November 2002 15:16
> To: american-scientist-open-access-fo...@listserver.sigmaxi.org
>
> For the sake of communication and moving ahead, I would like to clarify
> two points of definition (and methodology, and logic) about the terms
> "research impact" and "scientometric measures":
>
> "Research impact" means the measurable effects of research, including
> everything in the following range of measurable effects:
>
> (1) browsed
> (2) read
> (3) taught
> (4) cited
> (5) co-cited by authoritative sources
> (6) used in other research
> (7) applied in practical applications
> (8) awarded the Nobel Prize
>
> All of these (and probably more) are objectively measurable indices of
> research impact. Research impact is not, and never has been just (4),
> i.e., not just citation counts, whether average journal citation ratios
> (the ISI "journal impact factor") or individual paper total or annual
> citation counts, or individual author total or average or annual
> citation counts (though citations are certainly important, in this
> family of impact measures).
>
> So when I speak of the multiple regression equation measuring research
> impact I mean all of the above (at the very least).
>
> "Scientometric measures" are the above measures. Scientometric analyses
> also include time-series analyses, looking for time-based patterns in
> the individual curves and the interrelations among measures like the
> above ones -- and much more, to be discovered and designed as the
> scientometric database consisting of the full text papers, their
> reference list and their raw data become available for
> analysis online.


Re: UK Research Assessment Exercise (RAE) review

2002-11-26 Thread Tim Brody
Chris Zielinski asks:

> how many articles have been read but not cited?

The following estimates are from Citebase's database
(http://citebase.eprints.org/) -

(but duly noting caveats on data-quality, scope, coverage, noisiness,
potential for abuse etc, http://citebase.eprints.org/help/coverage.php
http://citebase.eprints.org/help/#impactwarning )

Looking at the 91,017 arXiv.org articles that have a "journal reference"
(the author has said where the article was/will be published)

17628 (19.4%) have not been cited but have at least once been downloaded
from uk.arXiv.org

(of the remainder 73265 have both been cited and downloaded, 98 have been
cited but not downloaded, and 26 were neither cited nor downloaded)

I believe this is because physicists read all the new additions to the
arXiv.org, as it forms a convenient "inbox" of research. However, over time
downloads are more discerning between low impact and high impact (pink line
is the top quartile of papers by citation impact):
http://citebase.eprints.org/analysis/hitslatencybyquartile.png

Correlation r between "hits" and citation impact for the top quartile is
0.3359 with an n of 25,532.
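
(By way of illustration only: given a per-article table of download and
citation counts of the kind Citebase holds, such a breakdown and the
hits/citations correlation could be computed along the following lines --
the figures and variable names below are invented, not real Citebase data:)

    # Purely illustrative sketch: "records" stands in for a per-article
    # table of (downloads, citations) pairs; the numbers are invented.
    import numpy as np

    records = [(12, 0), (0, 3), (55, 9), (7, 1), (0, 0), (140, 31)]
    hits  = np.array([r[0] for r in records])
    cites = np.array([r[1] for r in records])

    downloaded_not_cited = int(np.sum((hits > 0) & (cites == 0)))
    cited_not_downloaded = int(np.sum((hits == 0) & (cites > 0)))
    both    = int(np.sum((hits > 0) & (cites > 0)))
    neither = int(np.sum((hits == 0) & (cites == 0)))
    print(downloaded_not_cited, cited_not_downloaded, both, neither)

    # Pearson correlation between downloads ("hits") and citations:
    print(np.corrcoef(hits, cites)[0, 1])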

Citations and downloads are mutually reinforcing. If an author has read an
article they are more likely to cite it, conversely if an author sees a
citation they are likely to read the article that has been cited.

All the best,
Tim.

- Original Message -
From: 
To: 
Sent: Tuesday, November 26, 2002 7:39 AM

> In fact, Stevan mentions "other new online scientometric measures such as
> online usage ["hits"], time-series analyses, co-citation analyses and
> full-text-based semantic co-analyses, all placed in a weighted multiple
> regression equation instead of just a univariate correlation". Indeed,
> impact factors are very crude quasi-scientometric and subjective measures
> compared even with such simple information (easy to obtain for online media)
> as counts of usage - for example, how many articles have been read but not
> cited?
>
> All these are indeed worth pursuing and, I would have thought, right on the
> agenda of the OA movement.
>
> Chris Zielinski
> Director, Information Waystations and Staging Posts Network
> Currently External Relations Officer, HTP/WHO
> Avenue Appia, CH-1211, Geneva, Switzerland
> Tel: 004122-7914435 Mobile: 0044797-10-45354
> e-mail: zielins...@who.int and informa...@supanet.com
> web site: http://www.iwsp.org


Re: UK Research Assessment Exercise (RAE) review

2002-11-26 Thread Stevan Harnad
For the sake of communication and moving ahead, I would like to clarify
two points of definition (and methodology, and logic) about the terms
"research impact" and "scientometric measures":

"Research impact" means the measurable effects of research, including
everything in the following range of measurable effects:

(1) browsed
(2) read
(3) taught
(4) cited
(5) co-cited by authoritative sources
(6) used in other research
(7) applied in practical applications
(8) awarded the Nobel Prize

All of these (and probably more) are objectively measurable indices of
research impact. Research impact is not, and never has been just (4),
i.e., not just citation counts, whether average journal citation ratios
(the ISI "journal impact factor") or individual paper total or annual
citation counts, or individual author total or average or annual
citation counts (though citations are certainly important, in this
family of impact measures).

So when I speak of the multiple regression equation measuring research
impact I mean all of the above (at the very least).

"Scientometric measures" are the above measures. Scientometric analyses
also include time-series analyses, looking for time-based patterns in
the individual curves and the interrelations among measures like the
above ones -- and much more, to be discovered and designed as the
scientometric database consisting of the full text papers, their
reference list and their raw data become available for analysis online.
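
(Purely by way of illustration -- with invented numbers and hypothetical
predictor names, not a real RAE analysis -- the kind of weighted multiple
regression equation meant here would combine several such measures per
paper and fit the weights empirically, e.g.:)

    # Illustrative sketch only: invented data, hypothetical predictors.
    import numpy as np

    # One row per paper: downloads, citations, co-citations with
    # authoritative sources, journal citation ratio ("impact factor").
    X = np.array([[120.0,  4, 1, 2.3],
                  [ 45.0,  0, 0, 0.9],
                  [300.0, 12, 5, 4.1],
                  [ 80.0,  2, 0, 1.5],
                  [210.0,  7, 3, 3.2]])
    # The quantity to be predicted, e.g. some later measure of impact.
    y = np.array([15.0, 1.0, 40.0, 5.0, 22.0])

    # Add an intercept column and fit the regression weights by least squares.
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

    # predicted_impact = beta[0] + beta[1]*downloads + beta[2]*citations + ...
    print(beta)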

One of the principal motivations for the suggested coupling of the
research access agenda (open access) with the research impact assessment
agenda (e.g., the UK research Assessment Exercise [RAE]) is that there
is a symbiosis and synergy between the two: Maximizing research access
maximizes potential research impact. Scientometric measures of research
impact can monitor and quantify and make explicit and visible the
causal connection between access and impact at the same time that they
assess it, thereby also making explicit the further all-important
connection between research impact and research funding. It is a
synergy, because the open-access full-text database also facilitates new
developments in scientometric analysis, making the research assessment
more accurate and predictive.

http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2325.html

Now on to comments on Chris Zielinski's posting:

On Tue, 26 Nov 2002 Chris Zielinski  wrote:

> This is being offered despite Stevan's being "braced for the predictable
> next round of attacks on scientometric impact analysis: 'Citation impact is
> crude, misleading, circular, biassed: we must assess research a better
> way!'". It remains curious why he acquiesces passively to a poor, biassed
> system based on impact analysis rather than searching for "alternative,
> nonscientometric ways of assessing and ranking large bodies of research
> output" - and indeed seeks to dissuade those who might be doing that.

Acquiescing passively to what? There are no alternatives to scientometrics
(as Chris goes on to note implicitly below), just richer, less biassed
scientometrics, which is precisely what I am recommending! Chris writes
as if I were defending the univariate ISI journal-impact factor when I
am arguing for replacing it by a far richer multiple regression equation!

> Those of us working with developing country journals are well aware of the
> inherent biases and vicious circles operating in the world of impact
> factors.

That is, again, the ISI journal-impact factor (a subset of measure 4 in
my [partial] list of 8!).

But if there is indeed a bias against developing country journals
on measure 4, what better remedy for it than to remove the access
barriers on the current visibility, usage and impact of those journals
by self-archiving their contents in OAI-compliant Eprint Archives,
thereby ensuring that they will be openly accessible to every would-be
user with access to the Web!
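
(As a concrete sketch of what "OAI-compliant" buys here: any OAI-PMH archive
exposes its metadata to a single HTTP harvesting request, so its contents
become visible to any search or impact-measuring service that cares to look.
The base URL below is a placeholder, not a real archive:)

    # Minimal OAI-PMH (v2.0) harvest; BASE_URL is a hypothetical archive.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "http://eprints.example.ac.uk/cgi/oai2"
    url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"

    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)

    # Print the Dublin Core title of each harvested record.
    for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
        print(title.text)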

> The circularity Stevan refers to is "You cannot cite what you
> haven't read, you tend not to read what is not stocked in your library (or
> readily available online), and your library tends not to stock what isn't
> cited".

Indeed. So forget about relying on your library (and the access tolls it
may or may not be able to afford) and make your research openly
accessible for free for all by self-archiving it. And if you are in a
developing country and you need it, help in doing this is available
from the Soros Foundation's Budapest Open Access Initiative:
http://www.soros.org/openaccess/

The ISI journal-impact factor's vicious circularity is part of the
toll-access circle, which is precisely what open-access and
self-archiving are designed to free us from!

> This certainly applies to developing country journals, and there is
> literature to support this (which - paradoxically - I don't have to hand to
> cite), but it also applies everywhere to new journals, local journals and
> many open access products.

All true, and all relevant

Re: UK Research Assessment Exercise (RAE) review

2002-11-26 Thread Jan Velterop
Stevan,

I guess you agree with me (although that is perhaps not that obvious from
parts of your response). My concern remains that there is a widely held
*perception* among authors that impact factors matter enormously. That
perception hampers progress towards open access. Just as the widespread
*perception* that self-archiving is basically self-publishing, or
otherwise dangerously close to breaking copyright law, is hampering
progress with institutional repositories. You and I know that these
perceptions are not reality, but we need to address doubtful researchers
with persuasive and convincing arguments that truly take away those
faulty perceptions.

I am calling upon anybody reading this list who understands and believes
in the benefits of open access to help do just that (although I'm
sure the majority already do so). It is not a desperate struggle, but it
is still an uphill one, and the more advocates of open access speak up, the
easier it's bound to get.

Jan Velterop


Re: UK Research Assessment Exercise (RAE) review

2002-11-26 Thread informania
This is being offered despite Stevan's being "braced for the predictable
next round of attacks on scientometric impact analysis: 'Citation impact is
crude, misleading, circular, biassed: we must assess research a better
way!'". It remains curious why he acquiesces passively to a poor, biassed
system based on impact analysis rather than searching for "alternative,
nonscientometric ways of assessing and ranking large bodies of research
output" - and indeed seeks to dissuade those who might be doing that.

Those of us working with developing country journals are well aware of the
inherent biases and vicious circles operating in the world of impact
factors. The circularity Stevan refers to is "You cannot cite what you
haven't read, you tend not to read what is not stocked in your library (or
readily available online), and your library tends not to stock what isn't
cited". This certainly applies to developing country journals, and there is
literature to support this (which - paradoxically - I don't have to hand to
cite), but it also applies everywhere to new journals, local journals and
many open access products.

Surely those supporting open access should be against impact-factor driven
ranking systems and be searching actively for less-biassed replacements?
These need not be "nonscientometric", incidentally - no need for the
suggestion of witchcraft. [Impact factors themselves are more than a tad
sociometric - measurements of the behavioural patterns of researchers -
rather than entirely objective. Is the reason someone cited the British
Medical Journal rather than the Bhutan Medical Journal (assuming she had
access to both) because the first BMJ was better, or more prestigious, than
the second BMJ?]

In fact, Stevan mentions "other new online scientometric measures such as
online usage ["hits"], time-series analyses, co-citation analyses and
full-text-based semantic co-analyses, all placed in a weighted multiple
regression equation instead of just a univariate correlation". Indeed,
impact factors are very crude quasi-scientometric and subjective measures
compared even with such simple information (easy to obtain for online media)
as counts of usage - for example, how many articles have been read but not
cited?

All these are indeed worth pursuing and, I would have thought, right on the
agenda of the OA movement.

Chris

Chris Zielinski
Director, Information Waystations and Staging Posts Network
Currently External Relations Officer, HTP/WHO
Avenue Appia, CH-1211, Geneva, Switzerland
Tel: 004122-7914435 Mobile: 0044797-10-45354
e-mail: zielins...@who.int and informa...@supanet.com
web site: http://www.iwsp.org


Re: UK Research Assessment Exercise (RAE) review

2002-11-25 Thread Stevan Harnad
On Mon, 25 Nov 2002, Jan Velterop wrote:

> [Bahram of HEFCE's] concern is that the journal, or more
> particularly, the journal's perceived acceptance policy
> upon peer-review, is used as a proxy for quality.

The concern is admirable. Now we must wait to hear what the
alternative candidate for quality-assessment is, against which
journal peer review and journal quality levels are to be compared
as quality-indicators or quality-proxies.

(I can only repeat: it is surely not the re-refereeing of all RAE
submissions by the RAE panels that Braham has in mind. So it would be
interesting to know what he does have in mind! That the scientometric
data and analyses can and should be strengthened -- e.g., by paper-
and author-based citation counts, usage statistics (hits), time-series
analyses, co-citation analyses, and even semantic analyses of the
full-texts -- is uncontested, indeed, that is what I was recommending
that the self-archived full-text corpus would make possible. But what
are the *other* (nonscientometric) ways to assess research quality for
the RAE?)

(By the way, as I can already sense it coming: counting grant income,
and numbers of graduate students, and plotting their respective
citation impacts, etc. is all just more scientometrics, and is exactly what
would go into the RAE-standardized online CVs I recommended, as well
as the multiple regression equation.)

> This acceptance policy can be as strict in on-line journals as in print
> ones, so there would be no reason for [RAE] to equate strict policies with
> those employed by print journals.

Of course not. We are in complete agreement about that. The only handicap
a journal may have is not yet having had the chance to demonstrate
and establish its quality level through its track record. But that is
a liability of all new journal start-ups, and again has nothing to do
with medium (on-paper-only, hybrid, or online-only) nor with economic
model (toll-access or open-access). It is purely a question of quality
(and time).

>sh> But I would be more skeptical about the implication that it is the RAE
>sh> assessors who review the quality of the submissions, rather than the
>sh> peer-reviewers of the journals in which they were published. Some
>sh> spot-checking there might occasionally be, but the lion's share of the
>sh> assessment burden is borne by the journals' quality levels and impact
>sh> factors, not the direct review of the papers by the RAE panel!
>sh> (So the *quality* of the journal still matters: it is the *medium* of
>sh> the journal -- on-paper or online -- that is rightly discounted by
>sh> the RAE as irrelevant.)
>
> The quality of journals matters, but quality is not the same as impact
> factor.

Agreed. But no scientometric measure is the same as quality: Such
measures are correlates or predictors of quality.

> Possibly, journals with the highest impact factors can be seen to
> be -- in general -- of higher quality than those with low impact factors,

Possibly indeed. But we are agreed that journal-impact (i.e., average
citation ratio) is only one of many (scientometric) ways to estimate
quality. Some other ways were listed above.

> but, as one often sees, rankings on the basis of differences that
> run into the single digit percentage points (e.g. IF 2.35 vs IF 2.27)
> are utterly meaningless.

Agreed. Which is another reason why a univariate measure such as journal
citation count needs to be just one among a whole battery of impact
predictors, in a multiple regression equation.

> It is a known phenomenon that impact factors are highly vulnerable
> to manipulation

True, but once they are just one in a battery of predictors, manipulation
will be more detectable; and a whole battery of quasi-independent
predictors is far harder to manipulate. (And online manipulation is also
more readily detectable.)

> and that in just about any given
> journal a minority of articles is commonly responsible for the bulk of
> the citations on which the impact factors are based.

This would immediately become apparent if the regression equation
included both the journal impact factor and the paper's (and author's)
specific citation counts. The (high or low) paper-specific count would
counterbalance the journal-based count, and they could be weighted as
the RAE assessors saw fit (from further scientometric analyses).

> An American medical
> journal will almost always have a very much higher impact factor than its
> European qualitative equivalent, simply because in the medical areas the
> culture 'dictates' that American authors publish in the main in American
> journals and do not cite their European colleagues, whereas European
> authors publish as much in American as in European journals and usually
> cite all relevant literature, be it American or European.

Where that is the case, it too can be scientometrically adjusted for.

> Quality in this example is not easily measurable in terms of impact
> factors.

Not univariate ones.

>sh> (Henc

Re: UK Research Assessment Exercise (RAE) review

2002-11-25 Thread Jan Velterop
>bb> "Where an article is published is an irrelevant issue.  A top
>bb> quality piece of work, in a freely available medium, should get
>bb> top marks. The issue is really that many assessment panels use
>bb> the medium of publication, and in particular the difficulty of
>bb> getting accepted after peer review, as a proxy for quality. But
>bb> that absolutely does not mean that an academic who chooses to
>bb> publish his work in an unorthodox medium should be marked down.
>bb> At worst it should mean that the panel will have to take rather
>bb> more care in assessing it."

On Monday, November 25, 2002, at 03:04 PM, Stevan Harnad wrote:

> A rather complicated statement, but meaning, I guess, that the RAE is
> assessing quality, and does not give greater weight to paper journal
> publications than to online journal publications. 

Funny how the same words can be read in different ways. When Bahram
says 'medium', I read it as 'journal', 'channel of communication': a
'carrier-neutral' notion, so to speak, not making the distinction at all
between 'print' and 'electronic'. His concern is that the journal, or more
particularly, the journal's perceived acceptance policy upon peer-review,
is used as a proxy for quality. This acceptance policy can be as strict
in on-line journals as in print ones, so there would be no reason for
him to equate strict policies with those employed by print journals.

> But I would be more skeptical about the implication that it is the RAE
> assessors who review the quality of the submissions, rather than the
> peer-reviewers of the journals in which they were published. Some
> spot-checking there might occasionally be, but the lion's share of the
> assessment burden is borne by the journals' quality levels and impact
> factors, not the direct review of the papers by the RAE panel!
> (So the *quality* of the journal still matters: it is the *medium* of
> the journal -- on-paper or online -- that is rightly discounted by
> the RAE as irrelevant.)

The quality of journals matters, but quality is not the same as impact
factor. Possibly, journals with the highest impact factors can be seen to
be -- in general -- of higher quality than those with low impact factors,
but, as one often sees, rankings on the basis of differences that
run into the single digit percentage points (e.g. IF 2.35 vs IF 2.27)
are utterly meaningless. It is a known phenomenon that impact factors
are highly vulnerable to manipulation and that in just about any given
journal a minority of articles is commonly responsible for the bulk of
the citations on which the impact factors are based. An American medical
journal will almost always have a very much higher impact factor than its
European qualitative equivalent, simply because in the medical areas the
culture 'dictates' that American authors publish in the main in American
journals and do not cite their European colleagues, whereas European
authors publish as much in American as in European journals and usually
cite all relevant literature, be it American or European. Quality in
this example is not easily measurable in terms of impact factors.
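
(For reference, the measure in question is the conventional two-year journal
impact factor; the numbers below are invented purely to show the arithmetic
and why such small differences carry little meaning:)

    \mathrm{IF}_{2002}(J) =
      \frac{\text{citations received in 2002 by items of } J \text{ published in 2000--2001}}
           {\text{citable items of } J \text{ published in 2000--2001}},
    \qquad \text{e.g.}\ \frac{470}{200} = 2.35 \ \text{vs}\ \frac{454}{200} = 2.27 .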

> (Hence the suggestion that a "top-quality" work risks nothing in being
> submitted to an "unorthodox medium" -- apart from reiterating that
> the medium of the peer-reviewed journal, whether on-line or on-paper,
> is immaterial -- should certainly not be interpreted by authors as RAE
> license to bypass peer-review, and trust that the RAE panel will review
> all (or most, or even more than the tiniest proportion of submissions
> for spot-checking) directly! Not only would that be prohibitively expensive
> and time-consuming, but it would be an utter waste, given that peer
> review has already performed that chore once already!)

However, if 'unorthodox medium' means 'new journal with an unorthodox
publishing model' (after all, since most journals have an on-line edition
nowadays, being electronic by itself would hardly have been described by
Bahram as 'unorthodox'), then authors of top-quality work are perceived to
take a risk by publishing in them, for these unorthodox new journals will
not have an impact factor yet. This is not to say that articles in the new
open access journals are not cited as often as in conventional journals
-- on the contrary: we have strong indications at BioMed Central that they
are actually cited a great deal more often than similar articles published
conventionally. The system of impact factors, however, is stacked against
new journals and has a considerable bias toward entrenched journals
and their toll-gate models. Fortunately, this is only an irritating
but temporary problem, as the rates at which articles published in BMC
open access journals are cited will ensure high impact factors once
the Impact Factory deems the time ripe to calculate them.

>jv> HEFCE clearly recognises the flaws of the RAE methodology used
>jv> hitherto, which is the first step towards a more satisfactory
>jv> assessment system.

Re: UK Research Assessment Exercise (RAE) review

2002-11-25 Thread David Goodman
Yes, it clearly says that the i.f. of the journal should not be counted.
But it also says that many panels do count it anyway.

This highlights the difficulty in using impact factors correctly.

As a senior administrator at a university I know of put it (a few years
back, not an exact quote) "of course promotion committees read the
papers and make their own judgments about the quality rather than just see
where it was published. However, in doubtful cases, they sometimes
have been known (smile)..."
I do not know the actual operations of the people on
the UK panels, but human experience suggests that those
researchers who rely on the formally stated criteria do so at their own
risk. (It should be possible to prospectively and retrospectively measure
this, also).


There are three levels of judging quality

a/ judging the quality of a particular piece of work: this is the job of
the referees, but a specialist in the field should be able to confirm the
accuracy of the referee's judgment.

b/ judging the quality of a particular researcher's work. This requires
judging the cumulative effect and trend of the body of papers, and an
extrapolation to what is likely to be produced in the future. I am not
aware of any studies of this, but I have not looked for them.
Anecdotally, we all know of people who have been rejected for tenure at a
particular university who have gone on to do brilliant work elsewhere.
This represents the fallibility (and biases) of human judgement. It should
be possible to measure the frequency of this with the control being those
who do get tenure at the same place. It should also be possible to test
possible objective measures to see what they would have predicted.

One complication is that an important factor in the true quality of a
researcher is also the success of that person's students and postdocs.
This is measurable too, but it requires taking
account of a longer time scale, retrospectively or prospectively.

c/ judging the quality of a particular department's work. This is just the
sum of its researchers. However, it is even more affected by the
department's students' later careers.
(One can similarly judge a university or a nation, etc.)

-

A different matter is judging the value of a field of work: i.e., is it
worthwhile giving money to this specialty, or to a researcher, productive
or not, working in this specialty. Short range impact factors have no role
here, unless one is concerned only about increasing short-term
productivity. This is the part where the history of science shows that
people do particularly poorly. It is not merely theoretical--the award of
grants and so on is often based on a judgment of this. It relies on
politics and prejudice more than on science, and I know of no relevant
measurement--except the still short-term possibility Stevan mentioned, of
looking for fields that are just beginning to affect other fields.

As I understand it, to the extent that people use i.f. in this, they are
making the implicit judgement that fields such as classical biology are
not worthy of funding, and consequently that departments devoted to this
are not worthy of funding. I consider this a political, not scientific, use,
and totally invalid.

On Mon, 25 Nov 2002, Stevan Harnad wrote:

> On Mon, 25 Nov 2002, Jan Velterop wrote:
>
> >   "Where an article is published is an irrelevant issue.  A top
> >   quality piece of work, in a freely available medium, should get
> >   top marks. The issue is really that many assessment panels use
> >   the medium of publication, and in particular the difficulty of
> >   getting accepted after peer review, as a proxy for quality. But
> >   that absolutely does not mean that an academic who chooses to
> >   publish his work in an unorthodox medium should be marked down.
> >   At worst it should mean that the panel will have to take rather
> >   more care in assessing it."

Dr. David Goodman
Biological Sciences Bibliographer
Princeton University Library
dgood...@princeton.edu


Re: UK Research Assessment Exercise (RAE) review

2002-11-25 Thread Stevan Harnad
On Mon, 25 Nov 2002, Jan Velterop wrote:

>  A propos of the Research Assessment Exercise, the policy director
>  (Bahram Bekhradnia) of the Higher Education Funding Council, which
>  carries out the RAE, recently sent me this response to a question some
>  of our authors are asking and worrying about the possible significance
>  of a journal's Impact Factor in the context of the RAE:
>
>   "Where an article is published is an irrelevant issue.  A top
>   quality piece of work, in a freely available medium, should get
>   top marks. The issue is really that many assessment panels use
>   the medium of publication, and in particular the difficulty of
>   getting accepted after peer review, as a proxy for quality. But
>   that absolutely does not mean that an academic who chooses to
>   publish his work in an unorthodox medium should be marked down.
>   At worst it should mean that the panel will have to take rather
>   more care in assessing it."

A rather complicated statement, but meaning, I guess, that the RAE is
assessing quality, and does not give greater weight to paper journal
publications than to online journal publications. This is nothing new;
it has been its announced policy since at least 1995:

http://www.ecs.soton.ac.uk/~harnad/Hypermail/Theschat/0033.html

HEFCE Circular RAE96 1/94 para 25c states:

"In the light of the recommendations of the Joint Funding Councils'
Libraries Review Group Report (published in December 1993) refereed
journal articles published through electronic means will be treated
on the same basis as those appearing in printed journals."

This is the result of adopting the following recommendation in Librev
Chapter 7:

"289. To help promote the status and acceptability of electronic
journals, the Review Group also recommends that the funding councils
should make it clear that refereed articles published electronically
will be accepted in the next Research Assessment Exercise on the
same basis as those appearing in printed journals."

But I would be more skeptical about the implication that it is the RAE
assessors who review the quality of the submissions, rather than the
peer-reviewers of the journals in which they were published. Some
spot-checking there might occasionally be, but the lion's share of the
assessment burden is borne by the journals' quality levels and impact
factors, not the direct review of the papers by the RAE panel!
(So the *quality* of the journal still matters: it is the *medium* of
the journal -- on-paper or online -- that is rightly discounted by
the RAE as irrelevant.)

(Hence the suggestion that a "top-quality" work risks nothing in being
submitted to an "unorthodox medium" -- apart from reiterating that
the medium of the peer-reviewed journal, whether on-line or on-paper,
is immaterial -- should certainly not be interpreted by authors as RAE
license to bypass peer-review, and trust that the RAE panel will review
all (or most, or even more than the tiniest proportion of submissions for
spot-checking) directly! Not only would that be prohibitively expensive
and time-consuming, but it would be an utter waste, given that peer
review has already performed that chore once already!)

>  HEFCE clearly recognises the flaws of the RAE methodology used
>  hitherto, which is the first step towards a more satisfactory
>  assessment system. What is not clear to me is the question whether
>  your suggested reform will indeed be saving time and money. It seems to
>  me that just adding Impact Factors of articles is indeed the shortcut
>  (proxy for quality) that Bahram refers to, and that anything else will
>  take more effort. I don't pretend to have any contribution to make
>  to that discussion on efficiency of the assessment methodology, though.

I couldn't quite follow this. Right now, most of the variance in the RAE
rankings is predictable from the journal impact factors of the submitted
papers. That, in exchange for each university department's preparing
a monstrously large portfolio at great time and expense (including
photocopies of each paper!).

Since I seriously doubt that Bahram meant replacing impact ranking by
direct re-review of all the papers by RAE assessors, I am not quite
sure what you think he had in mind! (You say "just adding Impact Factors
of articles is indeed the shortcut" but adding them to what, how? If
those impact factors currently do most of the work, it is not clear
that they need to be *added* to the current wasteful portfolio! Rather,
they, or, better still, even richer and more accurate scientometric
measures, need to be derived directly. Directly from what?)

One possibility would be for the RAE to directly data-mine, say ISI's
Web of Science: http://wos.mimas.ac.uk/. For that, the UK would need
a license to trawl, but that's no problem (we already have one). One
problem might be that ISI's coverage is incomplete -- only about 7500 of
the pl

Re: UK Research Assessment Exercise (RAE) review

2002-11-25 Thread Jan Velterop
Stevan,

Thanks for your recap and apologies for not always having the time to read
everything you contribute to the discussions in detail.

A propos of the Research Assessment Exercise, the policy director (Bahram
Bekhradnia) of the Higher Education Funding Council, which carries out the
RAE, recently sent me this response to a question some of our authors are
asking and worrying about the possible significance of a journal's Impact
Factor in the context of the RAE:

"Where an article is published is an irrelevant issue.  A top quality piece
of work, in a freely available medium, should get top marks. The issue is
really that many assessment panels use the medium of publication, and in
particular the difficulty of getting accepted after peer review, as a proxy
for quality. But that absolutely does not mean that an academic who chooses
to publish his work in an unorthodox medium should be marked down.  At worst
it should mean that the panel will have to take rather more care in
assessing it."

HEFCE clearly recognises the flaws of the RAE methodology used hitherto,
which is the first step towards a more satisfactory assessment system. What
is not clear to me is the question whether your suggested reform will indeed
be saving time and money. It seems to me that just adding Impact Factors of
articles is indeed the shortcut (proxy for quality) that Bahram refers to,
and that anything else will take more effort. I don't pretend to have any
contribution to make to that discussion on efficiency of the assessment
methodology, though.

Best,

Jan


Re: UK Research Assessment Exercise (RAE) review

2002-11-25 Thread Stevan Harnad
On Mon, 25 Nov 2002, Jan Velterop wrote:

>   If one assesses an institute's productivity by the papers from its
>   researchers, and one rates those papers with the help of journal
>   impact factors, is it not the case that one should expect the results
>   to be in line with the citation counts for those papers? Is it me
>   or is there a circular argument here?

You're quite right, Jan, and that was precisely the point of my
recommendation that the RAE should be transformed into continuous
online submission and assessment of online CVs linked to the
online full-texts of each researcher's peer-reviewed research
articles, self-archived in their university's Eprint Archive:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2373.html

Because most of the variance in the RAE rankings is determined by citation
impact already anyway!
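
(A minimal sketch of the kind of post-hoc check meant here, with invented
departmental figures rather than real RAE returns: rank-correlate each
department's citation impact with the grade it was awarded.)

    # Illustrative only: invented figures, not real RAE data.
    from scipy.stats import spearmanr

    # Mean citation impact of the papers submitted by each department...
    citation_impact = [1.2, 3.4, 0.8, 5.1, 2.6, 4.0]
    # ...and the RAE grade awarded to the same departments.
    rae_grade = [3, 5, 2, 5, 4, 5]

    rho, p = spearmanr(citation_impact, rae_grade)
    print(rho, p)  # a high rho means impact alone predicts most of the ranking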

Hence this simple, simplifying transformation would make the RAE cheaper,
faster, easier, far less time-wasting for both researchers and assessors,
and more accurate (by adding richer online measures of impact, e.g.,
direct paper/author impact instead of indirect journal impact, plus
many other new online scientometric measures such as online usage
["hits"], time-series analyses, co-citation analyses and full-text-based
semantic co-analyses, all placed in a weighted multiple regression
equation instead of just a univariate correlation).

Plus, as a bonus, this RAE change, in exchange for making it cheaper,
faster, easier, far less time-wasting for both researchers and assessors,
and more accurate, would also help hasten open access -- in the UK as
well as world-wide.

The sequence was:

(i) I conjectured that the RAE might as well go ahead and downsize and
streamline itself in this way, dropping all the needless extra baggage
of the on-paper returns, because the outcome is already determined mostly
by impact ranking anyway:

"(5) If someone did a statistical correlation on the numerical outcome
of the RAE, using the weighted impact factors of the publications of
each department and institution, they would be able to predict the
outcome ratings quite closely. (No one has done this exact statistic,
because the data are implicit rather than explicit in the returns,
but it could be done, and it would be a good idea to do it, just
to get a clear indication of where the RAE stands right now, before
the simple reforms I am recommending.)"

(ii) Then commentators started to respond, including Charles Oppenheim,
gently pointing out to me that I am under-informed, and there is no need
for me to speculate about this, because the post-hoc analyses HAVE been
done, and there is indeed a strong positive correlation between citation
impact and RAE outcome!

(iii) Peter Suber (and others) cited further confirmatory studies.

(iv) So there is nothing circular here. The point was not to
RECOMMEND using citation impact, by circularly demonstrating that
citation impact was being used already.

(v) The point was to downsize, streamline and at the same time strengthen
the RAE by making its (existing) dependence on impact ranking more direct
and explicit and efficient,

(vi) and at the same time enriching its battery of potential impact
measures scientometrically, increasing its predictive power

(vii) while saving time and money

(viii) and leading the planet toward the long overdue objective
of open access to all of its peer-reviewed research output.

(The only recompense I ask for all this ritual repetition and recasting
and clarification I have to keep doing at every juncture is that the day
should come, and soon!)

[I am braced for the predictable next round of attacks on scientometric
impact analysis: "Citation impact is crude, misleading, circular,
biassed: we must assess research a better way!" And ready to welcome
these critics (as I do the would-be reformers of peer review) to go
ahead and do research on alternative, nonscientometric ways of assessing
and ranking large bodies of research output, and to let us all know what
they are -- once they have found them, tested them and shown them to
predict at least as well as scientometric impact analysis. But in the
meanwhile, I will invite these critics (as I do the would-be reformers
of peer review) to allow these substantial optimizations of the existing
system to proceed apace, rather than holding them back for better (but
untested, indeed unknown) alternatives. For in arguing against these
optimizations of the existing system, they are not supporting a better
way: they are merely arguing for doing what we are doing already anyway,
in a much more wasteful way.]

Amen,

Stevan Harnad

NOTE: A complete archive of the ongoing discussion of providing open
access to the peer-reviewed research literature online is available at
the American Scientist September Forum (98 & 99 & 00 & 01 & 02):


http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html

Re: UK Research Assessment Exercise (RAE) review

2002-11-25 Thread Jan Velterop
If one assesses an institute's productivity by the papers from its
researchers, and one rates those papers with the help of journal impact
factors, is it not the case that one should expect the results to be in line
with the citation counts for those papers? Is it me or is there a circular
argument here?

Jan Velterop

-Original Message-
From: Peter Suber [mailto:pet...@earlham.edu]
Sent: 25 November 2002 01:59
To: american-scientist-open-access-fo...@listserver.sigmaxi.org
Subject: Re: UK Research Assessment Exercise (RAE) review

In the recent postings on RAE ratings and scientometrics, I don't believe
I've seen anyone cite this piece of research:

Andy Smith and Mike Eysenck, "The correlation between RAE ratings and
citation counts in psychology" (June 2002)
http://psyserver.pc.rhbnc.ac.uk/citations.pdf

The authors' summary:  We counted the citations received in one year (1998)
by each staff member in each of 38 university psychology departments in the
United Kingdom. We then averaged these counts across individuals within
each department and correlated the averages with the Research Assessment
Exercise (RAE) grades awarded to the same departments in 1996 and 2001. The
correlations were extremely high (up to +0.91). This suggests that whatever
the merits and demerits of the RAE process and citation counting as methods
of evaluating research quality, the two approaches measure broadly the same
thing. Since citation counting is both more cost-effective and more
transparent than the present system and gives similar results, there is a
prima facie case for incorporating citation counts into the process, either
alone or in conjunction with other measures. Some of the limitations of
citation counting are discussed and some methods for minimising these are
proposed. Many of the factors that dictate caution in judging individuals
by their citations tend to average out when whole departments are compared.

  Peter
--
Peter Suber, Professor of Philosophy
Earlham College, Richmond, Indiana, 47374
Email pet...@earlham.edu
Web http://www.earlham.edu/~peters

Editor, Free Online Scholarship Newsletter
http://www.earlham.edu/~peters/fos/
Editor, FOS News blog
http://www.earlham.edu/~peters/fos/fosblog.html


Re: UK Research Assessment Exercise (RAE) review

2002-11-25 Thread Peter Suber

In the recent postings on RAE ratings and scientometrics, I don't believe
I've seen anyone cite this piece of research:

Andy Smith and Mike Eysenck, "The correlation between RAE ratings and
citation counts in psychology" (June 2002)
http://psyserver.pc.rhbnc.ac.uk/citations.pdf

The authors' summary:  We counted the citations received in one year (1998)
by each staff member in each of 38 university psychology departments in the
United Kingdom. We then averaged these counts across individuals within
each department and correlated the averages with the Research Assessment
Exercise (RAE) grades awarded to the same departments in 1996 and 2001. The
correlations were extremely high (up to +0.91). This suggests that whatever
the merits and demerits of the RAE process and citation counting as methods
of evaluating research quality, the two approaches measure broadly the same
thing. Since citation counting is both more cost-effective and more
transparent than the present system and gives similar results, there is a
prima facie case for incorporating citation counts into the process, either
alone or in conjunction with other measures. Some of the limitations of
citation counting are discussed and some methods for minimising these are
proposed. Many of the factors that dictate caution in judging individuals
by their citations tend to average out when whole departments are compared.
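
For illustration only (these are made-up numbers, not the Smith & Eysenck
data), the kind of departmental correlation they report can be computed
like this:

    # Hypothetical departmental mean citation counts and RAE grades
    # (grade 5* coded numerically as 5.5). All values invented.
    import numpy as np

    mean_citations = np.array([2.1, 3.5, 5.0, 6.2, 8.4, 9.1, 11.3, 12.0])
    rae_grade = np.array([3.0, 3.0, 4.0, 4.0, 5.0, 5.0, 5.0, 5.5])

    r = np.corrcoef(mean_citations, rae_grade)[0, 1]
    print(f"Pearson r = {r:.2f}")  # broadly agreeing rankings give a high r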

 Peter
--
Peter Suber, Professor of Philosophy
Earlham College, Richmond, Indiana, 47374
Email pet...@earlham.edu
Web http://www.earlham.edu/~peters

Editor, Free Online Scholarship Newsletter
http://www.earlham.edu/~peters/fos/
Editor, FOS News blog
http://www.earlham.edu/~peters/fos/fosblog.html


Re: UK Research Assessment Exercise (RAE) review

2002-11-23 Thread Stevan Harnad
On Fri, 22 Nov 2002, David Goodman wrote:

> I do not think that the comparison of the eventual value of the
> different specialties of scientific research can be judged at the
> time the research is being done.

"Can only be predicted with XX% reliability" is the statistically sound
way of putting it. And both XX and the time-span will vary (with time,
and field).

Assessors for research funding don't ask for 100% predictive accuracy. They
just want something like "Research/Researcher A is more likely than B"
(when funds are finite).

> That requires historical knowledge as well as scientometrics.

Yes, but hindsight is not a predictor (unless it picks out a predictive
pattern or index for the next time).

> This does imply a certain humility about the ability to use current
> knowledge as a valid basis for long term science policy.

I don't know about long-term science policy. The RAE just wants some
objective help in disbursing support for the next few years.
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2373.html

> Your  second derivative technique, if the data are sufficiently
> accurate to support it, sounds like an exceedingly nice way of measuring the
> potential short-term rise of a scientific field (or department). I would
> be reluctant to extrapolate very far into the future with such methods.

Extrapolate no further than your time-series correlations suggest you
have a statistical basis for extrapolating.

> For example, as judged by apparent current productivity, and its apparent
> valuation by the scientific world in general, scientometrics does not
> show very well. You and I know better, of course. :)

The time-line for the betting on scientometrics is still very
short, the field being new and its database growing. Its day is fast
coming, though, and open-access (along with the scientometric analyzers
like http://citebase.eprints.org/ ) will help usher it in.

Stevan Harnad


Re: UK Research Assessment Exercise (RAE) review

2002-11-23 Thread David Goodman
Yes, I do not think that the comparison of the eventual value of the
different specialties of scientific research can be judged at the time
the research is being done. That requires historical knowledge as well
as scientometrics. This does imply a certain humility about the ability
to use current knowledge as a valid basis for long term science policy.
The history of science offers abundant examples from all periods.

Your second derivative technique, if the data are sufficiently accurate
to support it, sounds like an exceedingly nice way of measuring the
potential short-term rise of a scientific field (or department). I would
be reluctant to extrapolate very far into the future with such methods.

The quality of the work within a field as done by different scientists
is another matter, and is a proper area for contemporaneous measurement.
It permits the concentration of support on the apparently best work in
all special areas, without rejecting any sub-discipline's approach.

For example, as judged by apparent current productivity, and its apparent
valuation by the scientific world in general, scientometrics does not
show very well. You and I know better, of course. :)

On Thu, 21 Nov 2002, Stevan Harnad wrote:

> On Thu, 21 Nov 2002, David Goodman wrote:
>
> > I question whether the members of any scientific field are qualified for
> > judging quality in other scientific fields, except by the use of common
> > sense and of objective measures, such as scientometric ones.
>
> Agreed.
>
> > I think librarians and other information science specialists are at
> > least as qualified in both these aspects as others are.
>
> Agreed. (For judging other disciplines. I expect that setting up a
> national pan-disciplinary research assessment exercise probably draws on
> a number of different lines of expertise, some of it having to do with
> research methodology, some with statistics, some with research funding,
> perhaps some with history and sociology of science and scholarship.)
>
> > I further wonder whether the members of any scientific field are not in
> > practice disqualified for evaluating departments in their own field by
> > the inevitable effects of the old boy network.
>
> Maybe not disqualified, but should perhaps have their numbers
> counterbalanced by disinterested but knowledgeable parties.
>
> > Not that this should disprove the argument, but I will mention that the
> proposal to evaluate the total scientific output of a group, good or bad,
> > rather than just the best, will be eagerly supported by the publishers
> > of the second-rate journals in which the lesser work appears.
>
> Good point. But that's another reason why the quality-level of the journal
> should be entered into the regression equation too. Salami-slicing should
> have a scientometric signature too, which scientometric analysis should
> be able to detect and weight accordingly.
>
> > The particular improvement which is necessary is a way of measuring the
> > influence not on the next years' papers, but on the next generations'.
> > Thus I question the use of the current measurements for evaluating
> > immediate research productivity for evaluating the actual value of
> > the research.
>
> And your alternative contender is...?
>
> Once we have a full-text open-access database, with citation links,
> co-citation analyses, hit-rates, time-series data, even inverted co-text
> analyses, the predictive index could turn up as something as abstract
> as the 2nd derivative or the latency to peak of the citation or the hit
> growth curve. Unless you are suggesting that the only way to predict is
> to retrodict (in which case the research assessment exercise's outcome
> may come rather too late to reward the winning researcher...).
>
> Stevan Harnad
>

Dr. David Goodman
Biological Sciences Bibliographer
Princeton University Library
dgood...@princeton.edu


Re: UK Research Assessment Exercise (RAE) review

2002-11-21 Thread Stevan Harnad
On Thu, 21 Nov 2002, David Goodman wrote:

> I question whether the members of any scientific field are qualified for
> judging quality in other scientific fields, except by the use of common
> sense and of objective measures, such as scientometric ones.

Agreed.

> I think librarians and other information science specialists are at
> least as qualified in both these aspects as others are.

Agreed. (For judging other disciplines. I expect that setting up a
national pan-disciplinary research assessment exercise probably draws on
a number of different lines of expertise, some of it having to do with
research methodology, some with statistics, some with research funding,
perhaps some with history and sociology of science and scholarship.)

> I further wonder whether the members of any scientific field are not in
> practice disqualified for evaluating departments in their own field by
> the inevitable effects of the old boy network.

Maybe not disqualified, but should perhaps have their numbers
counterbalanced by disinterested but knowledgeable parties.

> Not that this should disprove the argument, but I will mention that the
> proposal to evaluate the total scientific output of a group, good or bad,
> rather than just the best, will be eagerly supported by the publishers
> of the second-rate journals in which the lesser work appears.

Good point. But that's another reason why the quality-level of the journal
should be entered into the regression equation too. Salami-slicing should
have a scientometric signature too, which scientometric analysis should
be able to detect and weight accordingly.

> The particular improvement which is necessary is a way of measuring the
> influence not on the next years' papers, but on the next generations'.
> Thus I question the use of the current measurements for evaluating
> immediate research productivity for evaluating the actual value of
> the research.

And your alternative contender is...?

Once we have a full-text open-access database, with citation links,
co-citation analyses, hit-rates, time-series data, even inverted co-text
analyses, the predictive index could turn up as something as abstract
as the 2nd derivative or the latency to peak of the citation or the hit
growth curve. Unless you are suggesting that the only way to predict is
to retrodict (in which case the research assessment exercise's outcome
may come rather too late to reward the winning researcher...).
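
As a rough sketch (hypothetical yearly citation counts, not real data),
two of those abstract indices -- the second derivative of the citation
growth curve and the latency to its peak -- could be computed like this:

    import numpy as np

    citations_per_year = np.array([1, 3, 8, 18, 30, 41, 45, 43, 38])  # made up

    growth = np.diff(citations_per_year)                   # year-on-year change
    acceleration = np.diff(citations_per_year, n=2)        # second derivative
    latency_to_peak = int(np.argmax(citations_per_year))   # years to citation peak

    print("growth:", growth)
    print("acceleration:", acceleration)
    print("latency to peak (years):", latency_to_peak)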

Stevan Harnad


Re: UK Research Assessment Exercise (RAE) review

2002-11-21 Thread David Goodman
Stevan asks for my candidate for the true impact analysis: I agree with
him that it should be scientometric, and, like him, I think the remedy
for the currently practiced bad scientometrics is better scientometrics.
The particular improvement which is necessary is a way of measuring the
influence not on the next years' papers, but on the next generations'.
Thus I question the use of the current measurements for evaluating
immediate research productivity for evaluating the actual value of the
research.

Stevan also suggests that librarians are not qualified for evaluating 
departments, just for evaluating journals. I question whether the members of 
any scientific field are qualified for judging quality in other scientific 
fields, except by the use of common sense and of objective measures, such as 
scientometric ones. I think librarians and other information science 
specialists are at least as qualified in both these aspects as others are.  I 
further wonder whether the members of any scientific field are not in practice 
disqualified for evaluating departments in their own field by the inevitable 
effects of the old boy network. The one field which I do not trust information 
scientists to evaluate is information science.

Not that this should disprove the argument, but I will mention that the 
proposal to evaluate the total scientific output of a group, good or bad, rather
than just the best, will be eagerly supported by the publishers of the 
second-rate journals in which the lesser work appears.

Dr. David Goodman
Princeton University Library
dgood...@princeton.edu



Re: UK Research Assessment Exercise (RAE) review

2002-11-21 Thread Leslie Carr

The latest paper (with many of the prior citations) appears to be
"Use of citation analysis to predict the outcome of the 2001 Research
Assessment Exercise for Unit of Assessment (UoA) 61: Library and
Information Management" available at
http://informationr.net/ir/6-2/paper103.html

Les Carr

---


From: Charles Oppenheim 
Organization: Loughborough University
Subject: Re: UK Research Assessment Exercise (RAE) review

There have been many studies over the years (primarily authored by one C.
Oppenheim, but also by others) demonstrating a statistically significant
correlation between citation counts by academics returned for the RAE
and their department's eventual RAE scores.  These studies cover hard
science, soft science and humanities;  not sure if any studies have been
done in engineering subjects though.

Professor Charles Oppenheim
Department of Information Science
Loughborough University
Loughborough
Leics LE11 3TU


Re: UK Research Assessment Exercise (RAE) review

2002-11-21 Thread Charles Oppenheim
There have been many studies over the years (primarily authored by one C.
Oppenheim, but also by others) demonstrating a statistically significant
correlation between citation counts by academics returned for the RAE and
their department's eventual RAE scores.  These studies cover hard science,
soft science and humanities;  not sure if any studies have been done in
engineering subjects though.

Charles

Professor Charles Oppenheim
Department of Information Science
Loughborough University
Loughborough
Leics LE11 3TU
01509-223065
(fax) 01509-223053


Re: UK Research Assessment Exercise (RAE) review

2002-11-20 Thread Stevan Harnad
On Wed, 20 Nov 2002, David Goodman wrote:

> I consider the impact factor (IF) properly used as a valid measure
> for comparing journals; I also consider the IF properly used as a
> possibly valid measure of article quality. But either use has many
> possible interfering factors to consider, and these measurements have
> been used in highly inappropriate ways in the past, most notoriously in
> previous UK RAEs.

With all due respect, comparing journals is part of the librarian's art,
but comparing departments and assessing research seems to me to fall
within another artform...

> Stevan mentions one of the problems. Certainly the measure of the impact
> of an individual article is more rational for assessing the quality of
> the article than measuring merely the impact of the journal in which
> it appears. This can be sufficiently demonstrated by recalling that any
> journal necessarily contains articles of a range of quality.

The direct, exact measure is preferable to the indirect, approximate one
for all the reasons that direct, exact estimates, when available, are
preferable to indirect, approximate ones. But the situation is not as
simple as that. The right scientometric model here (at least to a first
approximation) is multiple regression: We have many different kinds of
estimates of impact; each might be informative, when weighted by its
degree of predictiveness (i.e., the percentage of the total variance
that it can account for).

Yes, on the face of it, a specific paper's citation count seems a better
estimate of impact than the average citation count of the journal in
which it appeared. But journals establish their quality standards across
time and a broad sample, and the reasons for one particular paper's
popularity (or unpopularity) might be idiosyncratic.

So in the increasingly long, rich and diverse regression equation for
impact, direct individual paper impact should certainly be given the
greatest prima facie weight, but the impact factor of the journal it
appears in should not be sneezed at either. It (like the impact factor
of the author himself) might add more useful and predictive information
to the regression equation.

The real point is that none of this can be pre-judged a priori. As in
psychometrics, where the psychological tests must be "validated" against
the criterion they allegedly measure, all of the factors contributing to
the scientometric regression equation for impact need to be independently
validated.
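
A minimal sketch of such validation, in the psychometric sense, on
synthetic data: fit the regression weights on one sample, then check how
much of the criterion's variance they account for on a held-out sample.

    # Illustrative only: synthetic predictors and a synthetic criterion.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    X = rng.normal(size=(n, 3))  # e.g. paper citations, journal IF, author impact
    criterion = X @ np.array([0.7, 0.2, 0.1]) + rng.normal(scale=0.5, size=n)

    train, test = slice(0, 200), slice(200, None)
    X1 = np.column_stack([np.ones(n), X])

    # Fit weights on the first sample only.
    beta, *_ = np.linalg.lstsq(X1[train], criterion[train], rcond=None)

    # Variance accounted for on the held-out sample.
    resid = criterion[test] - X1[test] @ beta
    r2_holdout = 1 - resid.var() / criterion[test].var()
    print("held-out variance accounted for:", round(float(r2_holdout), 2))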

> More attention is needed to the comparison of fields. The citation
> patterns in different subject fields vary, not just between broad
> subject fields but within them.

Of course. And field-based (and subfield-based) weightings and patterns
would be among the first ones one would look to validate and adjust: Not
only so as not to compare apples with oranges, but again to get maximum
predictiveness and validity out of the regression. None of this argues
against scientometric regression equations for impact; it merely argues
for making them rich, diverse, and analytic.
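
A toy illustration (invented field baselines and citation counts) of the
simplest such field-based adjustment: express each paper's citations
relative to the mean for its own field before any cross-field comparison.

    # All field names, baselines and counts are invented for illustration.
    field_means = {"math": 4.0, "physics": 12.0, "ecology": 6.0, "biochemistry": 20.0}

    papers = [
        ("math", 8),          # (field, raw citation count)
        ("physics", 12),
        ("biochemistry", 20),
    ]

    for field, cites in papers:
        normalised = cites / field_means[field]
        print(f"{field}: raw={cites}, field-normalised={normalised:.2f}")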

> In the past, UK RAEs used a single
> criterion of journal impact factor in ALL academic fields; this was
> patently absurd (just compare the impact factors of journals in math
> with those in physics, or those in ecology with those in biochemistry).
> To the best of my knowledge they have long stopped this. (This incorrect
> use did much to decrease the  repute of this measure, even when correctly
> used.)

I doubt this was ever quite true of the RAE. But in any case, it does not
militate against scientometric analysis of impact in any way. It merely
underscores that naive, unidimensional analyses are unsatisfactory. It
is precisely this impoverished, unidimensional approach for which an
online open-access, full-text, citation-interlinked refereed literature
across all disciplines would be the antidote!

> In comparing different departments, the small scale variation between
> subjects specialisms can yield irrelevant comparisons, because few
> departments have such a large number of individuals that they cover the
> entire range of their subject field.

But this is a scientometric point you are making, and the remedy is
better scientometrics (not something else, such as having Socrates read
and weigh everything for us!): To vary the saying about critics of
metaphysics: "Show me someone who wishes to destroy scientometrics and
I'll show you a scientometrician with a rival system."

> I'll use ecology as an example:
> essentially all the members of my university's department [Ecology and
> Evolutionary Biology] work in mathematical ecology, and we think we are
> the leading department in the world. Most ecologists work in more applied
> areas. The leading journals of mathematical ecology have relatively lower
> impact factors, as this is a very small field. This can be taken into
> account, but in a relatively small geopolitical area like the UK, there
> account, but in a relatively small geopolitical area like the UK, there
> may be very few truly comparable departments in many fields.

Re: UK Research Assessment Exercise (RAE) review

2002-11-20 Thread David Goodman
I consider the impact factor (IF) properly used as a valid measure
for comparing journals; I also consider the IF properly used as a
possibly valid measure of article quality. But either use has many
possible interfering factors to consider, and these measurements have
been used in highly inappropriate ways in the past, most notoriously in
previous UK RAEs.

Stevan mentions one of the problems. Certainly the measure of the impact
of an individual article is more rational for assessing the quality of
the article than measuring merely the impact of the journal in which
it appears. This can be sufficiently demonstrated by recalling that any
journal necessarily contains articles of a range of quality.

More attention is needed to the comparison of fields. The citation
patterns in different subject fields vary, not just between broad
subject fields but within them. In the past, UK RAEs used a single
criterion of journal impact factor in ALL academic fields; this was
patently absurd (just compare the impact factors of journals in math
with those in physics, or those in ecology with those in biochemistry).
To the best of my knowledge they have long stopped this. (This incorrect
use did much to decrease the  repute of this measure, even when correctly
used.)

In comparing different departments, the small scale variation between
subjects specialisms can yield irrelevant comparisons, because few
departments have such a large number of individuals that they cover the
entire range of their subject field. I'll use ecology as an example:
essentially all the members of my university's department [Ecology and
Evolutionary Biology] work in mathematical ecology, and we think we are
the leading department in the world. Most ecologists work in more applied
areas. The leading journals of mathematical ecology have relatively lower
impact factors, as this is a very small field. This can be taken into
account, but in a relatively small geopolitical area like the UK, there
may be very few truly comparable departments in many fields. It certainly
cannot be taken into account in a mechanical fashion, and the available
scientometric techniques are not adequate to this level of analysis.

The importance of a paper is certainly reflected in its impact, but not
directly in its impact factor. It is not the number of publications that
cite it which is the measure, but the importance of the publications that
cite it. This is inherently not a process that can be analyzed on a
current basis.
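
One possible way of operationalising this -- not something proposed in the
message above -- is a recursive, PageRank-style weighting in which a
citation counts for more when the citing paper is itself heavily cited. A
toy sketch over a made-up citation matrix:

    # A[i][j] = 1 if paper j cites paper i; a tiny invented example.
    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)

    # Each citing paper distributes one unit of "credit" among the papers it cites.
    col_sums = A.sum(axis=0)
    M = A / np.where(col_sums == 0, 1, col_sums)

    d, n = 0.85, A.shape[0]
    scores = np.ones(n) / n
    for _ in range(100):                      # power iteration
        scores = (1 - d) / n + d * M @ scores

    print("recursive importance scores:", np.round(scores, 3))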

There is a purpose in looking at four papers only: in some fields of
the biomedical sciences in particular, it is intended to discourage
the deliberate splitting of papers into many very small publications,
with the consequence that in some fields of biomedicine a single person
might have dozens in a year, adding to the noise in the literature.
One could also argue that a researcher should be judged by
the researcher's best work, because the best work is what primarily
contributes to the progress of science.

In most other respects I agree with Stevan. I will emphasize that the
publication of scientific papers in the manner he has long advocated will
lead to the possibility of more sophisticated scientometrics. This will
provide data appropriate for analysis by those who know the techniques,
the subject, and the academic organization. The data obtainable from
the current publication system are of questionable usefulness for this.

Dr. David Goodman
Biological Sciences Bibliographer
Princeton University Library
dgood...@princeton.edu


Re: UK Research Assessment Exercise (RAE) review

2002-11-20 Thread Stevan Harnad
On Wed, 20 Nov 2002, [identity removed] wrote:

> Dear Stevan,
>
> We are running a special report on the review of the RAE
> and you were one of the people suggested as having more lively,
> radical ideas about how research should be assessed in the future [rather
> than how the funding should be allocated]. I think HEFCE are a little
> disappointed by the replies they have had so far, which seem
> more about tweaking the current system than thinking 'out of the box'. We
> are asking a range of people what they would suggest. Would it be possible
> to talk to you about your ideas on this or perhaps you could email me
> a few words on how you think the current system can be changed?
>
> Best wishes [identity removed]

Happy to oblige.

To summarize, the UK is in a unique position -- for the very reason that
it is the only country with a national research assessment exercise like
the RAE -- to do two very closely related things in concert, with three
very likely and very positive outcomes:

(i) It will give the UK RAE a far more effective and sensitive measure
of research productivity and impact, at far less cost (both to the
RAE and to the universities preparing their RAE submissions).

(ii) Besides strengthening the assessment of UK research, it will
also greatly strengthen the uptake and impact of UK research, by
increasing its visibility, accessibility and usage.

(iii) At the same time, the UK RAE will thereby set an example to the
rest of the world that will surely be emulated, in both respects:
research assessment and research access.

The proposal is quite simple, though I will spell it out as a series
of 20 closely connected points:

(1) We already have an RAE, every 4 years.

(2) It costs a great deal of time and energy (time and energy that
could be used to actually do research, rather than preparing and
assessing RAE returns) to prepare and assess, for both universities
and assessors.

(3) It is no secret that for most areas of research, the single most
important and predictive measure of research impact is the so-called
"impact factor" -- the number of times a work has been cited (hence used)
by other research papers. This is a measure of the importance and uptake
of that research.

(4) The impact factor is used very indirectly in the RAE: Researchers each
submit 4 publications for the 4-year interval, and these are (informally)
weighted by the impact factor of the peer-reviewed journal in which
they appeared. (For books or other kinds of publications, see below;
in general, peer-reviewed journal- or conference-papers are the coin of
the research realm, especially in scientific disciplines.)

(5) If someone did a statistical correlation on the numerical outcome of
the RAE, using the weighted impact factors of the publications of each
department and institution, they would be able to predict the outcome
ratings quite closely. (No one has done this exact statistic, because
the data are implicit rather than explicit in the returns, but it could
be done, and it would be a good idea to do it, just to get a clear
indication of where the RAE stands right now, before the simple reforms
I am recommending.)

(6) There is no reason why the RAE should be based only on the impact
factors of 4 publications per researcher, nor why it should be weighted
by the impact factor of the journal in which it appeared, rather than by
the exact impact of each publication itself. (On average the two will
agree, but there is no reason to rely on blunt-instrument averages if
we can use a sharper, exact instrument: A researcher's individual paper
may have a much higher -- or lower -- impact than the average impact of
the journal in which it appears.)

(7) Nor is there any reason why the RAE should be done, with great
effort and expense, every 4 years!

(8) Since the main factor in the RAE outcome ratings is research impact,
there is no reason whatsoever why research impact should not be
continuously assessed -- and directly, rather than indirectly, via the
true impact factor of the publication (or the author!), rather than
merely the journal's average impact factor.

(9) And there is now not only a method to (a) continuously assess full
UK research impact, and not only get this done (b) incomparably more
cheaply and less effortfully for all involved, while at the same time
making it (c) more sensitive and accurate in estimating the true impact
of the research, but doing the RAE this new way will also have a dramatic
effect on the magnitude of UK research impact itself, (d) increasing
research visibility, usage, citation and productivity dramatically,
simply by maximizing its accessibility.

(10) The method in question is to implement the RAE henceforth online
only, and the only two critical elements are (1) the submission of a
RAE-standardized online CV by every researcher and (2) a link in each
CV between every published paper -- books discussed separately below --
and the full digit

Re: UK Research Assessment Exercise (RAE) review

2002-10-28 Thread Stevan Harnad
This is a response to the HEFCE "invitation to contribute" recommendations
for restructuring the RAE http://www.hefce.ac.uk/research/

I have written two papers on how the RAE might be greatly improved in
its assessment accuracy and at the same time made far less effortful
and costly -- while (mirabile dictu) doing a great indirect service
to research and researchers, both in the UK and in the rest of the
scholarly/scientific world as well:

Harnad, S. (2001) "Research access, impact and assessment." Times Higher
Education Supplement 1487: p. 16.
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/83/index.html

Harnad, S. (2001) The Self-Archiving Initiative. Nature 410: 1024-1025
http://www.nature.com/nature/debates/e-access/Articles/harnad.html

If you wish to see what the RAE would look like if UK research output
were continually accessible online, and hence continuously assessable, see:
http://citebase.eprints.org/

To see how the RAE could help hasten this outcome (which is in any case
optimal and inevitable), see:
http://www.eprints.org/self-faq/#research-funders-do

We at Southampton are currently harvesting the RAE submissions data
and putting them in an Eprint Archive to provide a "demo" of the sorts of
possibilities an online, open-access research corpus opens up for
research visibility, accessibility, uptake, usage, citation, impact and
assessability.

Stevan Harnad


UK Research Assessment Exercise (RAE) review

2002-10-24 Thread Elizabeth Gadd
You may be aware that the UK Research Assessment Exercise (RAE) is under
review and there is a general invitation to contribute
(http://www.hefce.ac.uk/research/).

I would recommend anyone who is concerned about the impact of the RAE on
scholarly communication, and who sees a role for the RAE in encouraging the
self-archiving of universities' research output to contribute. Emails may be
sent to Vanessa Conte at rarev...@hefce.ac.uk. The closing date is 29
November 2002.

Best
Elizabeth

*
Elizabeth Gadd, Research Associate &
Editor,  Library and Information Research News
Department of Information Science
Loughborough University
Loughborough, Leics, LE11 3TU
Tel: +44 (0)1509 228053  Fax: +44 (0)1509 223072
Email: e.a.g...@lboro.ac.uk