Re: Call for a vote of nonconfidence in the moderator of the AmSci Forum

2008-10-08 Thread Paul Turnbull
I could not endorse the support for Stevan more strongly than has so far
been voiced.  He has made this list essential reading for anyone
interested in the evolution of humanities disciplines into the realm of
networked communications.  He continues to have my support.

Professor Paul Turnbull
School of Arts
Griffith University
Nathan Q4111  Australia
+61 7 3735 4152
Mobile 0408441139


Re: Off-line Vote

2008-10-08 Thread Jan Szczepanski
Crisis everywhere these days, not only in the markets but now even within
the OA movement.

BioMed Central has been bought by Springer, and our beloved chairman is
being criticized!

Barbara Kirsop's advice is known from political and religious history as
not the ultimate answer, even if it is normal practice even today.

Stevan Harnad should step back, but of course not remove himself from the
list. We need Stevan even when he is dead wrong.

Jan





Barbara Kirsop wrote:
 Events have overtaken the message I was about to send in the hope of
 ending this damaging exchange. I planned to say the following:
 
 'This exchange of messages is damaging to the List and to OA itself. I
 would like to suggest that those unhappy with any aspect of its operation
 merely remove themselves from the List. This is the normal practice.'
 
 A 'vote' is unnecessary and totally inappropriate.
 
 Barbara

--

Opinions expressed here are my own and not
that of Göteborgs universitetsbibliotek



Jan Szczepanski
Senior Librarian
Goteborgs universitetsbibliotek
Box 222
SE 405 30 Goteborg, SWEDEN
Tel: +46 31 7861164 Fax: +46 31 163797
E-mail: jan.szczepan...@ub.gu.se


Re: New ways of measuring research

2008-10-08 Thread Alma Swan

Further to my previous message on this topic, I've already had some offline 
responses. So, some things I had already noted, plus some sent offline after my 
first request to this list (including some tongue-in-cheek ones) are:

Individuals' efforts can result in:
- Medals and prizes awarded to you
- Having a prize named after you (Nobel)
- Having a building named after you (not uncommon)
- Having an institution named after you (Salk)
- Having a 5 billion euro international project built on your work (Higgs)

But on a more mundane note, other methodologies I know of that are being 
developed for measuring research outcomes are:
- Ways to measure long-term outcomes of research in the area of health sciences 
(for example, leading to or incorporated into treatments or techniques in use 
20 years down the line) 
- Something akin to this for looking at long-term impact of research in the 
social sciences

Specific examples would be useful if anyone can point me towards any.

I am also appealing to provosts/rectors/VCs or those involved in the 
administration of research-based institutions/programmes to tell us what sort 
of measures you would like to have (offline if you wish). These need not only 
be for the rather specific purpose of research evaluation, but for any 
institutional purpose (such as new measures of ROI).

Alma Swan
Key Perspectives Ltd
Truro, UK

--- On Wed, 8/10/08, Subbiah Arunachalam subbia...@yahoo.com wrote:

 From: Subbiah Arunachalam subbia...@yahoo.com
 Subject: New ways of measuring research
 To: american-scientist-open-access-fo...@listserver.sigmaxi.org
 Date: Wednesday, 8 October, 2008, 1:01 AM
 Dear Members of the List:
 
 One of the key concerns of the Open Access movement is how
 the transition from traditional toll-access publishing
 to scientific papers becoming freely accessible through open
 access channels (both OA repositories and OA journals)
 will affect the way we evaluate science.
 
 In the days of print-only journals, ISI (now Thomson
 Reuters) came up with impact factors and other
 citation-based indicators. People like Gene Garfield and
 Henry Small of ISI and colleagues in neighbouring Drexel
 University in Philadelphia, Derek de Solla Price at Yale,
 Mike Moravcsik in Oregon, Fran Narin and colleagues at CHI,
 Tibor Braun and the team in Hungary, Ton van Raan and his
 colleagues at CWTS, Loet Leydesdorff in Amsterdam, Ben
 Martin and John Irvine of Sussex, Leo Egghe in Belgium and a
 large number of others too numerous to list here took
 advantage of the voluminous data put together by ISI to
 develop bibliometric indicators. Respected organizations
 such as the NSF in the USA and the European Union's
 Directorate of Research (which brought out the European
 Report on S&T Indicators, similar to the NSF S&T
 Indicators) recognised bibliometrics as a legitimate tool. A
 number of scientometrics researchers built citation networks;
 David Pendlebury at ISI started trying to predict Nobel Prize
 winners using ISI citation data.
 
 When the transition from print to electronics started
 taking place, the scientometrics community came up with
 webometrics. When the transition from toll-access to open
 access started taking place, we adopted webometrics to
 examine whether open access improves visibility and citations.
 But we are basically using bibliometrics.
 
 Now I hear from the Washington Research Evaluation Network
 that:

 "The traditional tools of R&D evaluation (bibliometrics,
 innovation indices, patent analysis, econometric modeling,
 etc.) are seriously flawed and promote seriously flawed
 analyses" and "Because of the above, reports like the
 'Gathering Storm' provide seriously flawed analyses and
 misguided advice to science policy decision makers."
 Should we rethink our approach to evaluation of science?
 Arun
 [Subbiah Arunachalam]
 
 
 
 
 
 - Original Message 
 From: Alma Swan a.s...@talk21.com
 To:
 american-scientist-open-access-fo...@listserver.sigmaxi.org
 Sent: Wednesday, 8 October, 2008 2:36:44
 Subject: New ways of measuring research
 
 Barbara Kirsop said:
  'This exchange of messages is damaging to the List and to OA itself.
  I would like to suggest that those unhappy with any aspect of its
  operation merely remove themselves from the List. This is the normal
  practice.'

  A 'vote' is unnecessary and totally inappropriate.
 
 Exactly, Barbara. These attempts to undermine Stevan are
 entirely misplaced and exceedingly annoying. The nonsense
 about Stevan resigning, or changing his moderating style,
 should not continue any further. It's taking up
 bandwidth, boring everyone to blazes, and getting us
 precisely nowhere except generating bad blood. 
 
 Let those who don't like the way Stevan moderates this
 list resign as is the norm and, if 

Re: Call for a vote of nonconfidence in the moderator of the AmSci Forum

2008-10-08 Thread Bill Hooker
Such a vote seems unnecessary to me, but if one is to be (is being?) held
then I wish to make it clear that I vote to retain Stevan Harnad as moderator.


Re: Call for a vote of nonconfidence in the moderator of the AmSci Forum

2008-10-08 Thread David Dickson
Please count my vote for Stevan too. 

David Dickson (SciDev.Net)

-Original Message-
From: American Scientist Open Access Forum
[mailto:american-scientist-open-access-fo...@listserver.sigmaxi.org] On
Behalf Of Bill Hooker
Sent: 08 October 2008 05:32
To: american-scientist-open-access-fo...@listserver.sigmaxi.org
Subject: Re: Call for a vote of nonconfidence in the moderator of the
AmSci Forum

Such a vote seems unnecessary to me, but if one is to be (is being?)
held
then I wish to make it clear that I vote to retain Stevan Harnad as
moderator.



Re: New ways of measuring research

2008-10-08 Thread Valdez, Bill
Hello everyone:

I was the person who asserted during the most recent Washington Research
Evaluation Network (WREN) meeting that: "The traditional tools of R&D
evaluation (bibliometrics, innovation indices, patent analysis,
econometric modeling, etc.) are seriously flawed and promote seriously
flawed analyses" and "Because of the above, reports like the 'Gathering
Storm' provide seriously flawed analyses and misguided advice to
science policy decision makers."  I will admit that this was meant to
be provocative, but it was also meant to convey the views of a consumer of
science policy and research evaluation.

Perhaps I could explain my reasoning and then folks could jump in.

First, the primary reason that I believe bibliometrics, innovation
indices, patent analysis and econometric modeling are flawed is that
they rely upon the counting of things (papers, money, people, etc.)
without understanding the underlying motivations of the actors within
the scientific ecosystem.  This is a conversation I have had with Fran
Narin, Diana Hicks, Caroline Wagner and a host of others, and it comes
down to a basic question: what motivates scientists to collaborate?
If we cannot come up with a set of business decision rules for the
scientific community, then we can never understand optimal levels of
funding of R&D for nations, the reasons why institutions collaborate, or
a host of other questions that underpin the scientific process and
explain the core value proposition behind the scientific endeavor.

Second, what science policy makers want is a set of decision support
tools that supplement the existing gold standard (expert judgment) and
provide options for the future.  When we get down to the basics, policy
makers need to understand the benefits and effectiveness of their
investment decisions in RD.  Currently, policy makers rely on big
committee reviews, peer review, and their own best judgment to make
those decisions.  The current set of tools available don't provide
policy makers with rigorous answers to the benefits/effectiveness
questions (see my first point) and they are too difficult to use and/or
inexplicable to the normal policy maker.  The result is the laundry list
of metrics or indicators that are contained in the Gathering Storm
or any of the innovation indices that I have seen to date.

Finally, I don't think we know enough about the functioning of the
innovation system to begin making judgments about which
metrics/indicators are reliable enough to provide guidance to policy
makers.  I believe that we must move to an ecosystem model of innovation
and that if you do that, then non-obvious indicators (relative
competitiveness/openness of the system, embedded infrastructure, etc.)
become much more important than the traditional metrics used by NSF,
OECD, EU and others.  In addition, the decision support tools will
gravitate away from the static (econometric modeling,
patent/bibliometric citations) and toward the dynamic (systems modeling,
visual analytics).

These are the kinds of issues that my colleague, Julia Lane, and I have
been discussing with other U.S. federal government colleagues as part of
the Science of Science Policy Interagency Task Group (SoSP ITG) that was
created by the President's Science Advisor, Dr. John Marburger, two years
ago.  The SoSP ITG has created a research Roadmap that would deal with
the three issues (and many more) discussed above as a way to push the
envelope in the emerging field of science policy research that Julia
supports at NSF.  The SoSP ITG is also hosting a major workshop in
December in Washington, with WREN, that will discuss the Roadmap and its
possible implementation.

Regards,

Bill Valdez
U.S. Department of Energy

-Original Message-
From: Subbiah Arunachalam [mailto:subbia...@yahoo.com] 
Sent: Tuesday, October 07, 2008 8:01 PM
To: American Scientist Open Access Forum
Subject: New ways of measuring research

Dear Members of the List:

One of the key concerns of the Open Access movement is how the
transition from traditional toll-access publishing to scientific papers
becoming freely accessible through open access channels (both OA
repositories and OA journals) will affect the way we evaluate science.

In the days of print-only journals, ISI (now Thomson Reuters) came up
with impact factors and other citation-based indicators. People like
Gene Garfield and Henry Small of ISI and colleagues in neighbouring
Drexel University in Philadelphia, Derek de Solla Price at Yale, Mike
Moravcsik in Oregon, Fran Narin and colleagues at CHI, Tibor Braun and
the team in Hungary, Ton van Raan and his colleagues at CWTS, Loet
Leydesdorff in Amsterdam, Ben Martin and John Irvine of Sussex, Leo
Egghe in Belgium and a large number of others  too numerous to list here
took advantage of the voluminous data put together by ISI to develop
bibliometric indicators. Respected organizations such as the NSF in the USA
and the European Union's Directorate of Research (which brought 

Re: Are Online and Free Online Access Broadening or Narrowing Research?

2008-10-08 Thread Stevan Harnad
The decline in the concentration of citations, 1900-2007

Vincent Lariviere, Yves Gingras, Eric Archambault

http://arxiv.org/abs/0809.5250
(Deposited on 30 Sep 2008)

This paper challenges recent research (Evans, 2008) reporting that the
concentration of cited scientific literature increases with the online
availability of articles and journals. Using Thomson Reuters' Web of
Science, the present paper analyses changes in the concentration of
citations received (two- and five-year citation windows) by papers
published between 1900 and 2005. Three measures of concentration are
used: the percentage of papers that received at least one citation
(cited papers); the percentage of papers needed to account for 20, 50
and 80 percent of the citations; and, the Herfindahl-Hirschman index.
These measures are used for four broad disciplines: natural sciences
and engineering, medical fields, social sciences, and the humanities.
All these measures converge and show that, contrary to what was
reported by Evans, the dispersion of citations is actually increasing.
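
For anyone who wants to try these measures on their own data, here is a
minimal illustrative sketch in Python (not taken from the paper; the
function and the sample numbers are invented) of the three concentration
measures described in the abstract, computed from per-paper citation
counts for a single field and publication year:

# Illustrative sketch (not from the paper): the three concentration
# measures described above, computed from a list of per-paper citation
# counts for one field and publication year.
def concentration_measures(citation_counts, shares=(0.2, 0.5, 0.8)):
    n = len(citation_counts)
    total = sum(citation_counts)
    # 1. Percentage of papers that received at least one citation.
    pct_cited = 100.0 * sum(1 for c in citation_counts if c > 0) / n
    # 2. Percentage of papers needed to account for 20, 50 and 80 percent
    #    of the citations, counting from the most-cited paper downward.
    ranked = sorted(citation_counts, reverse=True)
    pct_needed = {}
    for share in shares:
        cumulative, papers = 0, 0
        for c in ranked:
            cumulative += c
            papers += 1
            if cumulative >= share * total:
                break
        pct_needed[share] = 100.0 * papers / n
    # 3. Herfindahl-Hirschman index over each paper's share of all citations.
    hhi = sum((c / total) ** 2 for c in ranked) if total else 0.0
    return pct_cited, pct_needed, hhi

# Example with a small, highly skewed distribution of citation counts:
print(concentration_measures([50, 10, 5, 2, 1, 0, 0, 0]))

Rising values of the first two measures, together with a falling HHI,
would indicate that citations are becoming less concentrated, which is the
trend the paper reports.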


Re: New ways of measuring research

2008-10-08 Thread Stevan Harnad
On Wed, Oct 8, 2008 at 7:57 AM, Valdez, Bill
bill.val...@science.doe.gov wrote:

 the primary reason that I believe bibliometrics, innovation
 indices, patent analysis and econometric modeling are flawed is that
 they rely upon the counting of things (papers, money, people, etc.)
 without understanding the underlying motivations of the actors within
 the scientific ecosystem.

There are two ways to evaluate:

Subjectively (expert judgment, peer review, opinion polls)
or
Objectively: counting things

The same is true of motives: you can assess them subjectively or
objectively. If objectively, you have to count things.

That's metrics.

Philosophers say: "Show me someone who wishes to discard metaphysics,
and I'll show you a metaphysician with a rival (metaphysical) system."

The metric equivalent is: "Show me someone who wishes to discard
metrics (counting things), and I'll show you a metrician with a rival
(metric) system."

Objective metrics, however, must be *validated*, and that usually
begins by initializing their weights based on their correlation with
existing (already validated, or face-valid) metrics and/or peer review
(expert judgment).

Note also that there are a-priori evaluations (research funding
proposals, research findings submitted for publication) and
a-posteriori evaluations (research performance assessment).

 what motivates scientists to collaborate?

You can ask them (subjective), or you can count things
(co-authorships, co-citations, etc.) to infer what factors underlie
collaboration (objective).
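
As a toy illustration of the "count things" route (a sketch only: the
author lists below are invented, borrowing names mentioned earlier in this
thread), co-authorship pairs can be tallied directly from paper metadata:

# Toy sketch: counting co-authorship pairs from paper author lists.
# The data here is invented purely for illustration.
from collections import Counter
from itertools import combinations

papers = [
    ["Garfield", "Small"],
    ["Narin", "Hicks", "Wagner"],
    ["Garfield", "Narin"],
]

pair_counts = Counter()
for authors in papers:
    # Each unordered pair of co-authors on a paper counts once.
    for pair in combinations(sorted(set(authors)), 2):
        pair_counts[pair] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} -- {b}: {n} joint paper(s)")

Counts like these, and how they vary over time or across fields, are the
raw material from which objective inferences about collaboration can be
drawn.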

 Second, what science policy makers want is a set of decision support
 tools that supplement the existing gold standard (expert judgment) and
 provide options for the future.

New metrics need to be validated against existing, already validated
(or face-valid) metrics, which in turn have to be validated against the
gold standard (expert judgment). Once shown to be reliable and valid,
metrics can then predict on their own, especially jointly, with
suitable weights:

The UK RAE 2008 offers an ideal opportunity to validate a wide
spectrum of old and new metrics, jointly, field by field, against
expert judgment:

Harnad, S. (2007) Open Access Scientometrics and the UK Research
Assessment Exercise. In Proceedings of 11th Annual Meeting of the
International Society for Scientometrics and Informetrics 11(1), pp.
27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
http://eprints.ecs.soton.ac.uk/13804/

Sample of candidate OA-era metrics:

Citations (C)
CiteRank
Co-citations
Downloads (D)
C/D Correlations
Hub/Authority index
Chronometrics: Latency/Longevity
Endogamy/Exogamy
Book citation index
Research funding
Students
Prizes
h-index
Co-authorships
Number of articles
Number of publishing years
Semiometrics (latent semantic indexing, text overlap, etc.)
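
A minimal sketch of the joint-validation step described above, under the
assumption that one has such a battery of candidate metrics plus
peer-review scores for the departments in a single field: regress the
expert scores on the standardized metrics to initialize per-metric
weights. The numbers below are placeholders, not real RAE data.

# Minimal sketch: initialize per-metric weights by regressing
# expert-judgment scores on a battery of candidate metrics for one field.
# All numbers are placeholders, not real RAE data.
import numpy as np

# Rows = departments in one field; columns = candidate metrics,
# e.g. citations, downloads, h-index, co-authorships.
metrics = np.array([
    [120.0, 3400.0, 14.0, 22.0],
    [ 80.0, 2100.0, 10.0, 15.0],
    [200.0, 5200.0, 19.0, 30.0],
    [ 40.0,  900.0,  6.0,  9.0],
    [150.0, 4100.0, 16.0, 25.0],
    [ 60.0, 1500.0,  8.0, 12.0],
])
peer_scores = np.array([3.1, 2.4, 3.8, 1.7, 3.4, 2.0])  # expert judgment

# Standardize each metric so the fitted weights are comparable.
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)

# Ordinary least squares: peer_scores ~ intercept + weighted sum of metrics.
X = np.column_stack([np.ones(len(z)), z])
weights, *_ = np.linalg.lstsq(X, peer_scores, rcond=None)

predicted = X @ weights
r = np.corrcoef(predicted, peer_scores)[0, 1]
print("initial metric weights:", np.round(weights[1:], 3))
print("correlation with expert judgment:", round(r, 3))

Once weights like these have been validated field by field, the metric
battery can be used predictively on its own, with periodic re-validation
against expert judgment.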

 policy makers need to understand the benefits and effectiveness of their
 investment decisions in RD.  Currently, policy makers rely on big
 committee reviews, peer review, and their own best judgment to make
 those decisions.  The current set of tools available don't provide
 policy makers with rigorous answers to the benefits/effectiveness
 questions... and they are too difficult to use and/or
 inexplicable to the normal policy maker.  The result is the laundry list
 of metrics or indicators that are contained in the Gathering Storm
 or any of the innovation indices that I have seen to date.

The difference between unvalidated and validated metrics is the
difference between night and day.

The role of expert judgment will obviously remain primary in the case
of a-priori evaluations (specific research proposals and submissions
for publication) and a-posteriori evaluations (research performance
evaluation, impact studies).

 Finally, I don't think we know enough about the functioning of the
 innovation system to begin making judgments about which
 metrics/indicators are reliable enough to provide guidance to policy
 makers.  I believe that we must move to an ecosystem model of innovation
 and that if you do that, then non-obvious indicators (relative
 competitiveness/openness of the system, embedded infrastructure, etc.)
 become much more important than the traditional metrics used by NSF,
 OECD, EU and others.  In addition, the decision support tools will
 gravitate away from the static (econometric modeling,
 patent/bibliometric citations) and toward the dynamic (systems modeling,
 visual analytics).

I'm not sure what all these measures are, but assuming they are
countable metrics, they all need prior validation against validated or
face-valid criteria, field by field, preferably as a large battery
of candidate metrics, validated jointly, initializing the weights of
each.

OA will help provide us with a rich new spectrum of candidate metrics
and an open means of monitoring, validating, and fine-tuning them.

Stevan Harnad


Re: Call for a vote of nonconfidence in the moderator of the AmSci Forum

2008-10-08 Thread sely maria de souza costa

I have already mentioned my unconditional support for Stevan in response to
another message. Just in case, I am doing it again!

Regards to all Stevan supporters!

Sely
- Original Message -
From: David Dickson david.dick...@scidev.net
To: american-scientist-open-access-fo...@listserver.sigmaxi.org
Sent: Wednesday, 8 October 2008 06:10:31 (GMT-0300) Auto-Detected
Subject: Re: Call for a vote of nonconfidence in the moderator of the AmSci
Forum

Please count my vote for Stevan too.

David Dickson (SciDev.Net)

-Original Message-
From: American Scientist Open Access Forum
[mailto:american-scientist-open-access-fo...@listserver.sigmaxi.org] On
Behalf Of Bill Hooker
Sent: 08 October 2008 05:32
To: american-scientist-open-access-fo...@listserver.sigmaxi.org
Subject: Re: Call for a vote of nonconfidence in the moderator of the
AmSci Forum

Such a vote seems unnecessary to me, but if one is to be (is being?)
held
then I wish to make it clear that I vote to retain Stevan Harnad as
moderator.
