Re: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM Digest - 8 Aug 2008 to 9 Aug 2008 (#2008-153)

2008-08-11 Thread Guédon Jean-Claude
I had always assumed that this list dealt with policy issues, not styles of 
expression. 

Just a remark made with a smile (and no cynicism). And indeed, as I had the 
opportunity to say a little while ago, "le style, c'est l'homme" ("the style 
is the man"; Buffon).

As for creating better indicators, I am not involved in this kind of work, but 
I am all for it, of course. Who can be against motherhood and apple pie (and 
Yves Gingras)?

Jean-Claude Guédon


-Original Message-
From: American Scientist Open Access Forum on behalf of Yves Gingras
Sent: Sun 8/10/2008 3:41 PM
To: american-scientist-open-access-fo...@listserver.sigmaxi.org
Subject:  Re: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM Digest - 8 Aug 2008 to 9 
 Aug 2008 (#2008-153)
 

It could be expected that my colleague Jean-Claude Guedon would offer
to teach us, in his usual cynical, smiling, tongue-in-cheek manner,
the simplistic and obvious constructivist explanation of what "sticks" in
society; as if we, poor naifs, did not know this basic fact that, in
society, all representations are born and die in struggles...

But as he says: this is "somewhat irrelevant" to my real point, which is that
by fighting these absurd measures, which can in fact generate stupid
university policies, we can contribute to better rankings and policies. For,
following his logic, if people chose their universities on the basis
of Feng Shui or even the Chinese horoscope (as some may do already), then we
would have to live with it if, for some contingent reason, it happened to
"stick" and be used by students and politicians... This reminds me (believe
me: it is true!) of a deputy minister of SCIENCE who chose his
collaborators based on their astral signs!...

But one can see things differently and recall that it took years for
Canadian universities to pull out of Maclean's rankings, but they did. That
was a minimal logical step to take, even though, to make money, Maclean's
will of course continue to publish its ranking on its own; but at least
universities will not use their own money to create that misconceived
ranking and in this way give it credibility. So, the same with the Shanghai
and Web rankings: a systematic demolition by people who work in that field.
(I may recall that, as Scientific Director of the Observatoire des sciences
et des technologies (OST), I have been working on research evaluation for
more than 10 years with many ministries, universities and research centres,
and I know how to construct a well-defined indicator of research impact
based on publications and citations. I also know how to recognize false
indicators by analyzing their properties.) Of course such indicators can be
debated and interpreted (as my colleague loves to do constantly), but they
are like the inflation index or the unemployment index: based on controlled
and coherent data, so that even interpretations are constrained.
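The kind of well-defined, coherent citation indicator alluded to above can be illustrated with a toy field-normalized citation score (a minimal sketch only, not OST's actual methodology; the author names and citation data are invented for the example):

```python
from collections import defaultdict

def field_normalized_impact(papers):
    """Mean field-normalized citation score per author: each paper's
    citation count is divided by the mean citation count of its field,
    then averaged over each author's papers. Normalizing by field keeps
    the indicator coherent across disciplines with different citation
    habits (the 'controlled data' constraint)."""
    # Mean citations per field: the controlled baseline.
    by_field = defaultdict(list)
    for p in papers:
        by_field[p["field"]].append(p["citations"])
    field_mean = {f: sum(c) / len(c) for f, c in by_field.items()}

    # Each author's mean normalized score.
    scores = defaultdict(list)
    for p in papers:
        norm = p["citations"] / field_mean[p["field"]] if field_mean[p["field"]] else 0.0
        for a in p["authors"]:
            scores[a].append(norm)
    return {a: sum(v) / len(v) for a, v in scores.items()}

# Hypothetical data: raw counts differ tenfold across fields, yet the
# normalized impact of A and B is identical.
papers = [
    {"authors": ["A"], "field": "history", "citations": 6},
    {"authors": ["B"], "field": "biology", "citations": 60},
    {"authors": ["C"], "field": "history", "citations": 2},
    {"authors": ["C"], "field": "biology", "citations": 20},
]
print(field_normalized_impact(papers))  # {'A': 1.5, 'B': 1.5, 'C': 0.5}
```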

So, beyond (and after) the basic constructivist sociology of my colleague,
one can go a step further, which I took as implicit in my text: use every
opportunity to remind managers and politicians that 1) the Shanghai
indicators and the bizarre one on Web visibility are ill-conceived, 2) we
can create much better indicators, and 3) using bad indicators can lead to
dangerous policies, just as a bad medical diagnosis may lead to giving the
wrong pill...

As someone who has a certain expertise in indicators, I prefer to try to
convince people to use good instead of bad indicators, and I keep my
Sociology 101 for my classes. And of course "good" and "bad" are also
socially constructed, as my colleague will hasten to add... But as a social
agent, I fight (naively?) using my expertise (as intellectuals should do) to
make sure the conceptual houses that are built in our society are not based
on ill-conceived plans. For, in analogy with real houses, badly constructed
ones develop structural problems and eventually fall; sometimes on real
people...

But I stop here, and I will NOT do like my good friend Stevan: lose much
more time in dilettantish and unending exchanges with our colleague about
obvious facts that lead to more talk and less action. For one can be content
with observing the world from above, with the smile of those in the know, or
one can try to make it less absurd, even if that means going against the
dominant wind.


Yves Gingras


De : Guédon Jean-Claude 
Date : Sat, 9 Aug 2008 08:03:20 -0400
Objet : Re: University ranking

The criticism of the university rankings in terms of measuring "what" is
quite correct. However, it is also somewhat irrelevant. What is important in
the end, whether we like it or not (and I certainly do not like it any more
than the previous commentators) is that it creates a benchmark that sticks,
so to speak, and is used. If there ever was a good example of social
construction of "reality", this is it. What is at stake here is not quality
measurement; rather, it is "logo" building for a globalized knowledge
economy. If administrators, the press and gove

Re: Repositories using some form of automatically generated metadata

2008-08-11 Thread Lee Giles
CiteSeerX uses nothing but automated metadata extraction. You can try it
out at

http://citeseerx.ist.psu.edu

Best

Lee Giles

Mahendra Mahey wrote:
> I am trying to find the extent to which repositories are using some form
> of automatically generated metadata.
> 
> This could range from automatically inserting the depositor's details
> into the author field as a suggestion (if they are indeed the
> author, as sometimes they are not), to a pick list appearing on a deposit
> form from an internal database, to the use of automatic classification
> systems that populate fields such as keywords, subject and title after
> an analysis of the item deposited.
> 
> *Questions*
> 
> If your repository is using auto metadata...
> 
> What kind of auto metadata is being used and how? Has this been formally
> documented? Is it available? If not, could you provide me with a
> screenshot?
> 
> If you are not using it, I am assuming that you would like to use some
> form of it, as long as it is reliable?  If any of you have objections to,
> or bad experiences with, using auto-generated metadata, please let me
> know why.
> 
> Could you please *reply to me off list*?  I will of course provide the
> list with a summary of my findings.
> 
> Thank you
> 
> --
> ---
> Mr Mahendra Mahey
> 
> Repositories Research Officer
> 
> UKOLN,
> University of Bath,
> Bath,
> BA2 7AY
> Tel: ++44 (0) 1225 384594
> Fax: ++44 (0) 1225 386256
> email: m.ma...@ukoln.ac.uk
> skypeID: mr_mahendra_mahey
> Mobile: ++44 (0) 7896300820
> ---
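The automated metadata extraction that CiteSeerX performs at scale can be illustrated, very naively, as a rule-based pass over a paper's first page of text (a toy sketch only; CiteSeerX's real extractors rely on trained models, and the layout heuristics below, first line is the title, second line lists authors, are assumptions made for the example):

```python
import re

def extract_metadata(first_page: str) -> dict:
    """Naive rule-based metadata extraction from a paper's first page.
    Assumes: the first non-empty line is the title, the second lists
    authors separated by commas or 'and', and an 'Abstract' heading
    precedes the abstract text."""
    lines = [ln.strip() for ln in first_page.splitlines() if ln.strip()]
    title = lines[0] if lines else ""
    authors = re.split(r",\s*|\s+and\s+", lines[1]) if len(lines) > 1 else []
    abstract = ""
    for i, line in enumerate(lines):
        if line.lower().startswith("abstract"):
            abstract = " ".join(lines[i + 1:])
            break
    return {"title": title, "authors": authors, "abstract": abstract}

page = """Validating Research Performance Metrics
Stevan Harnad and Leslie Carr

Abstract
Metrics must be validated against peer rankings."""
print(extract_metadata(page))
```

Real extractors must cope with multi-column layouts, affiliations interleaved with author names, and missing headings, which is why machine-learned approaches replaced rules of this kind.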


The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance

2008-08-11 Thread Stevan Harnad

Ethics In Science And Environmental Politics (ESEP) 

ESEP Theme Section: The Use And Misuse Of Bibliometric Indices In
Evaluating Scholarly Performance + accompanying Discussion Forum

Editors: Howard I. Browman, Konstantinos I. Stergiou
Quantifying the relative performance of individual scholars, groups of
scholars, departments, institutions, provinces/states/regions and countries
has become an integral part of decision-making over research policy, funding
allocations, awarding of grants, faculty hirings, and claims for promotion
and tenure. Bibliometric indices (based mainly upon citation counts), such
as the h-index and the journal impact factor, are heavily relied upon in
such assessments. There is a growing consensus, and a deep concern, that
these indices, more and more often used as a replacement for the informed
judgement of peers, are misunderstood and are, therefore, often
misinterpreted and misused. The articles in this ESEP Theme Section present
a range of perspectives on these issues. Alternative approaches, tools and
metrics that will hopefully lead to a more balanced role for these
instruments are presented.
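The h-index mentioned above is easy to state precisely: a scholar has index h if h of their papers have each received at least h citations. A minimal sketch of the computation:

```python
def h_index(citations: list[int]) -> int:
    """h-index: the largest h such that h papers have >= h citations each."""
    # Sort citation counts in descending order, then find the last rank r
    # (1-based) at which the r-th paper still has at least r citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: one highly cited paper cannot raise h
```

The second call shows why the index is debated: a single very highly cited paper contributes no more to h than a modestly cited one, which is one of the distortions the Theme Section articles examine.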

  Browman HI, Stergiou KI INTRODUCTION: Factors and indices
  are one thing, deciding who is scholarly, why they are
  scholarly, and the relative value of their scholarship is
  something else entirely 
  ESEP 8:1-3 

  Campbell P Escape from the impact factor 
  ESEP 8:5-7 

  Lawrence PA Lost in publication: how measurement harms
  science 
  ESEP 8:9-11 

  Todd PA, Ladle RJ Hidden dangers of a 'citation culture' 
  ESEP 8:13-16 

  Taylor M, Perakakis P, Trachana V The siege of science 
  ESEP 8:17-40 

  Cheung WWL The economics of post-doc publishing 
  ESEP 8:41-44 

  Tsikliras AC Chasing after the high impact 
  ESEP 8:45-47 

  Zitt M, Bassecoulard E Challenges for scientometric
  indicators: data demining, knowledge flows measurements
  and diversity issues 
  ESEP 8:49-60 

  Harzing AWK, van der Wal R Google Scholar as a new source
  for citation analysis 
  ESEP 8:61-73 

  Pauly D, Stergiou KI Re-interpretation of 'influence
  weight' as a citation-based Index of New Knowledge (INK) 
  ESEP 8:75-78 

  Giske J Benefitting from bibliometry 
  ESEP 8:79-81 

  Butler L Using a balanced approach to bibliometrics:
  quantitative performance measures in the Australian
  Research Quality Framework 
  ESEP 8:83-92 
  Erratum 

  Bornmann L, Mutz R, Neuhaus C, Daniel HD Citation counts
  for research evaluation: standards of good practice for
  analyzing bibliometric data and presenting and
  interpreting results 
  ESEP 8:93-102 

  Harnad S Validating research performance metrics against
  peer rankings 
  ESEP 8:103-107