Hi Éric,

It is good to see a discussion of this topic. Some preliminary thoughts:

The journal-level peer review process involved in the SSHRC Aid to Scholarly 
Journals program is the type of model I suggest others look at. The primary 
questions have nothing to do with metrics; rather, they are qualitative: 
whether a high standard of review is met. There are likely similar models 
elsewhere; I am sure, for example, that one needs to fit within the academic 
community to be part of SciELO. Research to gather information on what people 
are doing would be helpful, and regional or discipline-based approaches would 
make sense.

I question the need for a universal list, and for metrics-based approaches. 
Whether a contribution to our knowledge is sound and important and whether it 
has an immediate, short-term impact are two completely separate questions. My 
perspective is that research is needed on the impact of metrics-based 
approaches themselves.

The important questions for scholars in any discipline should be "what to read" 
and "where to publish", not any metric, traditional or alternative. I think we 
scholars should ourselves take responsibility for maintaining such lists and 
for recommending journals for indexing, rather than leaving these questions to 
the commercial sector.

Heather

On Oct 3, 2015, at 11:25 AM, "Éric Archambault" 
<eric.archamba...@science-metrix.com> wrote:

Hi list,

My previous attempt rapidly went off-topic, so I am making a second effort to 
put these questions to the list, in the hope of receiving more input on this 
important topic.

Back to our still largely unaddressed problem: I am re-inviting people to 
contribute ideas, keeping the focus away from individuals.

What is the best way to deal with the question of assessing the practices of 
publishers and journals (for subscription-only, hybrid, and open access 
journals)?
Should it be done through a negative list of journals/publishers with 
deceptive practices?
Should it be done through a positive list of best-practice journals?
Should it be done through an exhaustive list comprising all scholarly 
quality-reviewed journals ("peer-reviewed" would be somewhat restrictive, as 
different fields have different norms)?

Personally, I think the latter is the way to go. Firstly, there is currently no 
exhaustive list of reviewed scholarly journals. Though we sent astronauts to 
the moon close to half a century ago, we are still largely flying blind when it 
comes to evidence-based decision-making in science: no one can confidently say 
how many active journals there are the world over. We need an exhaustive list. 
Secondly, I think journals and publishers should not be examined in a 
dichotomous manner; we need several criteria to assess their practices and the 
quality of what they publish.
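To make the multi-criteria point concrete, here is a minimal sketch, in Python, 
of what one entry in such an exhaustive list might record. All field names and 
scales below are illustrative assumptions of mine, not a proposed standard; the 
point is simply several graded criteria rather than a single good/bad verdict.

from dataclasses import dataclass

@dataclass
class JournalRecord:
    """One entry in a hypothetical exhaustive journal list.

    All field names and scales are illustrative assumptions, not a
    proposed standard: the point is several graded criteria rather
    than a single good/bad flag.
    """
    title: str
    issn: str
    publisher: str
    review_model: str            # e.g. "double-blind", "open", "editorial"
    review_rigour: int           # assumed 0-5 scale, assessed qualitatively
    editorial_transparency: int  # assumed 0-5 scale
    scholarly_impact: float      # citation-based indicator of choice
    outreach: float              # "alternative metrics" style indicator
    notes: str = ""

# A journal is described along several axes, never reduced to a
# dichotomous "deceptive vs. legitimate" verdict.
example = JournalRecord(
    title="Journal of Illustrative Studies",  # fictional
    issn="0000-0000",
    publisher="Example Press",
    review_model="double-blind",
    review_rigour=4,
    editorial_transparency=3,
    scholarly_impact=1.8,
    outreach=0.6,
)

Whatever the actual schema, the key design choice is that no single boolean 
field carries the verdict.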

What metrics do we need to assess journal quality? More specifically:
-What metrics of scholarly impact should be used (that is, impact within the 
scholarly community)? Typically, the proprietary Thomson Journal Impact Factor 
has been the most widely used, even though it was designed around the same time 
we sent astronauts to the moon and has hardly been updated since (full 
disclosure: Science-Metrix is a client of Thomson Reuters’s Web of Science raw 
data). Competing indicators include Elsevier’s SNIP and SCImago’s SJR, both 
computed from Scopus data and available for free for a few years now, though 
with comparatively limited uptake (full disclosure: Science-Metrix is a client 
of Elsevier’s Scopus raw data). Note also that bibliometrics practices such as 
CWTS, iFQ and Science-Metrix compute their own versions of these journal impact 
indicators using WoS and/or Scopus data. (For reference, the standard JIF 
definition is sketched after this list.)
-What metrics of outreach should be used (e.g. use by the public, government, 
enterprises – typically these are covered by so-called “alternative metrics”)?
-What metrics of peer-review and quality-assessment effectiveness should be 
used?
-What other metrics would be relevant?
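For readers less familiar with these indicators, a brief note. The classic 
Journal Impact Factor uses a two-year citation window; its standard published 
definition, written out informally, is:

    JIF(Y) = A / B

where A is the number of citations received in year Y by items the journal 
published in years Y-1 and Y-2, and B is the number of "citable items" (mainly 
articles and reviews) the journal published in Y-1 and Y-2. SNIP and SJR start 
from similar citation counts but, roughly speaking, SNIP normalizes by the 
citation potential of the journal's field, while SJR weights citations by the 
prestige of the citing journal.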

Perhaps, before addressing the questions above, we should examine two more 
basic ones:

Why do we need such a list?
What are the use cases for such a list?

The following “how” questions are very important too:

-How should such a list be produced?
-How will it be sustainable?

Finally, the “who” question:
Who should contribute to the list?
   -A Wikipedia-style crowdsourced effort?
   -Should only experts be allowed to contribute? Librarians? 
Scholars? Anyone?
   -A properly funded not-for-profit entity?
   -Corporate entities vying for a large market share?

Thank you for your input,

Éric




Eric Archambault, Ph.D.
President and CEO | Président-directeur général
Science-Metrix & 1science
1335, Mont-Royal E
Montréal, QC  H2J 1Y6 - Canada

E-mail: eric.archamba...@science-metrix.com
Web:    science-metrix.com
        1science.com

_______________________________________________
GOAL mailing list
GOAL@eprints.org
http://mailman.ecs.soton.ac.uk/mailman/listinfo/goal
