On 2014-07-29 23:35, Ivan Shmakov wrote:
Sarven Capadisli <i...@csarven.ca> writes:
On 2014-07-29 09:43, Andrea Perego wrote:

  >> You might consider including in your call an explicit reference to
  >> nanopublications [1] as an example of how to address point (5).

  >> About source code, there's a project, SciForge [2], working on the
  >> idea of making scientific software citable.

  >> My two cents...

  >> [1] http://nanopub.org/
  >> [2] http://www.gfz-potsdam.de/en/research/organizational-units/technology-transfer-centres/cegit/projects/sciforge/

  > Thanks for the heads-up, Andrea.  The article on my site has an open
  > comment system, which is intended to host an open discussion and
  > suggestions for others (like the ones you've proposed).  Not that
  > I'm opposed to continuing the discussion here, but you are welcome to
  > contribute there so that the next person who comes along can get
  > hold of that information.

        Not that I have much to say on the subject itself, but I’d like
        to note that, to my mind, a major issue with “on-site” comments
        is that there is rarely any standard way to “mirror” them
        somewhere else.

        Alas, Web sites come and go (and the Internet Archive cannot
        always be relied upon), while mailing list messages survive in
        the subscribers’ email archives and, at times, can be
        downloaded via NNTP from Gmane just as well.

[…]


Let's contrast "on-site comments" with:

1. Blind reviews.

2. Reviews that do not even get to see the light of day.

So, I see feedback of all sorts (including "reviews") with people's names attached, which anyone can read and which fosters discussion, as an improvement over the current state of things.

I would even favour reviews held out in the open on a public mailing list.

But what we see now is no reviewer names and rejections (which account for by far the greatest portion of submitted research). This usually leads authors to hold back their work for further improvements (even if it was good work to begin with but happened not to meet some artificial "cut-off" point) or to retry at other venues. I have a hard time believing that rejections from 1-3 anonymous reviewers are better than putting the work out there to get a broader sense of its quality. Those same reviews could still be conducted as "on-site comments" by those same reviewers, in addition to everyone else who is interested in the work.

Anyway, there are venues that already do this out in the open, e.g., the Semantic Web Journal, and I think that's great.

Archiving reviews/comments is an important, but orthogonal, issue here.

-Sarven
http://csarven.ca/#i
