Hi,

I agree with you.
When I hear about "research" I tend to think of Life Sciences research (my 
bias). In that context, I see a lot of advocacy for reproducibility of 
research. But when I look at what people actually do, for most of them data 
analysis is only one part of the process. So I was wondering, in this context, 
how relevant "publication" is to the reproducibility of results.
Probably, in proportion, not much.

Otherwise you are right. In more computational/engineering-oriented areas, 
where reproducibility depends only on information... the information should 
be there.

best,
Andrea

On 29 Jul 2014, at 02:51, Sarven Capadisli <i...@csarven.ca> wrote:

> On 2014-07-29 00:45, Andrea Splendiani wrote:
>> while I agree with you all, I was thinking: is the lack of reproducibility 
>> an issue due to the way results are represented ?
>> Apart for some fields (e.g.: bioinformatics), materials, samples, experience 
>> are probably more relevant and much harder to reproduce.
> 
> I think that depends on who we ask and how much they care about 
> reproducibility.
> 
> *IMHO*, the SW/LD research scene is not exactly hard science. It leans more 
> toward engineering and development than the pure scientific method. The 
> majority of the research coming out of this area focuses on showing 
> positive and useful results, which appears to materialize in ways like:
> 
> * My code can beat up your code.
> * We have something that is ground breaking.
> * We have some positive results, and came up with a research problem.
> 
> How often do you come across negative results in the proceedings, i.e., 
> some *exploration* that ended up at a dead end?
> 
> It is trivial to find the evaluation section of a paper replaced with 
> benchmarks. Kjetil pointed at this issue eloquently at ISWC 2013: 
> http://folk.uio.no/kjekje/2013/iswc.pdf , emphasizing the need for careful 
> design of experiments where required.
> 
> In other cases, one practically has to chase the authors for 1) a copy of 
> the original paper, 2) the tooling or whatever they built, or 3) the data 
> they used or produced. It is generally assumed that if some text is in a 
> PDF, and gets a go-ahead from a few reviewers, it passes as science. 
> Paper? Code? Data? Environment? Send me an email, please.
> 
> I am generalizing the situation, of course. So, please put your pitchforks 
> down. There is a lot of great work and solid science conducted by the SW/LD 
> community. But let's not take our eyes off the signal-to-noise ratio.
> 
> So, yes, making efforts toward reproducibility is important to redeem 
> ourselves. If you think that reproducibility in some other fields is more 
> relevant and harder, well, then I think we should at least be able to 
> manage things on our own end, don't you think?
> 
> The benefit of having the foundations for reproducibility via LD is that we 
> make it possible to query our research process and output, and introduce 
> the possibility of comparing atomic parts of the experiments, or even 
> detecting and fixing issues.
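> 
> To make that concrete, here is a minimal, hypothetical sketch: if 
> experiment runs were described with the W3C PROV-O vocabulary, a SPARQL 
> query could list which dataset each run's inputs came from, so two runs 
> become directly comparable. (The modeling below is an assumption, not a 
> fixed standard for experiments.)
> 
> ```sparql
> # Hypothetical sketch: assumes each run is a prov:Activity and its
> # inputs are entities linked via prov:used / prov:wasDerivedFrom.
> PREFIX prov: <http://www.w3.org/ns/prov#>
> 
> SELECT ?run ?input ?dataset
> WHERE {
>   ?run   a prov:Activity ;
>          prov:used ?input .
>   ?input prov:wasDerivedFrom ?dataset .
> }
> ```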
> 
> If we can't handle the technicality that goes into creating "linked 
> research", how can we expect the rest of the world to get on board? And we 
> are not dealing with a technical problem here. It is blind obedience and 
> laziness. There is absolutely nothing stopping us from playing along with 
> the archaic industry models and publishing methods temporarily (for a 
> number of good and valid reasons), if and only if we first take care of 
> ourselves and have complete control over things. Publish on your end, pass 
> a stupid fixed copy to the conference/publisher. Then see how quickly the 
> research "paper" landscape changes.
> 
> As I've stated at the beginning, it all depends on who we ask and how much 
> they care. Do we? If so, what are we going to do about it?
> 
> -Sarven
> http://csarven.ca/#i
> 

