Dear colleagues,

We are pleased to announce RepSys, a workshop on Reproducibility and Replication to be held at ACM RecSys 2013. The workshop aims to provide an opportunity to discuss the limitations and challenges of experimental reproducibility and replication.

Hope you find it interesting.

Regards,
Alejandro

[Apologies if you receive this more than once]


===============================================================================

                            ACM RecSys Workshop on

Reproducibility and Replication in Recommender Systems Evaluation - RepSys 2013

              7th ACM Conference on Recommender Systems (RecSys 2013)

                    Hong Kong, China, 12 or 16 October 2013

                          http://repsys.project.cwi.nl

===============================================================================


*  Submission deadline: 22 July 2013 *


== Scope ==


Experiment replication and reproduction are key requirements of empirical research methodology, and an important open issue in the field of Recommender Systems. When an experiment is repeated by a different researcher and exactly the same result is obtained, we say the experiment has been replicated. When the results are not exactly the same but the conclusions are compatible with the prior ones, we have a reproduction of the experiment. Reproducibility and replication involve recommendation algorithm implementations, experimental protocols, and evaluation metrics. While the problem of reproducibility and replication has been recognized in the Recommender Systems community, the need for a clear solution remains largely unmet, which motivates the present workshop.


== Topics ==


We invite the submission of papers reporting original research, studies, advances, experiences, or work in progress within the scope of reproducibility and replication in Recommender Systems evaluation. Papers explicitly dealing with the replication of previously published experimental conditions, algorithms, or metrics, and the resulting analysis, are encouraged. In particular, we seek discussions of the difficulties authors may encounter in this process, along with their limitations or successes in reproducing the original results.


The topics the workshop seeks to address include, but are not limited to, the following:

* Limitations and challenges of experimental reproducibility and replication

* Reproducible experimental design

* Replicability of algorithms

* Standardization of metrics: definition and computation protocols

* Evaluation software: frameworks, utilities, services

* Reproducibility in user-centric studies

* Datasets and benchmarks

* Recommender software reuse

* Replication of already published work

* Reproducibility within and across domains and organizations

* Reproducibility and replication guidelines


== Submission ==

Two submission types are accepted: long papers of up to 8 pages, and short papers of up to 4 pages. Papers will be evaluated for their originality, significance of contribution, soundness, clarity, and overall quality. The interest of contributions will be assessed in terms of technical and scientific findings, contribution to the knowledge and understanding of the problem, methodological advances, or practical value. Contributions focusing specifically on repeatability and reproducibility of algorithm implementations, evaluation frameworks, and/or evaluation practice will also be welcomed and valued.


All submissions shall adhere to the standard ACM SIG proceedings format:
http://www.acm.org/sigs/publications/proceedings-templates.


Submissions shall be sent as a PDF file through the online submission system, now open at: https://www.easychair.org/conferences/?conf=repsys2013.


== Important dates ==


* Paper submission deadline: 22 July 2013

* Notification: 16 August 2013

* Camera-ready version due: 30 August 2013


== Organizers ==


* Alejandro Bellogín, Centrum Wiskunde & Informatica, The Netherlands

* Pablo Castells, Universidad Autónoma de Madrid, Spain

* Alan Said, Centrum Wiskunde & Informatica, The Netherlands

* Domonkos Tikk, Gravity R&D, Hungary


== Programme Committee ==


* Xavier Amatriain, Netflix, USA

* Linas Baltrunas, Telefonica Research, Spain

* Marcel Blattner, University of Applied Sciences, Switzerland

* Iván Cantador, Universidad Autónoma de Madrid, Spain

* Ed Chi, Google, USA

* Arjen de Vries, Centrum Wiskunde & Informatica, The Netherlands

* Juan Manuel Fernández, Universidad de Granada, Spain

* Zeno Gantner, Nokia, Germany

* Pankaj Gupta, Twitter, USA

* Ido Guy, Google, Israel

* Andreas Hotho, University of Würzburg, Germany

* Juan Huete, Universidad de Granada, Spain

* Kris Jack, Mendeley, UK

* Dietmar Jannach, TU Dortmund, Germany

* Jaap Kamps, University of Amsterdam, The Netherlands

* Alexandros Karatzoglou, Telefonica Research, Spain

* Bart Knijnenburg, University of California, Irvine, USA

* Jérôme Picault, Bell Labs, Alcatel-Lucent, France

* Till Plumbaum, TU Berlin, Germany

* Daniele Quercia, Yahoo!, Spain

* Filip Radlinski, Microsoft, Canada

* Yue Shi, TU Delft, The Netherlands

* Fabrizio Silvestri, Consiglio Nazionale delle Ricerche, Italy

* Harald Steck, Netflix, USA

* David Vallet, NICTA, Australia

* Jun Wang, University College London, UK

* Xiaoxue Zhao, University College London, UK
