MODEL CHECKING CONTEST 2018 - (2/2) - CALL FOR TOOLS

GOALS

   The Model Checking Contest (MCC) is a yearly event that assesses existing
   verification tools for concurrent systems on a set of models (i.e.,
   benchmarks) proposed by the scientific community. All tools are compared
   on the same benchmarks and on the same computing platform, so that a fair
   comparison can be made, unlike in most scientific publications, where
   different benchmarks are executed on different platforms.

   Another goal of the Model Checking Contest is to draw conclusions about
   the respective efficiency of verification techniques for Petri nets
   (decision diagrams, partial orders, symmetries, etc.) depending on the
   particular characteristics of the models under analysis. Through this
   feedback on tool efficiency, we aim to identify which techniques best
   tackle a given class of models.

   Finally, the Model Checking Contest seeks to be a friendly place where
   developers meet, exchange, and collaborate to enhance their verification
   tools.

   The Model Checking Contest is organized in three steps: 
   - the Call for Models (http://mcc.lip6.fr/cfm.php),
   - the present Call for Tools (http://mcc.lip6.fr/cft.php),
   - and the Contest itself.

CALL FOR TOOLS

   For the 2018 edition, we kindly ask the developers of verification tools
   for concurrent systems to participate in the MCC competition. Each tool
   will be assessed both on the accumulated collection of MCC models (these
   are the "known" models, see http://mcc.lip6.fr/models.php) and on the new
   models selected during the 2018 edition (these are the "surprise" models,
   see http://mcc.lip6.fr/cfm.php).

   The benchmarks on which tools will be assessed are colored Petri nets
   and/or P/T nets. Some P/T nets are provided with additional information
   giving a hierarchical decomposition into sequential machines (these models
   are called Nested-Unit Petri nets - see http://mcc.lip6.fr/nupn.php for
   more information): tools may wish to exploit this information to increase
   performance and scalability.
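
   As a rough illustration only, the Python sketch below checks whether a
   PNML file carries such a decomposition. It assumes that the NUPN data is
   embedded as a PNML <toolspecific> section whose tool attribute is "nupn"
   (please check http://mcc.lip6.fr/nupn.php for the authoritative schema);
   the file name used here is purely hypothetical.

      # Hedged sketch: detect a NUPN tool-specific section in a PNML file.
      # Assumption: the section is a <toolspecific> element with tool="nupn",
      # as described on http://mcc.lip6.fr/nupn.php.
      import sys
      import xml.etree.ElementTree as ET

      def has_nupn_info(pnml_path):
          """Return True if the PNML file contains a NUPN tool-specific section."""
          tree = ET.parse(pnml_path)
          # PNML elements are namespaced; match on the local tag name so the
          # check does not depend on the exact namespace URI.
          for elem in tree.iter():
              if elem.tag.endswith("toolspecific") and elem.get("tool") == "nupn":
                  return True
          return False

      if __name__ == "__main__":
          path = sys.argv[1] if len(sys.argv) > 1 else "model.pnml"  # hypothetical name
          print("NUPN decomposition found" if has_nupn_info(path) else "plain P/T net")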

   Each tool may compete in one or more categories of problems, such as
   reachability analysis, evaluation of CTL formulas, evaluation of LTL
   formulas, etc.

   Tools have to be submitted in binary form. Each submitted tool will be
   run by the MCC organizers in a virtual machine (typically configured
   with 4 cores, 4 GB of RAM per core, and a time confinement of 60 minutes
   per run, i.e., per instance of a model). Last year, more than 1500 days
   of CPU time were invested in the MCC competition. The MCC relies on
   BenchKit (https://github.com/fkordon/BenchKit), a dedicated execution
   environment for monitoring the execution of processes and gathering data.
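
   To give a rough idea of how a tool is driven inside the virtual machine,
   here is a hedged Python sketch of a wrapper script. It assumes the
   interface distributed in the Tool Submission Kit, where BenchKit passes
   the examination and the model instance through environment variables
   (e.g., BK_EXAMINATION and BK_INPUT) and reads verdicts from standard
   output; the variable names, the "mytool" binary, and the fallback output
   line are assumptions to be checked against the kit.

      # Hedged sketch of a tool wrapper for BenchKit; the exact interface is
      # defined by the Tool Submission Kit and should be checked there.
      import os
      import subprocess
      import sys

      examination = os.environ.get("BK_EXAMINATION", "StateSpace")  # assumed variable
      instance = os.environ.get("BK_INPUT", "unknown-instance")     # assumed variable

      # "mytool" is a hypothetical solver binary shipped inside the VM image.
      cmd = ["./mytool", "--examination", examination, "model.pnml"]

      try:
          # Stay below the 60-minute confinement enforced per run.
          result = subprocess.run(cmd, capture_output=True, text=True, timeout=3500)
          # Forward the tool's verdicts: BenchKit collects standard output.
          sys.stdout.write(result.stdout)
      except subprocess.TimeoutExpired:
          print(f"CANNOT_COMPUTE {examination} on {instance}")  # illustrative placeholder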

   By submitting a tool, you explicitly allow the organizers of the Model 
   Checking Contest to publish this tool in binary form on the MCC web site,
   so that experiments can be reproduced by others after the contest.
   Detailed information is available from http://mcc.lip6.fr/rules.php.

   Note: to submit a tool, it is not required to have submitted any model
   to the MCC Call for Models. However, it is strongly recommended to
   pre-register your tool using the dedicated form before February 1, 2018:
   http://mcc.lip6.fr/registration.php. You will then be kept informed of
   the progress of the contest. The sooner, the better.

IMPORTANT DATES

        Jan. 25, 2018: publication of the updated 2018 contest rules at
                http://mcc.lip6.fr/rules.php

        Jan. 25, 2018: publication of the Tool Submission Kit, which is
                available from
                http://mcc.lip6.fr/archives/ToolSubmissionKit.tar.gz

        Feb. 1, 2018: deadline for tool pre-registration
                If you plan to submit a tool to the contest, please fill in
                the pre-registration form available from
                http://mcc.lip6.fr/registration.php

        Apr. 15, 2018: deadline for tool submission

        Apr. 30, 2018: early feedback to tool submitters, following the
                preliminary qualification runs, which are performed using
                a few small instances of the "known" models.

        June 1, 2018: more feedback to tool submitters, following the
                competition runs

        June 26, 2018: official announcement of MCC'2018 results
                during the Petri Net conference (Bratislava, Slovakia)

COMMITTEES

        General Chairs
           Didier Buchs - Univ. Geneva, Switzerland
           Fabrice Kordon - UPMC, France

        Execution Monitoring Board
           Francis Hulin-Hubard - CNRS and ENS de Cachan, France
           Fabrice Kordon - UPMC, France

        Tool Board
           Marco Beccuti - Univ. Torino, Italy
           Monika Heiner - Univ. Cottbus, Germany
           Jeroen Meijer - Univ. Twente, Netherlands
           Franck Pommereau - Univ. Evry, France
           Christian Rohr - Univ. Cottbus, Germany
           Jiri Srba - Univ. Aalborg, Denmark

--------------------------------------------------------------------------------------
Fabrice Kordon          
Sorbonne Université
Campus Pierre & Marie Curie
LIP6/MoVe, Bureau 26/25-212
4 place Jussieu, 75252 Paris Cedex 05
http://lip6.fr/Fabrice.Kordon/