MODEL CHECKING CONTEST 2020 - (2/2) - CALL FOR TOOLS

GOALS

   The Model Checking Contest (MCC) is a yearly event that assesses existing
   verification tools for concurrent systems on a set of models (i.e., 
   benchmarks) proposed by the scientific community. All tools are compared on 
   the same benchmarks and using the same computing platform, so that a fair 
   comparison can be made, unlike in most scientific publications, where
   different benchmarks are executed on different platforms.

   Another goal of the Model Checking Contest is to infer conclusions about
   the respective efficiency of verification techniques for Petri nets
   (decision diagrams, partial orders, symmetries, etc.) depending on the
   particular characteristics of the models under analysis. Through this
   feedback on tool efficiency, we aim to identify which techniques can best
   tackle a given class of models.

   Finally, the Model Checking Contest seeks to be a friendly place where
   developers meet, exchange, and collaborate to enhance their verification
   tools.

   The Model Checking Contest is organized in three steps: 
   - the Call for Models (http://mcc.lip6.fr/cfm.php),
   - the present Call for Tools (http://mcc.lip6.fr/cft.php),
   - and the Contest itself.

CALL FOR TOOLS

   For the 2020 edition, we kindly ask the developers of verification tools for
   concurrent systems to participate in the MCC competition. Each submitted
   tool will be assessed on both the accumulated collection of MCC models
   (these are the "known" models, see http://mcc.lip6.fr/models.php) and on the
   new models selected during the 2020 edition (these are the "surprise"
   models, see http://mcc.lip6.fr/cfm.php).

   The benchmarks on which tools will be assessed are colored Petri nets
   and/or P/T nets. Some P/T nets are provided with additional information 
   giving a hierarchical decomposition into sequential machines (these models
   are called Nested-Units Petri nets - see http://mcc.lip6.fr/nupn.php for
   more information): tools may wish to exploit this information to increase
   performance and scalability.
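
   As an illustration, the unit decomposition is embedded in the PNML file as
   a tool-specific section. The sketch below (in Python, standard library
   only) shows how a tool might extract it; the element and attribute names
   ("toolspecific", "unit", "places", "subunits") are assumptions based on
   the NUPN page and should be checked against http://mcc.lip6.fr/nupn.php.

      import xml.etree.ElementTree as ET

      def local(tag):
          return tag.rsplit("}", 1)[-1]   # drop any XML namespace prefix

      def read_nupn_units(pnml_path):
          """Collect the NUPN unit hierarchy declared in a PNML file."""
          units = {}
          for elem in ET.parse(pnml_path).iter():
              # NUPN data lives in a tool-specific section tagged "nupn".
              if local(elem.tag) == "toolspecific" and elem.get("tool") == "nupn":
                  for unit in elem.iter():
                      if local(unit.tag) == "unit":
                          # Each unit lists its own places and its subunits.
                          fields = {local(c.tag): (c.text or "").split()
                                    for c in unit}
                          units[unit.get("id")] = fields
          return units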

   Each tool may compete in one or more categories of verification tasks, such
   as reachability analysis, evaluation of CTL formulas, evaluation of LTL
   formulas, etc.

   Tools must be submitted as binary executables. Each submitted tool will be
   run by the MCC organizers in a virtual machine (typically configured with up
   to 4 cores, 16 Gbytes RAM per core, and a time confinement of 30 or 60
   minutes per run, i.e., per instance of a model). Last year, more than 1800
   days of CPU time were invested in the MCC competition. The MCC relies
   on BenchKit (https://github.com/fkordon/BenchKit), a dedicated execution
   environment for monitoring and collecting data related to the execution of
   processes.
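
   For orientation, past editions had each tool expose a small launcher that
   BenchKit invokes inside the virtual machine, with the examination and
   instance passed through environment variables and results printed on
   standard output. The Python sketch below illustrates that shape; the
   variable names (BK_EXAMINATION, BK_INPUT) and the FORMULA/CANNOT_COMPUTE
   output conventions are modeled on past editions, and the Tool Submission
   Kit remains the authoritative reference.

      import os
      import sys

      def check_formula(model_path, index):
          # Hypothetical placeholder: plug the actual decision procedure
          # of your model checker in here.
          return False

      def main():
          examination = os.environ.get("BK_EXAMINATION", "StateSpace")
          instance = os.environ.get("BK_INPUT", "unknown")

          # BenchKit runs the tool in a directory containing the model.
          if not os.path.exists("model.pnml"):
              sys.exit("no model.pnml in the working directory")

          if examination == "ReachabilityCardinality":
              # One verdict line per formula (16 per examination in
              # past editions), collected by BenchKit from stdout.
              for i in range(16):
                  verdict = check_formula("model.pnml", i)
                  print("FORMULA {}-ReachabilityCardinality-{:02d} {} "
                        "TECHNIQUES EXPLICIT".format(
                            instance, i, "TRUE" if verdict else "FALSE"))
          else:
              # Convention for examinations the tool does not support.
              print("CANNOT_COMPUTE")

      if __name__ == "__main__":
          main()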

   By submitting a tool, you explicitly allow the organizers of the Model
   Checking Contest to publish on the MCC web site the binary
   executable of this tool, so that experiments can be reproduced by others
   after the contest. Detailed information is available from
   http://mcc.lip6.fr/rules.php.

   Note: to submit a tool, it is not required to have submitted any model
   to the MCC Call for Models. However, it is strongly recommended to
   pre-register your tool using the dedicated form before March 1, 2020:
   http://mcc.lip6.fr/registration.php. You will then be kept informed of
   the progress of the contest. The sooner, the better.

IMPORTANT DATES

   - January 15, 2020: publication of the present Call for Tools.

   - January 25, 2020: publication of the updated 2020 contest rules at 
     http://mcc.lip6.fr/rules.php.

   - January 30, 2020: publication of the Tool Submission Kit, which is
     available from http://mcc.lip6.fr/archives/ToolSubmissionKit.tar.gz.

   - March 1, 2020: deadline for tool pre-registration. If you plan to submit a
     tool to the contest, please fill in the pre-registration form available
     from http://mcc.lip6.fr/registration.php.

   - April 15, 2020: deadline for tool submission.

   - May 1, 2020: early feedback to tool submitters, following the preliminary
     qualification runs, which are performed using a few small instances of the
     "known" models.

   - June 2020: further feedback to tool submitters, following the
     competition runs.

   - June 23, 2020: official announcement of the MCC'2020 results within the
     context of Petri Nets 2020 (Paris, France).

COMMITTEES

        General Chairs
           Didier Buchs - Univ. Geneva, Switzerland
           Fabrice Kordon - Sorbonne Université/LIP6, France

        Model Board
           Hubert Garavel - Inria/LIG, France
           Lom Messan Hillah - Univ. Paris Nanterre, France
           Fabrice Kordon - Sorbonne Université, France

        Formula Board
           Loïg Jezequel - Univ. Nantes, France
           Emmanuel Paviot-Adet - Univ. Paris 5, France
