Dear Colleague,

[Apologies for multiple postings.]

We are pleased to announce that the deadline for submitting solvers to the  
2016 UAI Probabilistic Inference Evaluation has been extended to October 31, 
2016.

In this evaluation, we will assess inference algorithms for discrete
probabilistic graphical models. The evaluation/competition has been held at
UAI every two years since 2006.
Details about how to submit inference solvers/software, and about the input
and output formats, can be found on the evaluation webpage:
http://www.hlt.utdallas.edu/~vgogate/uai16-evaluation/

The evaluation is already underway; you can see the initial results at:
http://www.hlt.utdallas.edu/~vgogate/uai16-evaluation/tuning.html

This year we will evaluate solvers for the following tasks:
(1)  Probability of Evidence or Partition Function Computation (PR task or 
sum-product task)
(2)  Marginal Probability Estimation (MAR task or ratio-sum-product task)
(3)  Maximum a Posteriori Estimation (MAP task or max-product task)
(4)  Marginal Maximum a Posteriori Estimation (MMAP task or max-sum-product 
task)
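
To make the four tasks concrete, here is a small illustrative sketch (not
part of the evaluation, and not in any of the competition file formats): a
made-up two-variable binary model p(A, B) proportional to f1(A) * f2(A, B),
with all four quantities computed by brute-force enumeration. The factor
values and variable names are assumptions chosen only for illustration.

```python
import itertools

# Toy factors over binary variables A and B (states 0/1); values are made up.
f1 = {0: 0.6, 1: 0.4}                       # unary factor on A
f2 = {(0, 0): 0.9, (0, 1): 0.1,
      (1, 0): 0.2, (1, 1): 0.8}             # pairwise factor on (A, B)

def weight(a, b):
    """Unnormalized probability of the joint assignment (a, b)."""
    return f1[a] * f2[(a, b)]

# PR (sum-product): partition function Z = sum over all joint assignments.
Z = sum(weight(a, b) for a, b in itertools.product((0, 1), repeat=2))

# MAR (ratio-sum-product): marginal p(A = a) = (1/Z) * sum_b weight(a, b).
marg_A = {a: sum(weight(a, b) for b in (0, 1)) / Z for a in (0, 1)}

# MAP (max-product): the single most probable joint assignment.
map_ab = max(itertools.product((0, 1), repeat=2), key=lambda ab: weight(*ab))

# MMAP (max-sum-product): maximize over A after summing out B.
mmap_a = max((0, 1), key=lambda a: sum(weight(a, b) for b in (0, 1)))
```

Real solvers, of course, must scale far beyond enumeration; this only fixes
the semantics of the four query types.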

New this year:
This year we are introducing two additional file formats that allow 
specification of sparse graphical models in addition to the conventional UAI 
format:
(1)  Sparse UAI format (see 
http://www.hlt.utdallas.edu/~vgogate/uai16-evaluation/sparseformat.html)
(2)  Binary Clausal format (see 
http://www.hlt.utdallas.edu/~vgogate/uai16-evaluation/binaryformat.html)

Thus, we will have 12 evaluation categories (4 tasks x 3 formats).

Notable features of the evaluation:

  *   You have the option of keeping your solver name(s) anonymous. Since this 
is an evaluation and not a competition, there will be no winners or losers. 
However, if your solver/software is a top performer in a number of benchmark 
categories, we will contact you and disclose your identity only with your 
consent.
  *   We strongly encourage submission of solvers with strong theoretical 
guarantees (e.g., relative-error guarantees, lower-bound guarantees, 
epsilon-delta relative-error guarantees). We will categorize such solvers 
appropriately.
  *   We have divided the benchmark graphical models into two sets: 
training/tuning instances and test instances. You will be able to see the 
results of your solver on the training/tuning instances within 24-72 hours 
of submission. Results on the test instances will be made public only after 
the submission deadline (October 31).

Important dates/deadlines:
Solver submission deadline *NEW*:  OCTOBER 31, 2016

For more information and questions, please contact me, Vibhav Gogate 
([email protected]<mailto:[email protected]>).

Cheers,
Vibhav Gogate
Assistant Professor
The University of Texas at Dallas
http://www.hlt.utdallas.edu/~vgogate/index.html
_______________________________________________
uai mailing list
[email protected]
https://secure.engr.oregonstate.edu/mailman/listinfo/uai
