***************************************************************************************************

Workshop on Responsible Decision Making in Dynamic Environments

Date: July 23, 2022

Location: Baltimore, USA (co-located with ICML)

https://responsibledecisionmaking.github.io/

Organizers: V. Do, T. Joachims, A. Lazaric, J. Pineau, M. Pirotta, H.
Satija, N. Usunier

***************************************************************************************************

1. Paper Submission

We invite submissions from the entire spectrum of responsible
decision-making in dynamic environments, from theory to practice. In
particular, we encourage submissions on the following topics:

   - Fairness,
   - Privacy and security,
   - Robustness,
   - Conservative and safe algorithms,
   - Explainability and interpretability.



Authors can submit a 4-page paper (excluding references) in ICML format,
which will be reviewed by the program committee. Papers can present new
work or summarize recent work of the author(s). Papers submitted to
NeurIPS are welcome. All accepted papers will be considered for the poster
sessions.

Outstanding papers will also be considered for a 15-minute oral
presentation.

Submission deadline: 31 May 2022 (Anywhere on Earth)

Notification: 13 June 2022



Page limit: 4 pages excluding references.

Paper format: ICML format, anonymous.

Submission website: https://cmt3.research.microsoft.com/RDMDE2022

2. Description

Algorithmic decision-making systems are increasingly used in sensitive
applications such as advertising, resume reviewing, employment, credit
lending, policing, criminal justice, and beyond. The long-term promise of
these approaches is to automate, augment, or eventually improve on human
decisions, which can be biased or unfair, by leveraging machine learning
to make decisions supported by historical data. Unfortunately, a growing
body of evidence shows that current machine learning technology is
vulnerable to privacy and security attacks, lacks interpretability, and
can reproduce (or even exacerbate) historical biases or discriminatory
behavior against certain social groups.

Most of the literature on building socially responsible algorithmic
decision-making systems focuses on a static scenario in which algorithmic
decisions do not change the data distribution. However, real-world
applications involve nonstationarities and feedback loops that must be
taken into account to measure and mitigate unfairness in the long term.
These feedback loops involve the learning process, which may be biased
because of insufficient exploration, or changes in the environment's
dynamics due to strategic responses of the various stakeholders. From a
machine learning perspective, these sequential processes are primarily
studied through counterfactual analysis and reinforcement learning.

The purpose of this workshop is to bring together researchers from both
industry and academia working on the full spectrum of responsible
decision-making in dynamic environments, from theory to practice. In
particular, we encourage submissions on the following topics:

   - Fairness,
   - Privacy and security,
   - Robustness,
   - Conservative and safe algorithms,
   - Explainability and interpretability.



3. Organizing Committee



Virginie Do (Université Paris Dauphine – PSL and Meta AI)

Thorsten Joachims (Cornell University)

Alessandro Lazaric (Meta AI)

Joelle Pineau (McGill University and Meta AI)

Matteo Pirotta (Meta AI)

Harsh Satija (McGill University and Mila, Montreal)

Nicolas Usunier (Meta AI)



For more information, see https://responsibledecisionmaking.github.io/