Please accept our apologies if you receive multiple copies of this CFP. 

*************************************************************************************************************************************
 

The 5th International Workshop on EXplainable and TRAnsparent AI and 
Multi-Agent Systems (EXTRAAMAS) 

https://extraamas.ehealth.hevs.ch/index.html 
  

in conjunction with AAMAS 2023, London, 29 May - 2 June 2023 

https://aamas2023.soton.ac.uk/ 
  

**************************************************************************************************************************************
 
Aim and Scope 
============== 

Running since 2019, EXTRAAMAS is a well-established workshop and forum on 
EXplainable and TRAnsparent AI and Multi-Agent Systems. It aims to discuss and 
disseminate research on explainable artificial intelligence, with a particular 
focus on intra/inter-agent explainability and cross-disciplinary perspectives. 
In its 5th edition, EXTRAAMAS identifies four focus topics with the 
ultimate goal of strengthening cutting-edge foundational and applied research. 
These come in addition to the workshop's main theme, which remains XAI 
fundamentals. The four tracks for this year are: 
- Track 1: XAI in symbolic and subsymbolic AI: the “AI dichotomy” separating 
symbolic (also known as classical) AI from connectionist AI has persisted for 
more than seven decades. Nevertheless, the advent of explainable AI has 
accelerated and intensified efforts to bridge this gap, since providing 
faithful explanations of black-box machine learning techniques necessarily 
means combining symbolic and subsymbolic AI. This track discusses recent work 
on this active topic. 
Track chair: Dr. Giovanni Ciatto, University of Bologna, Italy. 

- Track 2: XAI in negotiation and conflict resolution: conflict resolution 
(e.g., agent-based negotiation, voting, argumentation) has been a thriving 
domain within the MAS community since its foundation. However, as agents and 
the problems they tackle become more complex, incorporating explainability 
becomes vital to assess the usefulness of the supposedly conflict-free 
solution. This is the main topic of this track, with a special focus on MAS 
negotiation and explainability. 
Track chair: Dr. Reyhan Aydoğan, Ozyegin University, Turkey 

- Track 3: Explainable Robots and Practical Applications: explainable robots 
have been one of the main topics of XAI for several years. This track seeks 
the latest work focusing notably on (i) the impact of embodiment on 
explanation, (ii) explainability for remote robots, (iii) how humans receive 
and perceive explanations given by robots, and (iv) practical XAI applications 
and simulations. 
Track chair: Dr. Yazan Mualla, UTBM, France 

- Track 4: XAI in Law and Ethics: complying with regulation (e.g., the GDPR) 
is among the main objectives for XAI. The right to explanation is key to 
ensuring the transparency of ever more complex AI systems dealing with a 
multitude of sensitive applications. This track discusses work related to 
explainability in AI ethics, machine ethics, and AI & Law. 
Track chair: Rachele Carli, University of Bologna, Italy 

This year EXTRAAMAS will feature a keynote entitled “Untrustworthy AI”, 
delivered by Jeremy Pitt, Professor of Intelligent and Self-organizing Systems 
in the Department of Electrical and Electronic Engineering at Imperial College 
London (UK). 
Moreover, EXTRAAMAS will offer a tutorial on reusable explainable technologies 
given by Dr. Giovanni Ciatto and Mr. Victor Hugo Contreras. 

All accepted papers are eligible for publication in the Springer Lecture Notes 
in Artificial Intelligence (LNAI) proceedings (after revisions have been 
applied). 


Important Dates 
================ 

Paper submission: 10/03/2023 (extended) 
Notification of acceptance: 25/03/2023 
Workshop: 29/05/2023 
Camera-ready (for Springer post-proceedings): 10/06/2023 
Submission link: 
https://easychair.org/conferences/?conf=extraamas2023 
  


EXTRAAMAS Tracks and Topics 
=========================== 

# Track 1: XAI in symbolic and subsymbolic AI 
- XAI for machine learning 
- Explainable neural networks 
- Symbolic knowledge injection or extraction 
- Neuro-symbolic computation 
- Computational logic for XAI 
- Multi-agent architectures for XAI 
- Surrogate models for sub-symbolic predictors 
- Explainable planning (XAIP) 
- XAI evaluation 

# Track 2: XAI in negotiation and conflict resolution 
- Explainable conflict resolution techniques/frameworks 
- Explainable negotiation protocols and strategies 
- Explainable recommendation systems 
- Trustworthy voting mechanisms 
- Argumentation for explaining the process itself 
- Argumentation for explaining and supporting the potential outcomes 
- Explainable user/agent profiling (e.g., learning users' preferences or 
strategies) 
- User studies and assessment of the aforementioned approaches 
- Applications (virtual coaches, robots, IoT) 

# Track 3: Explainable Robots and Practical Applications 
- Explainable remote robots 
- Explainability and embodiment 
- Human-robot collaboration 
- Practical XAI applications 
- Emotions in XAI 
- Perception in XAI 
- Human-Computer Interaction (HCI) studies 
- Communication and reception of explanations 

# Track 4: (X)AI in Law and Ethics 
- XAI in AI & Law 
- Fair AI 
- XAI & machine ethics 
- Bias reduction 
- Deception and XAI 
- Persuasive technologies and XAI 
- Nudging and XAI 
- Legal issues of XAI 
- Liability and XAI 
- XAI, transparency, and the law 
- Enforceability and XAI 
- Culture-aware systems and XAI 


Workshop Chairs 
=============== 

Dr. Davide Calvaresi, HES-SO, Switzerland 
research areas: Real-Time Multi-Agent Systems, Explainable AI, Blockchain, 
eHealth, Assistive/Embedded Systems 
mail: davide.calvar...@hevs.ch 

Dr. Amro Najjar, University of Luxembourg, Luxembourg 
research areas: Multi-Agent Systems, Explainable AI, Artificial Intelligence 
mail: amro.naj...@uni.lu 

Prof. Kary Främling, Umeå University, Sweden, and Aalto University, Finland 
research areas: Explainable AI, Artificial Intelligence, Machine Learning, 
Internet of Things, Systems of Systems 
mail: kary.framl...@cs.umu.se 

Prof. Andrea Omicini 
research areas: Artificial Intelligence, Multi-agent Systems, Software 
Engineering 
mail: andrea.omic...@unibo.it 


Track Chairs 
============ 

Dr. Giovanni Ciatto, University of Bologna, Italy 
mail: giovanni.cia...@unibo.it 

Dr. Reyhan Aydogan, Ozyegin University, Turkey 
mail: reyhan.aydo...@ozyegin.edu.tr 

Dr. Yazan Mualla, University of Technology of Belfort-Montbéliard 
mail: yazan.mua...@utbm.fr 

Rachele Carli, University of Bologna 
mail: rachele.car...@unibo.it 


Advisory Board 
============== 

Prof. Tim Miller, University of Melbourne 
Prof. Leon van der Torre, University of Luxembourg 
Prof. Virginia Dignum, Umea University 
Prof. Michael Ignaz Schumacher 


Primary Contacts 
================ 

Davide Calvaresi - davide.calvar...@hevs.ch and Amro Najjar - 
amro.naj...@list.lu 