— Apologies for cross-posting —

Special Issue "Explainable User Models"

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088): https://www.mdpi.com/journal/mti

Important Dates & Facts:
Abstract/title submission: ideally by November 5, 2021
Manuscripts due by: February 20, 2022
Notification to authors: March 15, 2022

Website: https://www.mdpi.com/journal/mti/special_issues/Explainable_User_Models


Special Issue Information

This special issue addresses research on Explainable User Models. Because the actions and decisions of AI systems can significantly affect their users, it is important to understand how an AI system represents its users. A well-known hurdle is that many AI algorithms behave largely as black boxes. One key aim of explainability is, therefore, to make the inner workings of AI systems more accessible and transparent.

Such explanations are helpful when a system uses information about the user to build a working representation of that user, and then uses this representation to adjust or inform system behavior. For example, an educational system could detect whether students have a more internal or external locus of control, a music recommender could adapt the music it plays to a user's current mood, or an aviation system could detect the visual memory capacity of its pilots. When adapting to such user models, however, it is crucial that the models are detected accurately. Furthermore, for explanations to be useful, systems need to be able to explain or justify their representations of users in a human-understandable way. This creates a need for techniques that automatically generate satisfactory explanations that are intelligible to the human users interacting with the system.
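To make the idea concrete, here is a minimal Python sketch of a hypothetical explainable user model in the music-recommendation setting mentioned above. All names (MoodModel, observe, infer_mood, explain) and the tempo-based heuristic are illustrative assumptions, not an established API or method; a real system would draw on far richer signals and models.

# Minimal sketch of a hypothetical explainable user model (all names illustrative).
# The model infers a coarse, transient "mood" from recent listening behavior and
# can justify that inference in plain language.

from dataclasses import dataclass, field


@dataclass
class MoodModel:
    """Toy user model: estimates mood from track tempos (beats per minute)."""
    recent_tempos: list = field(default_factory=list)

    def observe(self, tempo_bpm: float) -> None:
        # Record one listening event; a real system would use richer signals.
        self.recent_tempos.append(tempo_bpm)

    def infer_mood(self) -> str:
        # A transient state (mood), as opposed to a stationary trait (personality).
        avg = sum(self.recent_tempos) / len(self.recent_tempos)
        return "energetic" if avg >= 120 else "calm"

    def explain(self) -> str:
        # Human-understandable justification of the inferred representation.
        avg = sum(self.recent_tempos) / len(self.recent_tempos)
        return (
            f"You were modeled as '{self.infer_mood()}' because your last "
            f"{len(self.recent_tempos)} tracks averaged {avg:.0f} BPM."
        )


model = MoodModel()
for bpm in (128, 140, 122):
    model.observe(bpm)
print(model.infer_mood())  # -> energetic
print(model.explain())     # -> a justification the user can inspect and contest

The property this sketch illustrates is the pairing the special issue targets: the same user representation both informs system behavior (infer_mood) and can justify itself in terms the user understands (explain).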

The scope of the special issue includes but is not limited to:

Detection and modeling
• Novel ways of modeling user preferences
• Types of information to model (knowledge, personality, cognitive differences, etc.)
• Distinguishing between stationary and transient user models (e.g., personality vs. mood)
• Context modeling (e.g., at work versus at home, lean-in versus lean-out activities)
• User models from heterogeneous sources (e.g., behavior, ratings, and reviews)
• Enrichment and crowdsourcing for explainable user models

Ethics
• Detection of sensitive or rarely reported attributes (e.g., gender, race, sexual orientation)
• Implicit user modeling versus explicit user modeling (e.g., questionnaires 
versus inference from behavior)
• User modeling for self-actualization (e.g., user modeling to improve dietary or news consumption habits)

Human understandability
• Metrics and methodologies for evaluating the fitness for purpose of explanations
• Balancing completeness and understandability for complex user models
• Explanations to mitigate human biases (e.g., confirmation bias, anchoring)
• Effect of user model explanations on subsequent user interaction (e.g., simulations and novel evaluation methodologies)

Effectiveness
• Analysis or comparison of context of use of explanation (e.g., risk, time 
pressure, error tolerance)
• Analysis of context of use of system (e.g., decision support, prediction)
• Analysis or comparison of effect of explaining in specific domains (e.g., 
education, health, recruitment, security)

Adaptive presentation of the explanations
• For different types of users
• Interactive explanations
• Investigation of which presentational aspects are beneficial to tailor in the explanation (e.g., level of detail, terminology, modality such as text or graphics, level of interaction)


Prof. Dr. Nava Tintarev
Ms. Oana Inel
Guest Editors