Dear colleagues,

*** Due to several requests, the submission deadline has been extended until
December 1, 2023. ***



The organizers of the AAAI 2024 Workshop on Responsible Language Models (ReLM)
invite you to submit your research.

**Submission deadline: December 1, 2023**

More info can be found on the Workshop website:

https://sites.google.com/vectorinstitute.ai/relm2024/home

The Responsible Language Models (ReLM) workshop focuses on both the theoretical
and practical challenges of designing and deploying responsible Language Models
(LMs). The workshop has a strong multidisciplinary component, promoting
dialogue and collaboration to develop more trustworthy and inclusive
technology. We invite discussions and research on key topics such as bias
identification & quantification, bias mitigation, transparency, privacy &
security issues, hallucination, uncertainty quantification, and various other
risks in LMs.

Topics: We are interested in, but not limited to, the following topics:

- Explainability and interpretability techniques for different LLM training
  paradigms
- Privacy, security, data protection and consent issues for LLMs
- Bias and fairness quantification, identification, mitigation and trade-offs
  for LLMs
- Robustness, generalization and shortcut learning analysis and mitigation
  for LLMs
- Uncertainty quantification and benchmarks for LLMs
- Ethical AI principles, guidelines, dilemmas and governance for responsible
  LLM development and deployment

ReLM 2024 | 26 February 2024 | Vancouver, Canada

Looking forward to your submissions,

Organizing committee of ReLM 2024

relm.aaai2...@gmail.com
