You are warmly invited to the next Natural Language Processing and Vision (NLPV) 
seminars at the University of Exeter.

Talk 1
Scheduled: Thursday 16 Oct 2025, 13:00–14:00 (GMT+1)
Location: 
https://Universityofexeter.zoom.us/j/94505914598?pwd=eXonSrKHuxUNnMCAmiZHicXqbJis8f.1
 (Meeting ID: 945 0591 4598 Password: 903478)

Title: This One or That One? A Bilingual Study on Accessibility via 
Demonstratives with Multimodal Large Language Models

Abstract: Accessibility describes how easily a speaker can obtain or interact 
with an object, and it is often conveyed through demonstrative pronouns like 
“this” and “that” in English or “这” (zhè) and “那” (nà) in Chinese, indicating 
proximal or distal objects. The proximal vs. distal distinction is not 
absolute, since it depends on the speaker's viewpoint. 
Are Multimodal Large Language Models (MLLMs) able to solve accessibility 
problems based on demonstratives? In this talk, I would like to present some 
preliminary results on a referent identification task based on a bilingual 
(English and Chinese), multimodal dataset. In our experiments, all models 
struggle significantly, particularly when perspective shifts are introduced.

Speaker's bio: Emmanuele Chersoni received a joint PhD in Language Sciences from 
Aix-Marseille University and the University of Pisa in 2018, under the 
supervision of Philippe Blache and Alessandro Lenci. Since 2021, he has been an 
Assistant Professor in Computational Linguistics at the Department of Language 
Science and Technology of The Hong Kong Polytechnic University. His main 
research interests include classical distributional semantic models, thematic 
fit modeling, semantic relations and natural language processing for 
specialized domains. He has also served as a co-organizer of the *ACL workshop 
series on Cognitive Modeling and Computational Linguistics from 2019 to 2022.

Talk 2
Scheduled: Thursday 23 Oct 2025, 15:00–16:00 (GMT+1)
Location: 
https://Universityofexeter.zoom.us/j/92868830537?pwd=0yvSNEwhIeC3x2Mxn76zOryufcK5Fi.1
 (Meeting ID: 928 6883 0537 Password: 100657)

Title: Beyond One-Size-Fits-All: Inversion Learning for Highly Effective NLG 
Evaluation Prompts

Abstract: Evaluating natural language generation (NLG) systems is inherently 
challenging. While human evaluation remains the gold standard, it is difficult 
to scale and often suffers from inconsistencies and demographic biases. 
LLM-based evaluation offers a scalable alternative but is highly sensitive to 
prompt design, where small variations can lead to significant discrepancies. In 
this talk, I will introduce an inversion learning method that learns effective 
reverse mappings from model outputs back to their input instructions, enabling 
the automatic generation of highly effective, model-specific evaluation 
prompts. This method is simple, requires only a single evaluation sample, and 
eliminates the need for manual prompt engineering, thereby improving both the 
efficiency and robustness of LLM-based evaluation.

Speaker's bio: Chenghua Lin is a Full Professor and Chair in Natural Language 
Processing in the Department of Computer Science at The University of 
Manchester. His research lies at the intersection of machine learning and 
natural language processing, with a focus on language generation, multimodal 
LLMs, and evaluation methods. He currently serves as Chair of the ACL SIGGEN 
Board, a member of the IEEE Speech and Language Processing Technical Committee, 
and Associate Editor for Computer Speech and Language. He has received several 
prizes and awards for his research and academic leadership, including the CIKM 
Test-of-Time Award, the INLG Best Paper Runner-up Award, and an Honourable 
Mention for the Scottish Informatics and Computer Science Alliance (SICSA) 
Supervisor of the Year Award. He has also held numerous program and chairing 
roles for *ACL conferences, including Documentation Chair for ACL’25, 
Publication Chair for ACL’23, Workshop Chair for AACL-IJCNLP’22, Program Chair 
for INLG’19, and Senior Area Chair for EMNLP’20, ACL’22–’23, EACL’23, NAACL’25, 
and AACL’25.

Future talks will be posted on our website: 
https://sites.google.com/view/neurocognit-lang-viz-group/seminars 
 
Join our *Google group* for future seminar and research information: 
https://groups.google.com/g/neurocognition-language-and-vision-processing-group
_______________________________________________
Corpora mailing list -- [email protected]
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to [email protected]