** Job openings: PhD studentships on meaning variation in NLP **

Utrecht University, The Netherlands

The Natural Language Processing (NLP) group in the Department of Information and 
Computing Sciences of Utrecht University (UU) is offering PhD positions in AI 
/ NLP.

Three four-year positions are available as part of the AiNed project “Dealing 
with Meaning Variation in NLP”, a collaboration between AI and Data Science and 
the Language Sciences Institute, led by Prof. Massimo Poesio. The overall aim of 
the project is to allow NLP models to make better sense of variation in the 
ways that different speakers and readers interpret language. Positions are 
available in the following projects:

PhD PROJECT 1: Variation in coreference and reference

Early research on learning from data with disagreement in Natural Language 
Processing (NLP) was often motivated by findings about anaphoric reference: 
it turns out that people often disagree on what anaphoric expressions such as 
pronouns refer to, particularly in conversation. Methods for learning from data 
with disagreements (‘learning from crowds’) have been successfully applied to 
other types of data containing disagreements, and substantial data sets 
containing multiple judgments on anaphoric reference now exist. But 
computational models of referring expression interpretation that can 
effectively learn from such data sets do not yet exist. Training coreference 
models ‘from crowds’ has proven challenging, and there is no consensus on how 
to evaluate interpretation models that take variation into account. This 
project will focus on addressing these challenges. It will also develop metrics 
that do justice to interpretative variation in coreference, and use these 
metrics to test models. Ideally, the development of these metrics will be 
informed by cognitive and behavioural evidence on the processing of reference.

For this project, we are looking for a motivated researcher with a Master’s 
degree in Artificial Intelligence, Deep Learning, Computational Cognitive 
Science, Computer Science, Linguistics, or Statistics. A good mastery of deep 
learning and of NLP is essential. An understanding of coreference and discourse 
would be a definite bonus.

PhD PROJECT 2: Subjectivity in the detection of problematic language

Variation in interpretation is particularly frequent with judgments that depend 
on an individual’s subjective biases, such as deciding whether or not a joke is 
funny. This PhD project focuses on NLP methods for subjective interpretive 
tasks of high societal relevance, such as offensive/abusive language detection, 
used e.g. by social media platforms to identify cases of problematic language 
use that can be harmful to people. Judgments on whether a given utterance is 
problematic are notoriously subjective, and differences between judges can 
carry fraught cultural, ethnic, and racial overtones. The project will develop 
models for detecting problematic language that take into account the fact that 
the judgments involved can be controversial.

For this project, we are looking for a motivated researcher with a Master’s 
degree in Artificial Intelligence, Computational Cognitive Science, Computing 
Science, Computational Social Science, Linguistics, or Statistics. A good 
mastery of deep learning and of NLP is essential. An understanding of social 
science methodology would be a definite bonus.

PhD PROJECT 3: Conflicting interpretations in dialogue

In conversation, we produce language under time pressure. One of the effects 
of this time pressure is that less attention is paid to ensuring that 
expressions can be interpreted univocally, resulting in misunderstandings that 
often go undetected. Such misunderstandings between dialogue partners cause 
problems for all aspects of NLP research. The first problem is that present 
annotation methods make it difficult to specify that an expression was 
interpreted in one way by one participant and in another way by the other. In 
turn, this makes it difficult to train models that can produce 
participant-specific interpretations and/or recognise disagreements in 
interpretation. In this project you will study misunderstandings in dialogue 
and how conversational agents can recognise and resolve them.

For this project, we are looking for a motivated researcher with a Master’s 
degree in Artificial Intelligence, Computational Cognitive Science, Computing 
Science, Conversational Agents, Linguistics, or Statistics. A good mastery of 
deep learning, of dialogue, and of conversational agents is essential.

FOR MORE INFORMATION AND TO APPLY

Further information about these vacancies can be found at:

Project 1: 
https://www.uu.nl/en/organisation/working-at-utrecht-university/jobs/phd-position-in-natural-language-processing-variation-in-co-reference-and-reference-08-10-fte
Project 2: 
https://www.uu.nl/en/organisation/working-at-utrecht-university/jobs/phd-position-in-natural-language-processing-subjectivity-in-the-detection-of-problematic-language-08
Project 3: 
https://www.uu.nl/en/organisation/working-at-utrecht-university/jobs/phd-position-in-natural-language-processing-conflicting-interpretations-in-dialogue-08-10-fte

The deadline for applications is December 3rd, 2023. We are looking for 
candidates to start as soon as possible after the recruitment process is 
concluded, but we understand that it will normally take a few months before the 
selected candidate can start.

Applications should be made through the University's site (see links above).

CONTACTS

For further information, please contact:
- Prof. Massimo Poesio (m.poesio AT uu.nl) (all projects)
- Project 1: Prof. Yoad Winter (y.winter AT uu.nl)
- Project 2: Dr. Dong Nguyen (d.p.nguyen AT uu.nl) or Prof. Antal van der Bosch 
(a.p.j.vandenbosch AT uu.nl)
- Project 3: Prof. Albert Gatt (a.gatt AT uu.nl) or Dr. Denis Paperno 
(d.paperno AT uu.nl)