(apologies for multiple copies)

                       Last Call for Papers:
       JNLE Special Issue on Representation of Sentence Meaning
           Representation of Sentence Meaning: Where Are We?
   Details: http://ufal.mff.cuni.cz/jnle-on-sentence-representation/

*********************************************************************
         EXTENDED SUBMISSION DEADLINE: October 21, 2018
*********************************************************************

This is a call for papers for a special issue of Natural Language
Engineering (JNLE) on Representation of Sentence Meaning.

Linguistically, the basic unit of meaning is the sentence. Sentence
meaning has been studied for centuries, yielding representations that
range from those reflecting properties (or theories) of the
syntax-semantics boundary (e.g. FGD, MTT, AMR) to representations with
the properties of complex but expressive logics (e.g. intensional
logic). The recent success of neural networks in natural language
processing (especially at the lexical level) has raised the possibility
of representation learning of sentence meaning, i.e. observing the
continuous vector space in a hidden layer of a deep learning system
trained to perform one or more specific tasks.

Multiple workshops have explored this possibility in the past few years,
e.g. Workshop on Representation Learning for NLP (2016, 2017;
https://sites.google.com/site/repl4nlp2017/), Workshop on Evaluating
Vector Space Representations for NLP (2016, 2017;
https://repeval2017.github.io/), Representation Learning
(https://simons.berkeley.edu/workshops/machinelearning2017-2) or the
Dagstuhl seminar (http://www.dagstuhl.de/17042).

Interesting behaviour and properties of continuous representations have
already been observed. For lexical representations (word embeddings),
linear combinations in the vector space have been taken to correspond
to semantic relations between words (Mikolov et al., 2013). Learned
representations can be evaluated intrinsically in terms of various
similarities, although this type of evaluation suffers from some
well-known problems (Faruqui et al., 2016), or extrinsically in terms
of performance in downstream tasks or of their relation to cognitive
processes (e.g. Auguste et al., 2017).
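
As a concrete illustration of this vector-offset view, the following
minimal Python sketch recovers a word analogy by nearest-neighbour
search in the embedding space; the tiny vocabulary and the numbers are
made up for illustration only and are not taken from Mikolov et al.:

    import numpy as np

    # Hypothetical toy embedding table; real systems use vectors
    # trained with word2vec, GloVe, fastText, etc.
    emb = {
        "king":  np.array([0.80, 0.65, 0.10]),
        "queen": np.array([0.78, 0.68, 0.90]),
        "man":   np.array([0.75, 0.10, 0.12]),
        "woman": np.array([0.72, 0.12, 0.88]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # The analogy king - man + woman ~ queen expressed as a vector offset.
    target = emb["king"] - emb["man"] + emb["woman"]
    best = max((w for w in emb if w not in {"king", "man", "woman"}),
               key=lambda w: cosine(emb[w], target))
    print(best)  # "queen" with these toy vectors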

By comparison, continuous representations of sentences are harder to
produce and assess. The first question is whether the representation
should be of a fixed size, as with word embeddings, or whether it should
reflect the length of the sentence, e.g. a matrix of encoder states
along the sentence. A variable-length representation can be flat or can
capture the hierarchical structure of the sentence, and simple
operations such as matrix multiplication can serve as the basis of
meaning compositionality (Socher et al., 2012). Empirical results to
date are mixed: bidirectional gated RNNs (BiLSTM, BiGRU) with attention,
corresponding to variable-length representations, seem to be the best
empirical solution when trained directly for a particular NLP task (POS
tagging, named entity recognition, syntactic parsing, reading
comprehension, question answering, text summarization, machine
translation). If the task is not to be constrained a priori, researchers
have advocated universal sentence representations, which can be trained
on one task (e.g. predicting surrounding sentences in Skip-Thoughts) and
tested on a range of others. Training universal sentence representations
on sentence pairs manually annotated for entailment (natural language
inference, NLI) leads to better performance despite the much smaller
training data (Conneau et al., 2017). In both cases, there is a lack of
analysis of the learned vector space from the perspective of linguistic
adequacy: which phenomena are directly reflected in the space, if any?
Semantic similarity (paraphrasing)? Various oppositions? Gradations (in
number, tense)? Entailment? Compositionality (e.g. relations between
main and adjunct and/or subordinate clauses)?
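
To make the fixed-size vs. variable-length contrast concrete, here is a
minimal PyTorch sketch, loosely in the spirit of Conneau et al. (2017);
the layer sizes, the max-pooling choice and all names are illustrative
assumptions, not a prescription of this call. A BiLSTM yields one state
per token (a variable-length representation), and pooling over time
collapses these states into a single fixed-size sentence vector:

    import torch
    import torch.nn as nn

    class PooledBiLSTMEncoder(nn.Module):
        def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True,
                                batch_first=True)

        def forward(self, token_ids):
            # token_ids: (batch, seq_len) word indices
            states, _ = self.lstm(self.embed(token_ids))
            # states: (batch, seq_len, 2*hidden) -- variable-length
            sentence_vec, _ = states.max(dim=1)
            # sentence_vec: (batch, 2*hidden) -- fixed-size
            return states, sentence_vec

    encoder = PooledBiLSTMEncoder()
    batch = torch.randint(0, 10000, (2, 7))   # two sentences of 7 tokens
    var_len, fixed = encoder(batch)
    print(var_len.shape, fixed.shape)         # [2, 7, 1024] and [2, 1024]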

TreeLSTMs have the capacity to learn a latent grammar when trained,
e.g., to classify sentence pairs in terms of entailment. They seem to
perform well, and yet the learned representation does not conform to
traditional syntax or semantics (Williams et al., 2017).
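
For readers unfamiliar with the composition step, the simplified sketch
of a binary Tree-LSTM cell below (after Tai et al., 2015; the
dimensions, the single gate projection and the omission of word inputs
at internal nodes are simplifications made here for brevity) shows how
two child states are merged into one parent state, so that a sentence
vector is built bottom-up along a given or latent tree:

    import torch
    import torch.nn as nn

    class BinaryTreeLSTMCell(nn.Module):
        def __init__(self, dim):
            super().__init__()
            # One projection producing the five gates (input, left/right
            # forget, output, candidate) from the concatenated children.
            self.gates = nn.Linear(2 * dim, 5 * dim)

        def forward(self, left, right):
            h_l, c_l = left
            h_r, c_r = right
            i, f_l, f_r, o, u = self.gates(
                torch.cat([h_l, h_r], dim=-1)).chunk(5, dim=-1)
            c = (torch.sigmoid(i) * torch.tanh(u)
                 + torch.sigmoid(f_l) * c_l + torch.sigmoid(f_r) * c_r)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, c

    # Compose "((the cat) sat)" from stand-in per-word states.
    dim = 8
    cell = BinaryTreeLSTMCell(dim)
    leaf = lambda: (torch.randn(1, dim), torch.randn(1, dim))
    the, cat, sat = leaf(), leaf(), leaf()
    noun_phrase = cell(the, cat)          # "the cat"
    root_h, _ = cell(noun_phrase, sat)    # sentence representation
    print(root_h.shape)                   # torch.Size([1, 8])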

The reason for proposing this special issue is that presentation and
discussion of sentence-level meaning representation is fragmented across
many fora (conferences, workshops, but also pre-prints only). We believe
that some unified vision is needed in order to support coherent future
research. The goal of the proposed special issue of Natural Language
Engineering is thus to broadly map the state of the art in continuous
sentence meaning representation and summarize the longer-term goals in
representing sentence meaning in general.

Can deep learning for particular tasks get us to representations similar
to the results of formal semantics? Or is a single formal definition of
sentence meaning an elusive goal, and are universal sentence embeddings
impossible, e.g. because no such entity is observable in human
cognition?

The special issue will seek long research papers, surveys and position
papers addressing primarily the following topics:

* Which properties of meaning representations are universally most
  desirable.
* Comparisons of types of meaning representations (e.g. fixed-size vs.
  variable-length) and methods for learning them.
* Techniques for exploring learned meaning representations.
* Evaluation methodologies for meaning representations, including
  surveys thereof.
* Extrinsic evaluation via relation to cognitive processes.
* Relation between traditional symbolic meaning representations and the
  learned continuous ones.
* Broad summaries of psycholinguistic evidence describing properties of
  meaning representation in the human brain.

More details are available at:
* http://ufal.mff.cuni.cz/jnle-on-sentence-representation/

Schedule:
* 31st July 2018: Abstract submission deadline (so that overlaps among
  survey-like articles can be identified and avoided early)
* 21st October 2018: Extended submission deadline
* 9th December 2018: Deadline for reviews and responses to authors
* 10th February 2019: Camera-ready deadline

Guest Editors of the special issue:
* Ondřej Bojar (Charles University)
* Raffaella Bernardi (University of Trento)
* Holger Schwenk (Facebook AI Research)
* Bonnie Webber (University of Edinburgh)

Guest Editorial Board:
* Omri Abend (Hebrew University of Jerusalem)
* Marco Baroni (Facebook AI Research, University of Trento)
* Bob Coecke (University of Oxford)
* Alexis Conneau (Facebook AI Research)
* Katrin Erk (University of Texas at Austin)
* Orhan Firat (Google)
* Albert Gatt (University of Malta)
* Caglar Gulcehre (Google)
* Aurelie Herbelot (Center for Mind/Brain Sciences, University of Trento)
* Eva Maria Vecchi (University of Cambridge)
* Louise McNally (Universitat Pompeu Fabra)
* Laura Rimell (DeepMind)
* Mehrnoosh Sadrzadeh (Queen Mary University of London)
* Hinrich Schuetze (Ludwig Maximilian University of Munich)
* Mark Steedman (University of Edinburgh)
* Ivan Titov (University of Edinburgh)

-- 
Ondrej Bojar (mailto:o...@cuni.cz / bo...@ufal.mff.cuni.cz)
http://www.cuni.cz/~obo
