****************************
Apologies for the multiple copies of this email.
This version contains the correct challenge link.
****************************


We cordially invite you to participate in our ECCV’2022 Sign Spotting Challenge


Challenge description: To advance and motivate research on Sign Language 
Recognition (SLR), the challenge will use a partially annotated continuous sign 
language dataset of more than 10 hours of video data in the health domain and 
will address the challenging problem of fine-grained sign spotting in 
continuous SLR. In this context, we want to put a spotlight on the strengths 
and limitations of existing approaches and define future directions for the 
field. The challenge is divided into two competition tracks:

  1.  Multiple Shot Supervised Learning (MSSL) is a classical machine learning 
track where the signs to be spotted are the same in the training, validation, 
and test sets. All three sets contain samples of signs cropped from a 
continuous stream of Spanish sign language, so all of them show co-articulation 
effects. The training set contains begin-end timestamps annotated by a deaf 
person and an SL interpreter following a homogeneous criterion, with multiple 
instances of each query sign. Participants will need to spot those signs in a 
set of validation videos whose annotations are withheld. The signers in the 
test set may or may not appear in the training and validation sets. Signers 
include men and women, both right- and left-handed.


  2.  One Shot Learning and Weak Labels (OSLWL) is a realistic variation of the 
one-shot learning problem adapted to sign language, where it is relatively easy 
to obtain a couple of examples of a sign from a sign language dictionary, but 
much harder to find co-articulated versions of that specific sign. When 
subtitles are available, as in broadcast-based datasets, the typical approach 
is to use the text to predict a likely interval in which the sign might be 
performed. In this track we simulate that case by providing a set of queries 
(isolated signs) and a set of video intervals around every co-articulated 
instance of the queries. Intervals containing no instances of the queries are 
also provided as negative ground truth. Participants will need to spot the 
exact location of the sign instances in the provided video intervals (a toy 
scoring sketch follows this list).
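For concreteness, here is a minimal Python sketch of how spotted instances
could be matched against begin-end ground-truth timestamps using a
temporal-IoU criterion. The official evaluation protocol is defined on the
challenge webpage; the function names, data layout, and the 0.5 IoU threshold
below are illustrative assumptions, not the organizers' metric.

# Hypothetical sketch of interval-based spotting evaluation; NOT the official
# challenge metric. Predictions and ground truth are (sign_id, begin, end)
# tuples with times in seconds.

def temporal_iou(pred, gt):
    """Intersection-over-union of two (begin, end) time intervals."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def match_spottings(predictions, ground_truth, iou_threshold=0.5):
    """Greedily match each prediction to the best unmatched ground-truth
    instance of the same sign; return the number of true positives."""
    unmatched = list(ground_truth)
    true_positives = 0
    for sign_id, begin, end in predictions:
        best, best_iou = None, iou_threshold
        for gt in unmatched:
            if gt[0] != sign_id:
                continue  # a spotting only counts for the same query sign
            iou = temporal_iou((begin, end), (gt[1], gt[2]))
            if iou >= best_iou:
                best, best_iou = gt, iou
        if best is not None:
            unmatched.remove(best)
            true_positives += 1
    return true_positives

# Example with illustrative sign labels: one correct spotting, one miss.
gt = [("THANKS", 12.0, 12.8), ("DOCTOR", 30.1, 30.9)]
preds = [("THANKS", 12.1, 12.7), ("DOCTOR", 45.0, 45.5)]
tp = match_spottings(preds, gt)
precision, recall = tp / len(preds), tp / len(gt)
print(tp, precision, recall)  # 1 0.5 0.5

Precision and recall over such matches follow directly from the counts of
predictions and ground-truth instances, as shown in the example above.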


Challenge webpage: https://chalearnlap.cvc.uab.cat/challenge/49/description/


Tentative Schedule:


  *   Start of the Challenge (development phase): April 20, 2022

  *   Start of test phase: June 17, 2022

  *   End of the Challenge: June 24, 2022

  *   Release of final results: July 1, 2022


Participants are invited to submit their contributions to the associated 
ECCV’22 Workshop (https://chalearnlap.cvc.uab.cat/workshop/50/description/), 
regardless of their rank.


ORGANIZATION and CONTACT

Sergio Escalera <sergio.escalera.guerr...@gmail.com>, Computer Vision Center 
(CVC) and University of Barcelona, Spain

Jose L. Alba-Castro <ja...@gts.uvigo.es>, atlanTTic research center, 
University of Vigo, Spain

Thomas B. Moeslund, Aalborg University, Aalborg, Denmark

Julio C. S. Jacques Junior, Computer Vision Center (CVC), Spain

Manuel Vázquez Enríquez, atlanTTic research center, University of Vigo, Spain

Computer Vision Center: http://www.cvc.uab.cat
