(Apologies for multiple postings)

--------------------------------------------------------------------------------------------------

13th International Summer Workshop on Multimodal Interfaces (eNTERFACE'17), July 
03-28, 2017, Porto, Portugal.



http://artes.ucp.pt/enterface17



Call for Participation. Extended Submission Deadline: May 07, 2017 (firm)

--------------------------------------------------------------------------------------------------


The Digital Creativity Centre (CCD) at Universidade Catolica Portuguesa - School 
of Arts (Porto, Portugal) invites researchers from all over the world to join 
eNTERFACE'17, the 13th one-month Summer Workshop on Multimodal Interfaces. 
During this workshop, senior project leaders, researchers, and students gather 
in a single place to work in teams on pre-specified challenges for four weeks. 
Each team has a defined project and will address specific challenges.



Senior researchers, PhD students, and undergraduate students interested in 
participating in the Workshop should send their application by email to 
[email protected] before 7 May 2017 (extended deadline). See the Guidelines for 
researchers applying to a 
project<http://artes.ucp.pt/enterface17/authors-kit/Guidelines.for.researchers.applying.to.a.project_eNTERFACE17.html>.



Participants must cover their own travel and accommodation expenses. 
Information about the venue and accommodation is provided on the eNTERFACE'17 
website. Note that although no scholarships are available for PhD students, 
there are no application fees. Participants can choose from the following list 
of projects.




How to Catch a Werewolf: Exploring Multi-Party Game-Situated Human-Robot 
Interaction
Lead organizers: Catharine Oertel, KTH (PI); Samuel Mascarenhas, INESC-ID; 
Zofia Malisz, KTH; José Lopes, KTH; Joakim Gustafson, KTH

In this project we will focus on implementing the roles of the "villager" and 
the "werewolves" using the IrisTK dialogue framework and the Furhat robot head. 
More precisely, the aim of this project is to use multimodal cues to inform a 
theory-of-mind model that drives the robot's decision-making process. Theory of 
mind is a concept related to empathy: it refers to the cognitive ability to 
model and understand that others have beliefs and intentions different from our 
own. In lay terms, it can be described as putting oneself in another's shoes, 
and it is a crucial skill for properly playing a deception game like "Werewolf".
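A minimal, hypothetical sketch in Python (not IrisTK or Furhat code) of the kind of theory-of-mind bookkeeping described above: multimodal cues observed for each player nudge a per-player suspicion estimate, and the robot's decision about whom to accuse is driven by those estimates. Cue names and weights are invented purely for illustration.

# Hypothetical sketch (not IrisTK/Furhat code): updating per-player werewolf
# suspicion from observed multimodal cues and choosing whom the robot accuses.
from dataclasses import dataclass

@dataclass
class PlayerBelief:
    name: str
    suspicion: float = 0.5  # prior probability that this player is a werewolf

    def update(self, cue: str, weight: float = 1.0) -> None:
        """Nudge the suspicion estimate up or down for one observed cue."""
        deltas = {"gaze_aversion": +0.10, "hesitation": +0.05,
                  "accused_by_other": +0.15, "defended_villager": -0.10}
        self.suspicion = min(1.0, max(0.0, self.suspicion + weight * deltas.get(cue, 0.0)))

def choose_accusation(beliefs: list[PlayerBelief]) -> str:
    """The robot accuses the player it currently believes most likely to be a werewolf."""
    return max(beliefs, key=lambda b: b.suspicion).name

if __name__ == "__main__":
    players = [PlayerBelief("Ana"), PlayerBelief("Bruno"), PlayerBelief("Clara")]
    players[1].update("gaze_aversion")      # multimodal cues observed for Bruno
    players[1].update("hesitation")
    players[2].update("defended_villager")  # cue observed for Clara
    print(choose_accusation(players))       # -> "Bruno"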

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_Catharine.Oertel.pdf>





KING'S SPEECH - Foreign language: pronounce with style!
Principal investigators: Georgios Athanasopoulos*, Céline Lucas* and Benoit 
Macq* (ICTEAM-ELEN - Université Catholique de Louvain, Belgium)

The principal investigators are developing the GRAAL project, which provides a 
set of tools to facilitate self-training in foreign-language pronunciation, with 
French as the first target language. The goal of KING'S SPEECH is to develop new 
interaction modalities and evaluate them in combination with existing 
functionality, in order to better personalize GRAAL to the tastes and 
specificities of each learner. This personalization will rely on a machine 
learning approach and an experimental set-up to be developed during 
eNTERFACE'17. The eNTERFACE'17 developments could be based on a karaoke scenario 
in which the song is replaced by authentic sentences (extracts from news, films, 
advertisements, etc.). Applications such as SingStar (Sony) or JustSing 
(Ubisoft) could also serve as sources of inspiration, e.g., using a smartphone 
as a microphone while interacting with avatars.
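As a rough illustration of the kind of karaoke-style scoring such a set-up could rely on, the following sketch compares a learner's recording against a reference sentence using dynamic time warping over MFCC features (librosa). This is not part of GRAAL; the file names, sampling rate, and distance measure are assumptions made for the example.

# Minimal sketch of one possible pronunciation-scoring step for a karaoke-style
# exercise: compare the learner's recording with a reference recording using
# DTW over MFCCs. File names are placeholders; this is not the GRAAL code.
import librosa
import numpy as np

def pronunciation_distance(reference_wav: str, learner_wav: str) -> float:
    ref, sr_ref = librosa.load(reference_wav, sr=16000)
    stu, sr_stu = librosa.load(learner_wav, sr=16000)
    ref_mfcc = librosa.feature.mfcc(y=ref, sr=sr_ref, n_mfcc=13)
    stu_mfcc = librosa.feature.mfcc(y=stu, sr=sr_stu, n_mfcc=13)
    # Dynamic time warping aligns the two utterances despite tempo differences.
    cost_matrix, warping_path = librosa.sequence.dtw(X=ref_mfcc, Y=stu_mfcc, metric="euclidean")
    return float(cost_matrix[-1, -1]) / len(warping_path)  # lower = closer to the reference

if __name__ == "__main__":
    score = pronunciation_distance("reference_sentence.wav", "learner_attempt.wav")
    print(f"normalized DTW distance: {score:.2f}")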

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_King's%20speech.pdf>





The RAPID-MIX API: a toolkit for fostering innovation in the creative 
industries with Multimodal, Interactive and eXpressive (MIX) technology
Principal investigators: Francisco Bernardo, Michael Zbyszynski, Rebecca 
Fiebrink, Mick Grierson (EAVI - Embodied AudioVisual Interaction group, 
Goldsmiths University of London, Computing). Team candidates: Sebastian Mealla, 
Panos Papiotis (MTG/UPF - Music Technology Group, Universitat Pompeu Fabra), 
Carles Julia, Frederic Bevilacqua, Joseph Larralde (IRCAM - Institut de 
Recherche et Coordination Acoustique/Musique)

Members of the RAPID-MIX project are building a toolkit that includes a 
software API for interactive machine learning (IML), digital signal processing 
(DSP), sensor hardware, and cloud-based repositories for storing and 
visualizing audio, visual, and multimodal data. This API provides a 
comprehensive set of software components for rapid prototyping and integration 
of new sensor technologies into products, prototypes and performances.

We aim to investigate how developers employ and appropriate this toolkit so we 
can improve it based on their feedback. We intend to kickstart the online 
community around this toolkit with eNTERFACE participants as power users and 
core members, and to integrate their projects as demonstrators for the toolkit. 
Participants will explore and use the RAPID-MIX toolkit for their creative 
projects and learn workflows for embodied interaction with sensors.
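The sketch below illustrates the interactive machine learning workflow the toolkit supports, using scikit-learn as a stand-in rather than the RAPID-MIX API itself: a few example pairs of sensor readings and sound parameters are recorded, a regressor is trained on them, and a live sensor frame is mapped to interpolated parameters. All values are made up for illustration.

# Illustration of the interactive machine learning (IML) workflow, with
# scikit-learn standing in for the toolkit (this is NOT the RAPID-MIX API):
# record a few sensor -> sound-parameter examples, train, then map live input.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical training examples: [accel_x, accel_y, accel_z] -> [pitch_hz, cutoff_hz]
sensor_examples = np.array([[0.0, 0.1, 0.9],
                            [0.8, 0.2, 0.1],
                            [0.4, 0.9, 0.3]])
sound_parameters = np.array([[220.0,  500.0],
                             [440.0, 2000.0],
                             [330.0, 1200.0]])

model = KNeighborsRegressor(n_neighbors=2)
model.fit(sensor_examples, sound_parameters)

live_reading = np.array([[0.5, 0.5, 0.5]])   # one new sensor frame
print(model.predict(live_reading))            # interpolated [pitch_hz, cutoff_hz]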

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_RAPID-MIX.pdf>





Prynth
Principal investigator: Ivan Franco (IDMIL / McGill University)

Prynth is a technical framework for building self-contained programmable 
synthesizers, developed by Ivan Franco at the Input Devices and Music 
Interaction Lab (IDMIL) of McGill University. The goal of this new framework is 
to support the rapid development of a new breed of digital synthesizers and 
their respective interaction models.

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_prynth.pdf>





End-to-End Listening Agent for Audio-Visual Emotional and Naturalistic 
Interactions
Principal Investigators: Kevin El Haddad (TCTS Lab - numediart institute - 
University of Mons, Belgium), Yelin Kim (Inspire Lab - University at Albany, 
State University of New York, USA), Hüseyin Çakmak (TCTS Lab - numediart 
institute - University of Mons, Belgium)


In this project, we aim to build a listening agent that reacts to a user with 
naturalistic, human-like behavior, using nonverbal expressions. The agent's 
behavior will be modeled by and built on three main components: recognizing 
emotional and nonverbal expressions, synthesizing them, and predicting the next 
expression to synthesize based on the currently recognized ones. Its behavior 
will be rendered on a previously developed avatar, which will also be improved 
during this workshop. At the end we should obtain functioning and efficient 
modules, which should ideally work in real time.
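A schematic sketch of the three-module loop described above, with the recognizer and avatar renderer as placeholders and a toy frequency-based predictor of the agent's next nonverbal expression. Expression labels are invented, and this is not the project's actual pipeline.

# Schematic sketch of the loop: recognize -> predict -> synthesize/render.
# Recognizer and renderer are placeholders; the predictor is a toy bigram model.
from collections import defaultdict, Counter

class ExpressionPredictor:
    """Predicts the agent's next nonverbal expression from the user's current one."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, user_expression: str, agent_response: str) -> None:
        self.transitions[user_expression][agent_response] += 1

    def predict(self, user_expression: str) -> str:
        counts = self.transitions[user_expression]
        return counts.most_common(1)[0][0] if counts else "neutral_nod"

def recognize_expression(audio_visual_frame) -> str:
    return "laughter"                        # placeholder for the recognition module

def render_on_avatar(expression: str) -> None:
    print(f"avatar performs: {expression}")  # placeholder for the synthesis module

if __name__ == "__main__":
    predictor = ExpressionPredictor()
    predictor.observe("laughter", "smile")
    predictor.observe("laughter", "smile")
    predictor.observe("sigh", "concerned_lean_in")
    user_expr = recognize_expression(audio_visual_frame=None)
    render_on_avatar(predictor.predict(user_expr))   # -> "avatar performs: smile"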

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_listening%20agent.pdf>





Cloud-based Toolbox for Computer Vision
Principal investigator: Dr. Sidi Ahmed MAHMOUDI, Faculty of Engineering, 
University of Mons, Belgium. Candidates: Dr. Fabian LECRON, PhD, Faculty of 
Engineering, University of Mons, Belgium; Mohammed Amin BELARBI, PhD Student, 
Faculty of Exact Sciences and Mathematics, University of Mostaganem, Algeria; 
Mohammed EL ADOUI, PhD Student, Faculty of Engineering, University of Mons, 
Belgium; Abdelhamid DERRAR, Master's student, University of Lyon, France; 
Prof. Mohammed BENJELLOUN, PhD, Faculty of Engineering, University of Mons, 
Belgium; Prof. Said MAHMOUDI, PhD, Faculty of Engineering, University of Mons, 
Belgium.

Nowadays, images and videos are everywhere: they come directly from cameras and 
mobile devices, or from other people who share their images and videos. They 
are used to present and illustrate objects in a large number of situations 
(public areas, airports, hospitals, football games, etc.). This makes image and 
video processing algorithms an important tool in various computer vision 
domains, such as video surveillance, human behavior understanding, medical 
imaging, and image and video database indexing. The goal of this project is to 
develop an extension of our cloud platform (MOVACP), developed during the 
previous eNTERFACE'16 workshop, which integrated several image and video 
processing applications. Users of this platform can apply these methods without 
having to download, install, and configure the corresponding software: each 
user selects the required application, uploads their data, and retrieves the 
results in an environment similar to a desktop. Within the eNTERFACE'17 
workshop, we would like to improve the platform and develop four main tools:
1. Integration of the major image and video processing algorithms that guests 
could use to build their own applications.
2. Integration of machine learning methods (used for image and video indexing) 
that exploit the data uploaded by users (if they accept, of course) in order to 
improve the precision of the results.
3. Fast processing of data acquired from remote IoT systems.
4. Development of an online 3D viewer for visualizing 3D reconstructed medical 
images.

Keywords: cloud computing, image and video processing, video surveillance, 
medical imaging.
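Purely as an illustration of how such a platform could expose a processing method to users without any local installation, here is a hypothetical HTTP endpoint built with Flask and OpenCV; MOVACP's actual interface, routes, and method names may differ.

# Hypothetical sketch of the kind of endpoint such a cloud platform could expose
# (not MOVACP's real interface): a user uploads an image, names a method, and
# receives the processed result without installing anything locally.
import io
import cv2
import numpy as np
from flask import Flask, request, send_file

app = Flask(__name__)

METHODS = {
    "edges": lambda img: cv2.Canny(img, 100, 200),
    "blur":  lambda img: cv2.GaussianBlur(img, (9, 9), 0),
}

@app.route("/process/<method>", methods=["POST"])
def process(method: str):
    data = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)
    result = METHODS[method](img)
    _, encoded = cv2.imencode(".png", result)
    return send_file(io.BytesIO(encoded.tobytes()), mimetype="image/png")

if __name__ == "__main__":
    app.run()  # e.g. curl -F image=@scan.png http://localhost:5000/process/edges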

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_CMP.pdf>





Across the virtual bridge
Project Coordinators: Thierry RAVET (software design, motion signal processing, 
machine learning), Fabien GRISARD (software design, human-computer interface), 
Ambroise MOREAU (computer vision, software design), Pierre-Henri DE DEKEN 
(software design, game engine) - Numediart Institute, University of Mons, 
Belgium.


The goal of the project is to explore different ways of creating interactions 
between people moving about in the real world (local players) and people moving 
about in a virtual representation of the same world (remote players). The 
virtual world will be explored through a virtual reality headset, while local 
players will be geo-located through an app on a mobile device. Actions executed 
by remote players will be perceived by local players as sound or visual 
content, and actions performed by local players will impact the virtual world 
as well. Local players and remote players will be able to exchange information 
with each other.
Keywords: virtual world, mixed reality, computer-mediated communication.
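A hypothetical sketch of the information exchanged between the two worlds: a geo-located local player reports a position, and a remote (VR) player triggers an action that local players will perceive as sound or visual content. The JSON field names and transport are assumptions for illustration, not the project's actual protocol.

# Illustrative message formats only; field names are invented.
import json, time

def local_player_update(player_id: str, lat: float, lon: float) -> str:
    return json.dumps({"type": "local_position", "player": player_id,
                       "lat": lat, "lon": lon, "t": time.time()})

def remote_player_action(player_id: str, action: str, target_lat: float, target_lon: float) -> str:
    return json.dumps({"type": "remote_action", "player": player_id, "action": action,
                       "lat": target_lat, "lon": target_lon, "t": time.time()})

print(local_player_update("alice", 41.1496, -8.6109))                   # walking in Porto
print(remote_player_action("bob", "play_sound_cue", 41.1496, -8.6109))  # heard by locals nearby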

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_AcrossTheVirtualBridge.pdf>





ePHoRt project: A telerehabilitation system for reeducation after hip 
replacement surgery
Principal investigators: Yves Rybarczyk (Nova University of Lisbon, Portugal), 
Arián Aladro (Universidad de las Américas, Ecuador), Mario Gonzalez (Health and 
Sport Science, University of Zaragoza, Spain), Santiago Villarreal (Universidad 
de las Américas - Quito, Ecuador), Jan Kleine Detersa (Human Media Interaction, 
University of Twente)

This project aims to develop a web-based system for the remote monitoring of 
rehabilitation exercises in patients after hip replacement surgery. The tool 
intends to facilitate and enhance motor recovery, since patients will be able 
to perform the therapeutic movements at home and at any time. As with any 
rehabilitation program, the time required to recover is significantly reduced 
when the individual can practice the exercises regularly and frequently. 
However, the condition of such patients makes travel to and from medical 
centres difficult, and many cannot afford a private physiotherapist. Thus, 
low-cost technologies will be used to develop the platform, with the aim of 
democratizing access to it. For instance, the motion capture system will be 
based on the Kinect camera, which provides a good compromise between accuracy 
and price. The project will be divided into four main stages. First, the 
architecture of the web-based system will be designed. Three different user 
interfaces will be necessary: (i) one to record quantitative and qualitative 
data from the patient, (ii) another for the therapist to consult the patient's 
performance and adapt the exercises accordingly, and (iii) a third for the 
physician providing medical supervision of the recovery process. Second, it 
will be essential to develop a module that performs automatic assessment and 
validation of the rehabilitation activities, in order to provide real-time 
feedback to the patient on the correctness of the executed movements. Third, we 
also intend to make use of a serious game and affective computing approaches, 
with the intention of motivating the user to perform the exercises over a 
sustained period of time. Finally, an ergonomic study will be carried out in 
order to evaluate the usability of the system.
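To make the automatic-assessment idea concrete, here is a toy example of checking one exercise from Kinect-style 3D joint positions and returning immediate feedback. Joint names, the target angle, and the tolerance are illustrative assumptions rather than the project's clinical rules.

# Toy illustration of the automatic-assessment idea: compute a hip angle from
# Kinect-style 3D joint positions and check it against a therapist-set target.
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

def assess_hip_flexion(hip, knee, shoulder, target_deg=90.0, tolerance_deg=10.0):
    angle = joint_angle(shoulder, hip, knee)
    ok = abs(angle - target_deg) <= tolerance_deg
    return angle, "correct movement" if ok else "adjust your leg position"

# Example frame (metres, camera space): patient roughly at 90 degrees of flexion.
angle, feedback = assess_hip_flexion(hip=[0, 1.0, 2.0], knee=[0, 1.0, 1.5], shoulder=[0, 1.5, 2.0])
print(f"{angle:.0f} deg -> {feedback}")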

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_Full_proposal_YR.pdf>





Big Brother, can you find, classify, detect and track us?
Principal investigators: Marc Décombas, Jean Benoit Delbrouck (TCTS Lab - 
University of Mons, Belgium)

In this project, we will build a system that can detect and recognize objects 
or humans in video and describe them as fully as possible. Objects may be 
moving, as may the people coming in and out of the visual field of the 
camera(s). The project will be split into three main tasks: detection and 
tracking, people re-identification, and image/video captioning.

The system should work in real time and should be able to detect people and 
follow them, re-identify them when they come back into the field of view, and 
give a textual description of what each person is doing.
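As a minimal stand-in for the detection stage, the sketch below runs OpenCV's stock HOG person detector on a video stream; the tracking, re-identification, and captioning stages the project targets would consume these bounding boxes downstream. The video path is a placeholder and the detector choice is only an assumption for illustration.

# Minimal stand-in for the detection stage using OpenCV's built-in HOG detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

capture = cv2.VideoCapture("camera_feed.mp4")   # placeholder video source
while True:
    ok, frame = capture.read()
    if not ok:
        break
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:                  # one box per detected person
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                    # press Esc to quit
        break
capture.release()
cv2.destroyAllWindows()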

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_BigBrother.pdf>





Networked Creative Coding Environments
Principal investigator: Andrew Blanton, Digital Media Art at San Jose State 
University

As part of ongoing research, Andrew Blanton will present a workshop on using 
Amazon Web Services servers for the creation of networked art. The workshop 
will demonstrate sending data from Max/MSP to a Unix-based AWS server and 
receiving the data in p5.js via WebSockets. The workshop will explore the 
critical discourse surrounding data as a borderless medium and the ideas and 
potential of using a medium that can have global reach.
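A hypothetical server-side relay illustrating that data path, written in Python for consistency with the other sketches: Max/MSP sends OSC over UDP to the cloud host, and the script forwards each message as JSON to every connected WebSocket client (for example a p5.js sketch in the browser). Ports, addresses, and the library choices (python-osc, websockets) are assumptions.

# Hypothetical OSC-to-WebSocket relay (server side only). Max/MSP sends to UDP
# port 9000 (e.g. [udpsend <host> 9000]); browsers connect to ws://<host>:8765.
import asyncio, json
import websockets                                   # pip install websockets
from pythonosc.dispatcher import Dispatcher         # pip install python-osc
from pythonosc.osc_server import AsyncIOOSCUDPServer

clients = set()

async def ws_handler(websocket, path=None):
    clients.add(websocket)                           # a p5.js client connected
    try:
        await websocket.wait_closed()
    finally:
        clients.discard(websocket)

def osc_handler(address, *args):
    message = json.dumps({"address": address, "values": list(args)})
    for ws in list(clients):                         # fan the OSC message out
        asyncio.create_task(ws.send(message))

async def main():
    dispatcher = Dispatcher()
    dispatcher.set_default_handler(osc_handler)
    osc_server = AsyncIOOSCUDPServer(("0.0.0.0", 9000), dispatcher, asyncio.get_running_loop())
    await osc_server.create_serve_endpoint()
    async with websockets.serve(ws_handler, "0.0.0.0", 8765):
        await asyncio.Future()                       # run until interrupted

asyncio.run(main())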

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.FinalProposal_NCCE.pdf>







AUDIOVISUAL EXPERIENCE THROUGH IMAGE HOLOGRAPHY
Principal investigators: Maria Isabel Azevedo (ID+ Research Institute for 
Design, Media and Culture, University of Aveiro), Elizabeth Sandford-Richardson 
(University of the Arts, Central Saint Martins College of Art and Design)

Today in interactive art, there are not only representations that speak of the 
body but actions and behaviours that involve the body. In digital holography, 
the image appears and disappears from the observer's field of vision; because 
the holographic image is light, we can see multidimensional spaces, shapes and 
colours existing at the same time: the presence and absence of the image on the 
holographic plate. The image can float in front of the plate, so that people 
sometimes try to touch it with their hands.

For the viewer, this means interactive events with no beginning or end, which 
can be perceived in any direction, forward or backward, depending on the 
relative position and the time the viewer spends in front of the hologram.

In this workshop we are proposing an audiovisual interactive installation 
composed of four digital holograms and a spatial soundscape. When viewers move 
in front of each hologram, different sound sources are triggered. The outcome 
will be presented in the last week of July with an invited performer. We are 
looking for sound designers and interaction programmers.
Keywords: Digital holographic image, holographic performance, sound 
spatialization, motion capture
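A small illustrative sketch (not the installation's code) of the interaction rule described above: the viewer's tracked position selects which hologram's sound source to trigger. Zone boundaries and cue names are invented.

# Map a motion-captured viewer position to the hologram they stand in front of.
hologram_zones = {"hologram_1": (0.0, 1.0), "hologram_2": (1.0, 2.0),
                  "hologram_3": (2.0, 3.0), "hologram_4": (3.0, 4.0)}

def sound_cue_for(viewer_x: float) -> str | None:
    for name, (x_min, x_max) in hologram_zones.items():
        if x_min <= viewer_x < x_max:
            return f"play:{name}_soundscape"
    return None

print(sound_cue_for(2.4))   # viewer in front of the third hologram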

Full Project 
Description<http://artes.ucp.pt/enterface17/proposals/02.FinalProposal_holo.pdf>


Study of the reality level of VR simulations
Principal investigator: Andre Perrotta, UCP/CITAR


We propose to develop a VR simulation of near-collision experiences with large 
vehicles from a first-person perspective, based on 360° video, spatialized 
audio, and force feedback using fans and motors. It will be experienced by 
users wearing head-mounted stereoscopic VR gear in a MOCAP (motion capture) 
enabled environment that allows a one-to-one relationship between the real and 
virtual worlds.
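A toy illustration of two of the ingredients mentioned above, under invented units and thresholds: a one-to-one mapping from tracked MOCAP coordinates to the virtual camera, and a fan duty cycle that rises as a virtual vehicle approaches the user.

# Illustrative sketch only: 1:1 real-to-virtual mapping and fan-based force feedback.
import math

def mocap_to_virtual(position_m):
    x, y, z = position_m          # one metre tracked = one metre in the virtual world
    return {"cam_x": x, "cam_y": y, "cam_z": z}

def fan_duty_cycle(user_pos, vehicle_pos, full_blast_at_m=1.0, max_range_m=10.0):
    d = math.dist(user_pos, vehicle_pos)
    return max(0.0, min(1.0, (max_range_m - d) / (max_range_m - full_blast_at_m)))

print(mocap_to_virtual((1.2, 1.7, 0.4)))
print(f"fan at {fan_duty_cycle((0, 0, 0), (3.0, 0, 0)):.0%}")   # vehicle 3 m away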



