Interdisciplinary PhD scholarship in Computational Modeling of Visual Communication

Computational analysis and simulation of non-manual communicative signals in sign languages and multimodal communication.

This PhD project will be part of the larger VSLT project, which aims to achieve a breakthrough in the computational analysis and simulation of visual communication. It has a special focus on sign languages (SLs), in which communication is entirely visual, and on multimodal communication, in which speech typically plays the primary role while visual cues often play an important supporting one. This highly interdisciplinary project brings together researchers in computer vision and machine learning, in linguistics specialising in multimodal communication and sign language, and in computer graphics.

The PhD project will focus on non-manual communicative signals such as facial expressions, eye gaze direction, head movements, and body poses. It will create datasets rich in non-manual communicative signals for both Catalan Sign Language (LSC) and spoken Catalan, develop machine-learning-based annotation, build predictive models of the use of non-manual communicative signals, and use these models to extend the generative capacity of conversational signing avatars. The research may also include comparison with models of visual communicative signals in Sign Language of the Netherlands (NGT) and in spoken Dutch, in collaboration with researchers at the University of Amsterdam.

This position is co-funded by the PhD fellowship program of the Department of Information and Communication Technologies at Universitat Pompeu Fabra (DTIC-UPF), and the María de Maeztu Strategic Research Programme at DTIC-UPF on Artificial and Natural Intelligence for ICT and beyond (CEX2021-001195-M).