Multisensory Processing at MEL

Recent work at the Multisensory Experience Lab concerns physics-based audio and movement models beyond deep learning, and hints at what's to come: graphical and physics-based deep learning. Read on to learn more about our viewpoint and to watch the talk.

In the Multisensory Experience Lab we investigate the combination of different input and output modalities in interactive applications. We are interested both in the development of novel hardware and software technologies and in the evaluation of user experience. We apply our technologies in a variety of areas such as health, rehabilitation, education, art and entertainment. We are particularly interested in researching topics related to sonic interaction design for multimodal environments, simulating walking experiences, sound rendering and spatialization, haptic interfaces, cinematic VR, and the evaluation of user experience in multimodal environments.

MEL traditionally relies on physics-based audio and haptic models for multisensory processing. Currently we focus on physics-based deep learning and differentiable programming for constructing models or estimating their parameters. We advocate that this interpretable and explainable approach to machine learning will in time solve the fundamental problems of both deep learning and multisensory processing: it highlights reasoning instead of mere mapping, and it optimizes well-understood classical signal processing algorithms through inductive, structural biases rather than a generic black-box approach.
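
To make this concrete, below is a minimal sketch, in JAX with Optax, of what differentiable-programming-based parameter estimation can look like. It is our illustration under assumed names and values, not MEL's actual codebase: gradient descent recovers the damping and amplitude of a single resonant mode, the building block of physics-based modal sound synthesis.

```python
import jax
import jax.numpy as jnp
import optax

SR = 16000                         # sample rate in Hz (illustrative)
t = jnp.arange(2048) / SR          # 128 ms time axis

FREQ = 440.0  # mode frequency held fixed: a plain waveform loss is highly
              # non-convex in frequency, so pitch-like parameters are usually
              # fitted with spectral losses instead.

def mode(params, t):
    """Physics-based model: impulse response of one damped resonant mode."""
    damping, amp = params
    return amp * jnp.exp(-damping * t) * jnp.sin(2.0 * jnp.pi * FREQ * t)

# Synthetic "recording" with known ground truth: damping 8 s^-1, amplitude 0.9.
target = mode(jnp.array([8.0, 0.9]), t)

def loss(params):
    # Mean squared error between the model output and the recording.
    return jnp.mean((mode(params, t) - target) ** 2)

optimizer = optax.adam(5e-2)

@jax.jit
def step(params, opt_state):
    grads = jax.grad(loss)(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state

params = jnp.array([1.0, 0.5])     # deliberately wrong initial guess
opt_state = optimizer.init(params)
for _ in range(2000):
    params, opt_state = step(params, opt_state)

print("estimated [damping, amplitude]:", params)  # approaches [8.0, 0.9]
```

Because the model is a structured, interpretable signal processing algorithm rather than a black box, the recovered parameters are physically meaningful quantities that transfer directly back into the synthesis model.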

More info, including the talk video, at https://danishsoundcluster.dk/multisensory-processing.