We study how our senses provide meaningful information about the world, and how the brain integrates these percepts and produces appropriate actions in response to events around us.
Problem
Our senses are limited: for example, we see acutely with only a small part of the eye (the fovea), and the inner ear does not directly encode where sounds come from, so the brain must infer sound location from acoustic cues. From this incomplete information, the brain needs to decide what the events and objects of interest are, and where they are, so that we can act on them accordingly. Unfortunately, this task becomes harder in crowded, noisy environments (for example, in a bar) and when the senses are impaired (for example, due to sensorineural hearing loss).
Understanding which strategies humans use to integrate (multi)sensory percepts is not trivial: the neural mechanisms and computations underlying multisensory integration and sensorimotor control are hidden within the nervous system.
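One benchmark strategy discussed in this literature is maximum-likelihood cue combination, in which each sense contributes in proportion to its reliability (inverse variance). The Python sketch below illustrates only that textbook model, not our own analyses; the function name and example numbers are hypothetical.

```python
import numpy as np

def mle_integrate(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each unisensory estimate is weighted by its inverse variance;
    the integrated estimate has lower variance than either cue alone.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = np.sum(weights * estimates)
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# Hypothetical example: a visual and an auditory estimate of the same
# source azimuth (in degrees), with vision more reliable than hearing.
loc, var = mle_integrate(estimates=[2.0, 8.0], variances=[1.0, 4.0])
print(f"integrated estimate: {loc:.1f} deg, variance: {var:.2f}")
# -> integrated estimate: 3.2 deg, variance: 0.80
```

Whether, and under which conditions, human observers approach such an optimal benchmark is exactly the kind of question that cannot be answered without careful stimulus-response measurements.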
Approach
We approach this problem from a systems-neuroscience point of view, using psychophysical, modeling, and neuroimaging techniques. We typically regard the system under study as a black box whose behavior can be characterized from its inputs (the stimuli) and its outputs (the responses); a toy example of such a stimulus-response analysis follows the list below. To that end, we study the full action-perception cycle in unique psychophysical sensorimotor setups. These labs allow us to study the mechanisms underlying multisensory integration in humans (including patients with sensorimotor impairments) as they sense and act in complex scenes that can emulate realistic environments. We focus on the auditory, visual, and vestibular systems, studying:
- speech perception in the free field or over headphones
- sound localization with the subject seated in the center of a hemisphere of 128 loudspeakers
- sound-motion pursuit with a robotic arm that moves a loudspeaker around the subject, who is seated on a rotating chair
- spatial updating and multisensory integration in a vestibular apparatus that can rotate the subject about two axes simultaneously
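As a minimal illustration of the black-box approach referred to above, the sketch below regresses simulated localization responses on target positions to obtain a response gain and bias, two standard summary statistics in localization psychophysics. The data, parameter values, and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sound-localization experiment: target azimuths (deg)
# drawn from the frontal hemifield, and pointing responses generated
# with an undershoot (gain < 1), a small bias, and response noise.
targets = rng.uniform(-60.0, 60.0, size=200)
responses = 0.9 * targets + 2.0 + rng.normal(0.0, 5.0, size=200)

# Treat the subject as a black box: regress output on input.
# polyfit returns [gain, bias] for the linear model
#   response = gain * target + bias
gain, bias = np.polyfit(targets, responses, deg=1)
residual_sd = np.std(responses - (gain * targets + bias))

print(f"gain = {gain:.2f}, bias = {bias:.1f} deg, "
      f"residual sd = {residual_sd:.1f} deg")
```

A gain near 1 with a small bias indicates accurate localization; systematic deviations from this pattern are one way to characterize how the underlying sensorimotor transformations change, for instance under hearing impairment.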
All our labs are dark, sound-attenuated, and anechoic, and are equipped with stimulus-generation hardware (LEDs, speakers, TDT) and with tools for acquiring gaze, head, and eye movement data (Pupil Labs, magnetic search coils, TDT). We have fNIRS (Artinis) and EEG (TMSi) devices to enable neuroimaging studies, and sensitive microphones (ACO Pacific, BinauralEnthusiast, Etymotic) to record (binaural) sounds. We are also actively developing tools to run experiments and tests online or at home.
We also use our labs and paradigms to understand how the brain functions when the senses or motor control are impaired, and we hope to contribute to the diagnosis, treatment, and counseling of individuals with these impairments. To study individuals with hearing impairments (including deaf individuals using cochlear implants), we actively collaborate with clinics (the Hearing and Implants group at the ENT department of the RadboudUMC, and the University of Miami) and with hearing-device companies (Advanced Bionics, Cochlear). Recently, we have also started to study eye motor control in multiple sclerosis patients, in collaboration with Noldus Technologies.