Abstract
We develop computational and experimental methods to gain insight into visual functions and psychiatric disorders. We also build deep learning models that predict human behaviors and identify individuals with these disorders. In this talk, I will share our recent innovations in data and models, aimed at understanding and predicting visual attention in natural scenes.
I will first introduce our new approach to characterizing complex scenes with rich semantics, which allows us to quantify behavioral differences across multiple clinical groups. As an example, I will elaborate on findings that use these data and models to decipher the neurobehavioral signature of autism. I will then demonstrate an innovative psychophysical method that enables large-scale collection of attention data. Finally, I will present our deep learning models that learn semantic and sentiment attributes from complex natural scenes, leading to breakthrough performance in predicting attention and in identifying individuals with autism.