Our novel methodological tools offer a unique integration of diverse data (MEG, fMRI, convolutional neural networks, and behavior) in a common RSA framework, enabling a holistic description of human brain function. This is fundamentally different from any tools available to date, and will enable novel experimental paradigms for testing hypotheses about perception and cognition in the human brain.
To apply these tools, we have focused our efforts on a critical research area: human visual recognition. A multistage distributed network of cortical visual pathways provides the neural basis for object recognition in humans. While the computations and tuning properties of low-level neurons have been investigated in detail, the precise neural computations transforming low-level features into mid- and high-level representations remain terra incognita. By operationalizing representations as similarities across pairs of stimuli in an RSA framework, we study the hierarchical cascade of the human visual system in a systematic way.
The past five years have seen considerable progress in using deep neural networks to model responses in the visual cortex. Deep neural networks (DNNs) are now the most successful biologically inspired models of computer vision, making them invaluable tools for studying the computations performed by the human visual system. Recent work has shown that these models achieve accuracy on par with humans in many recognition tasks. We have also shown that computer vision models share a hierarchical correspondence with neural object representations.
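One way to quantify such hierarchical correspondence is to ask which DNN layer's representational geometry best matches a given neural recording: early layers should match early visual areas, deep layers should match high-level regions. The sketch below illustrates the idea on synthetic activations; the function name and the use of correlation-distance RDMs with Spearman comparison are our illustrative choices, not a specific published pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def best_matching_layer(layer_activations, neural_patterns):
    """Find the DNN layer whose representational geometry best matches a neural recording.

    layer_activations: list of (n_stimuli, n_units) arrays, one per DNN layer
    neural_patterns:   (n_stimuli, n_channels) array (e.g. fMRI voxels in one region)
    Returns the index of the layer whose RDM correlates most strongly
    (Spearman) with the neural RDM, plus the per-layer correlations.
    """
    neural_rdm = pdist(neural_patterns, metric="correlation")
    rhos = [spearmanr(pdist(a, metric="correlation"), neural_rdm)[0]
            for a in layer_activations]
    return int(np.argmax(rhos)), rhos

# Demo on synthetic data: the "layer" identical to the neural patterns wins.
rng = np.random.default_rng(1)
neural = rng.normal(size=(12, 50))
layers = [rng.normal(size=(12, 30)), neural.copy(), rng.normal(size=(12, 40))]
idx, rhos = best_matching_layer(layers, neural)
```

Repeating this per brain region yields a layer-to-region assignment, which is how a hierarchical correspondence between model depth and cortical processing stage is typically read off.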
Most DNNs adopt a feedforward architecture that sequentially transforms visual signals into complex representations, akin to the human ventral stream. Even though models with purely feedforward architectures can easily recognize whole objects, they often mislabel objects in challenging conditions, such as incongruent object-background pairings, or ambiguous and partially occluded inputs. In contrast, models that incorporate recurrent connections are robust to partially occluded objects, suggesting the importance of recurrent processing for object recognition.
To continue bridging the gap between human and computer vision, we explore how the duration and sequencing of ventral stream processes can serve as constraints for guiding the development of computational models with recurrent architectures.
Combining multimodal data to capture an integrated view of brain function in representational space is a powerful approach to studying the human brain and will yield a new perspective on the fundamental analysis of brain behavior and its neurophysiological underpinnings. The approach, termed representational similarity analysis (RSA), compares representational dissimilarity matrices (stimulus × stimulus similarity structures) across imaging modalities and data types. We are developing computational tools to link neural data (MEG, fMRI), behavioral data (e.g., button presses, video camera data), and computational models (deep neural networks, DNNs) using RSA.
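The core RSA computation can be sketched in a few lines: build one RDM per modality (each entry is the dissimilarity between the response patterns evoked by a pair of stimuli), then correlate the RDMs' upper triangles. The snippet below is a minimal illustration using synthetic stand-ins for MEG and fMRI patterns; the function names are ours.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix (RDM): rows are stimuli,
    columns are measurement channels; 1 - Pearson r as dissimilarity."""
    return squareform(pdist(patterns, metric="correlation"))

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs,
    the core comparison step of RSA."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu])[0]

# Demo with synthetic patterns (stand-ins for real MEG/fMRI/DNN/behavioral data).
rng = np.random.default_rng(0)
meg = rng.normal(size=(20, 306))    # 20 stimuli x 306 MEG sensors
fmri = rng.normal(size=(20, 500))   # 20 stimuli x 500 fMRI voxels
rho = compare_rdms(rdm(meg), rdm(fmri))
```

Because the comparison happens in the abstract stimulus × stimulus space, the same two functions link any pair of data types, which is what makes RSA a common framework across modalities.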
The tools are exemplified in a novel computational method we recently developed, which fuses fMRI and MEG data, yielding a first-of-its-kind visualization of the dynamics of object processing in humans. Intuitively, the method links the MEG temporal and fMRI spatial patterns by requiring stimuli to be represented equivalently in both modalities (if two visual stimuli evoke similar MEG patterns, they should also evoke similar fMRI patterns). To demonstrate this method, we captured the spatiotemporal dynamics of ventral stream activation to visual objects in sighted individuals in two independent data sets.
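The intuition above can be sketched as a time-resolved RSA: compute one MEG RDM per time point and correlate each with the RDM of an fMRI region of interest, producing a fusion timecourse for that region. This is a simplified illustration on synthetic data, not the full published method (which also involves searchlight mapping and statistical thresholding); the function name is ours.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def fusion_timecourse(meg_ts, fmri_roi):
    """Correlate a time-resolved MEG RDM with a static fMRI ROI RDM.

    meg_ts:   (n_times, n_stimuli, n_sensors) stimulus-evoked MEG patterns
    fmri_roi: (n_stimuli, n_voxels) fMRI patterns from one region of interest
    Returns one Spearman rho per time point, indicating when the MEG
    representation matches that region's spatial representation.
    """
    fmri_rdm = pdist(fmri_roi, metric="correlation")  # condensed upper triangle
    return np.array([
        spearmanr(pdist(meg_ts[t], metric="correlation"), fmri_rdm)[0]
        for t in range(meg_ts.shape[0])
    ])

# Demo on synthetic data (real analyses use measured evoked patterns).
rng = np.random.default_rng(0)
meg_ts = rng.normal(size=(5, 10, 64))   # 5 time points, 10 stimuli, 64 sensors
fmri_roi = rng.normal(size=(10, 200))   # 10 stimuli, 200 voxels
rhos = fusion_timecourse(meg_ts, fmri_roi)
```

Running this for every voxel neighborhood instead of a single ROI yields the spatiotemporal movies of object processing described above.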
Our efforts concentrate on: a) methodological development of these tools, by extending MEG-fMRI fusion maps to experimental contrasts, derivation of statistical maps and thresholds, and optimization of spatio-temporal resolution; b) validation, by concretely demonstrating that a MEG-fMRI fusion approach can access deep neural signals which are very hard to localize with MEG alone; and c) efficient software implementations, by creating effective Matlab and GPU tools. In the long run, our goal is to expand the limits of imaging technologies by developing and popularizing computational tools that integrate the spatial and temporal richness of multi-modality data.
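For the statistical maps and thresholds in (a), one standard nonparametric option is a sign-permutation test across subjects on the per-subject fusion correlations. The sketch below (in Python for brevity, though the text mentions Matlab/GPU implementations) is a generic illustration of that technique, not the project's specific procedure.

```python
import numpy as np

def sign_permutation_test(subject_rhos, n_perm=10000, seed=0):
    """Two-sided sign-permutation test on per-subject fusion correlations
    (one value per subject at a given time point / voxel).

    Under the null hypothesis of no effect, each subject's correlation is
    symmetric around zero, so randomly flipping signs yields a null
    distribution for the group mean. Returns the permutation p-value.
    """
    subject_rhos = np.asarray(subject_rhos, dtype=float)
    rng = np.random.default_rng(seed)
    observed = subject_rhos.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_perm, subject_rhos.size))
    null = (signs * subject_rhos).mean(axis=1)
    return float((np.abs(null) >= abs(observed)).mean())
```

Applying this at every time point or voxel and then correcting for multiple comparisons (e.g., with cluster or FDR methods) produces thresholded statistical fusion maps.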
We use our novel methodological approaches to study the atypical brain. Our approach to this goal is exemplified in the following projects.
Functional reorganization of brain representations in blindness (Collaborators: Aude Oliva, Santani Teng): The human visual cortex does not fall silent in blindness. Substantial neuroimaging and neurological evidence has shown that visual cortex activation in blind individuals is functionally relevant for nonvisual tasks (e.g. Braille reading, verbal memory, and auditory spatial tasks). Yet, the nature of these computations and the governing principles of functional reorganization in blindness remain elusive. The two main theoretical frameworks proposed so far posit opposing hierarchical organizations of representations. The co-opted hierarchy framework suggests that the visually deprived cortex processes information in a consistent bottom-up hierarchy similar to its role in visual processing. In contrast, the reverse hierarchy framework predicts that early visual cortex receives high-level content at the end of the processing cascade, with the processing hierarchy reversed compared to the typical brain. The overall objective of this project is to disambiguate between these two theoretical frameworks by constructing a finely resolved picture of the sensory processing cascade in blind persons. We will use the MEG-fMRI fusion method, allowing us for the first time to capture the hierarchical neural cascade of Braille processing in blind individuals.
Variability in the auditory-evoked neural response as a potential mechanism for dyslexia (Collaborators: John Gabrieli, Tracy Centanni): The goal of this project is to investigate the role of neural variability in dyslexia. In particular, we explore whether trial-by-trial neural variability differs in the auditory and/or visual cortex of children with dyslexia when compared to neurotypical children. Preliminary results indicate that dyslexia is associated with decreased consistency in the neural response to both auditory and visual stimuli.
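Trial-by-trial consistency of an evoked response can be quantified in several ways; one simple measure is the mean pairwise correlation between single-trial responses to the same stimulus, sketched below on synthetic data. The function name and this particular metric are illustrative choices, not the project's specific analysis.

```python
import numpy as np

def intertrial_consistency(trials):
    """Mean pairwise Pearson correlation across repeated trials.

    trials: (n_trials, n_timepoints) single-trial evoked responses to one
    stimulus. Lower values indicate a more variable (less consistent)
    neural response across repetitions.
    """
    r = np.corrcoef(trials)                 # trial-by-trial correlation matrix
    iu = np.triu_indices_from(r, k=1)       # unique trial pairs only
    return float(r[iu].mean())

# Demo: trials sharing a common evoked waveform are far more consistent
# than trials of pure noise.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 6, 100))
consistent = np.tile(signal, (20, 1)) + 0.01 * rng.normal(size=(20, 100))
noisy = rng.normal(size=(20, 100))
```

Comparing this statistic between groups (e.g., children with dyslexia vs. neurotypical children) is one way to operationalize "decreased consistency in the neural response."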
Sensitivity to speech distributional information in children with autism (Collaborators: John Gabrieli, Zhenghan Qi): This project investigates whether children with autism spectrum disorder are sensitive to probability cues in speech. In the language acquisition literature, ample evidence suggests that neurotypical children are exquisitely poised to capture the distributional information embedded in speech, learning various aspects of phonotactic and syntactic rules from it. Children with autism, however, demonstrate impaired performance in such tasks. We use an auditory mismatch paradigm (syllables ‘ba’ and ‘da’ delivered with different probabilities) to detect deficits in probabilistic learning. Preliminary findings have revealed that impaired reading skills in autism are associated with atypical sensitivity to syllable frequency.
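The stimulus stream in such a mismatch paradigm is simple to generate: a frequent standard syllable interleaved with a rare deviant at a fixed probability. A minimal sketch (our parameter names; the actual experiment's trial counts and probabilities may differ):

```python
import random

def oddball_sequence(n_trials, p_deviant=0.15, standard="ba", deviant="da", seed=0):
    """Generate a syllable stream for an auditory mismatch (oddball) paradigm:
    mostly `standard` syllables, with `deviant` occurring at p_deviant."""
    rng = random.Random(seed)  # seeded for a reproducible sequence
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n_trials)]

seq = oddball_sequence(1000, p_deviant=0.15)
```

The neural response to the rare deviant relative to the standard (the mismatch response) indexes how strongly the listener has encoded the syllable probabilities, which is the quantity compared between groups here.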