Despite significant advances in neuroscience in recent years, the complexity of neural activity has defied efforts to formulate a comprehensive understanding of the human brain. Progress towards this goal has been hindered by technological and methodological constraints in accessing brain function, motivating recent efforts to develop and apply new technologies. A prominent barrier is the limitation of current non-invasive brain imaging technologies, which provide either high temporal resolution (MEG/EEG) or high spatial resolution (fMRI), but not both simultaneously. However, the field is now transforming: multivariate pattern analysis tools are ubiquitous in fMRI and are becoming increasingly popular in MEG/EEG. The introduction of modern machine learning algorithms to decipher information from ongoing neuronal processes has drastically improved the quality of extracted neural signals. This is the central theme of our research, which focuses on novel methodology for discerning neural representations from MEG data and on the development of multimodal imaging techniques. The group follows three main research lines:
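To illustrate the multivariate pattern analysis approach in MEG/EEG, here is a minimal sketch of time-resolved decoding. All data are synthetic stand-ins for epoched MEG recordings, and the trial counts, sensor counts, and injected condition effect are illustrative assumptions, not values from our studies:

```python
# Sketch of time-resolved multivariate pattern analysis (MVPA) for MEG decoding.
# Synthetic data: real pipelines would load epoched trials (trials x sensors x time).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 60, 306, 50   # 306 sensors, as in whole-head MEG systems
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = np.repeat([0, 1], n_trials // 2)         # two stimulus conditions
X[y == 1, :20, 25:] += 0.6                   # inject a condition difference from t=25 on

# Train and cross-validate a classifier independently at every timepoint; the
# accuracy time course reveals *when* condition information emerges in the signal.
clf = LogisticRegression(max_iter=1000)
accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                     for t in range(n_times)])
```

Decoding accuracy hovers near chance before the injected effect and rises after it, which is the basic logic behind reading out the latency of neural representations from MEG.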
Our novel MEG-fMRI fusion technique, published in Nature Neuroscience, enables a unique view of human brain function with millisecond-millimeter resolution. Using this method, we produced a first-of-its-kind movie revealing the activation cascade of the human ventral visual pathway. See the MIT press release and a related article discussing the method.
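One way such fusion can be carried out is via representational similarity: a time-resolved MEG representational dissimilarity matrix (RDM) is correlated with RDMs from different fMRI-defined regions. The sketch below uses synthetic RDMs, and the region names, condition counts, and mixing weights are illustrative assumptions:

```python
# Sketch of representational-similarity-based MEG-fMRI fusion (synthetic data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_cond, n_times = 20, 40
iu = np.triu_indices(n_cond, k=1)            # RDMs are symmetric; keep upper triangle
n_pairs = len(iu[0])

# One RDM per fMRI region, one MEG RDM per timepoint (vectorized upper triangles).
fmri_rdm = {"V1": rng.random(n_pairs), "IT": rng.random(n_pairs)}
meg_rdms = rng.random((n_times, n_pairs))
meg_rdms[10] = 0.7 * fmri_rdm["V1"] + 0.3 * rng.random(n_pairs)  # V1-like geometry early
meg_rdms[30] = 0.7 * fmri_rdm["IT"] + 0.3 * rng.random(n_pairs)  # IT-like geometry late

# Fusion: correlate each region's RDM with every MEG timepoint, yielding a
# time course of when that region's representational geometry appears in MEG.
fusion = {region: np.array([spearmanr(rdm, meg_rdms[t]).correlation
                            for t in range(n_times)])
          for region, rdm in fmri_rdm.items()}
```

Here the V1 time course peaks early and the IT time course late, mirroring how a millisecond-millimeter activation cascade can be recovered by combining the two modalities.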
Brainstorm software is an open-source environment dedicated to the analysis of brain recordings (MEG, EEG, NIRS, ECoG, depth electrodes, animal electrophysiology) with 13,000+ registered users and 400+ related publications. Dimitrios Pantazis is a key collaborator in the development team, with major contributions in time-frequency analysis tools, labeling of cortical surfaces, statistical analysis of cortical activation maps, multivariate pattern analysis, and machine learning.
Our MEG study investigating auditory space perception has caught the attention of Science News, UK Daily Mail, and APS (Association for Psychological Science)! The study provides the first neuromagnetic evidence for a robust auditory space size representation in the human brain.
In a series of empirical experiments, we showed that neural signals at the scale of cortical orientation columns (~800 μm) are accessible to MEG measurements in humans! Our work has been highlighted in a spotlight article in Trends in Cognitive Sciences, which describes as a 'game changer' our finding that MEG signals contain rich spatial information for decoding neural states.
Artificial deep neural networks (DNNs) have become such powerful computer vision models that they now reach human-level performance on object categorization. By comparing a DNN tuned to the statistics of real-world visual recognition with temporal (MEG) and spatial (fMRI) visual brain representations, we showed that the DNN captured the stages of human visual processing from early visual areas to the dorsal and ventral streams. Our results provide an algorithmically informed view of the spatio-temporal dynamics of visual object recognition in the human brain.
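A common way to make such DNN-brain comparisons concrete is to build RDMs from layer activations and correlate them with brain RDMs. A minimal sketch with synthetic activations follows; the layer sizes, the toy transform between layers, and the simulated "brain" RDM are all illustrative assumptions, not our actual pipeline:

```python
# Sketch: RDMs from DNN layer activations, compared to a brain RDM (synthetic data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_images = 30
shallow = rng.standard_normal((n_images, 4096))          # a shallow layer, flattened
deep = np.maximum(0.0, shallow @ rng.standard_normal((4096, 512)))  # toy deeper layer

# 1 - Pearson correlation across units is a common RDM distance for activations.
rdm_shallow = pdist(shallow, metric="correlation")
rdm_deep = pdist(deep, metric="correlation")

# Simulate a brain RDM (e.g., from fMRI in a high-level visual area) that shares
# the deep layer's representational geometry plus noise.
brain_rdm = rdm_deep + 0.5 * rdm_deep.std() * rng.standard_normal(rdm_deep.shape)

rho_deep = spearmanr(rdm_deep, brain_rdm).correlation
rho_shallow = spearmanr(rdm_shallow, brain_rdm).correlation
```

Because the simulated brain RDM is built from the deep layer, `rho_deep` exceeds `rho_shallow`; with real data, the same comparison maps which DNN layers best match which brain areas and timepoints.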
The MEG Lab will have a strong presence at the Vision Science Society 2017 meeting. Including the MODVIS satellite workshop, we will present a keynote talk, 5 research talks, and 2 posters!