Despite significant advances in neuroscience in recent years, the complexity of neural activity has defied efforts to formulate a comprehensive understanding of the human brain. Progress towards this goal has been hindered by technological and methodological constraints on accessing brain function, motivating recent efforts to develop and apply new technologies. A prominent barrier is the limitation of current non-invasive brain imaging technologies, which offer either high temporal resolution (MEG/EEG) or high spatial resolution (fMRI), but not both simultaneously. The field is now transforming, however: multivariate pattern analysis tools are ubiquitous in fMRI and increasingly popular in MEG/EEG. The introduction of modern machine learning algorithms to decipher information from ongoing neuronal processes has drastically improved the quality of extracted neural signals. This is the central theme of our research, which focuses on novel methodology for discerning neural representations from MEG data, the development of multimodal imaging techniques, and the characterization of pathological function in neurological disorders. The group pursues two main research lines: i) resolving the neural substrate that supports human visual recognition; and ii) characterizing pathological function in the atypical brain, including Alzheimer's disease and other neurological disorders.
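The core idea behind time-resolved multivariate pattern analysis is simple: train a classifier at each time point of the MEG signal and track when stimulus information becomes decodable. The sketch below illustrates this on synthetic data; the array shapes, the injected effect, and the choice of a linear SVM are illustrative assumptions, not the lab's actual pipeline.

```python
# Minimal sketch of time-resolved MVPA decoding on MEG-like data.
# All data are synthetic; shapes and effect sizes are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 60, 306, 100   # trials x MEG sensors x time points
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # two stimulus conditions

# Inject a weak condition effect on a few sensors after "stimulus onset"
# (time index 40), mimicking the emergence of stimulus information
X[y == 1, :10, 40:] += 1.0

# Train and cross-validate a classifier independently at each time point
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = SVC(kernel="linear")
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# Decoding accuracy sits at chance before onset and rises once
# condition information is present in the sensor patterns
print(accuracy[:40].mean(), accuracy[40:].mean())
```

The resulting accuracy time course is the basic building block for questions such as when a given stimulus dimension becomes available to the brain.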
Dimitrios Pantazis is the director of the MEG lab at MIT. The MEG lab is part of the Martinos Imaging Center at MIT, operates as a core facility, and is accessible to all members of the local research community.
Alzheimer's disease is a network-based disease affecting large-scale brain systems involving the medial temporal and heteromodal cortices, making these networks highly promising quantitative disease biomarkers. We have been developing deep learning algorithms tuned to brain network analyses, namely graph convolutional networks. These networks generalize convolutional neural networks to graph-structured (network) datasets, such as MEG connectivity networks in Alzheimer's disease. Our novel architecture, which combines MEG functional connectivity with PET molecular-level networks, automatically learns internal features that predict the risk of progression from clinically normal to prodromal Alzheimer's disease.
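To make the generalization from grids to graphs concrete, here is a minimal sketch of one graph-convolution layer in the widely used symmetric-normalization form. The adjacency matrix stands in for a functional-connectivity network; the node counts, features, and weights are synthetic stand-ins, not the architecture described above.

```python
# Minimal sketch of one graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
# The adjacency matrix A plays the role of a connectivity network; all values
# here are synthetic and purely illustrative.
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step over a weighted, undirected graph."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
n_nodes, n_feat, n_hidden = 68, 4, 8          # e.g., cortical regions x features
A = rng.random((n_nodes, n_nodes))
A = (A + A.T) / 2                             # symmetric connectivity weights
H = rng.standard_normal((n_nodes, n_feat))    # per-region input features
W = rng.standard_normal((n_feat, n_hidden))   # learnable layer weights

H1 = gcn_layer(A, H, W)
print(H1.shape)   # each node now aggregates features from its neighbors
```

Stacking such layers lets each region's representation absorb information from progressively wider graph neighborhoods, which is what makes the approach a natural fit for connectivity data.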
In a Nature Communications article, we resolved the time course of face processing in the human brain with MEG. We found that facial gender and age information emerged before identity information, suggesting a coarse-to-fine processing of face dimensions. We also found that identity and gender representations of familiar faces were enhanced very early on, suggesting that the behavioral benefit for familiar faces results from tuning of early feed-forward processing mechanisms.
Our novel MEG-fMRI fusion technique, published in Nature Neuroscience, enables a unique view of human brain function with millisecond-millimeter resolution. Using this method, we produced a first-of-its-kind movie revealing the activation cascade of the human ventral visual pathway. See the MIT press release and a related article discussing the method.
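Fusion of this kind rests on representational similarity: MEG and fMRI are linked not sample-by-sample but by correlating their representational dissimilarity matrices (RDMs) across experimental conditions. The sketch below shows the core computation on synthetic data; the actual published pipeline differs in its RDM construction and statistics.

```python
# Sketch of representational-similarity fusion: correlate time-resolved MEG
# RDMs with an fMRI RDM from one brain region. Data are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_cond, n_times = 12, 50
iu = np.triu_indices(n_cond, k=1)             # upper triangle of an RDM

# Synthetic MEG RDMs (one per time point) and one fMRI RDM for a region
meg_rdms = rng.random((n_times, n_cond, n_cond))
fmri_rdm = rng.random((n_cond, n_cond))

# Make the "late" MEG RDMs resemble the fMRI RDM, mimicking the moment
# this region's representational geometry appears in the MEG signal
meg_rdms[30:] = 0.2 * meg_rdms[30:] + 0.8 * fmri_rdm

# Fusion time course: similarity of representational geometry over time
fusion = np.array([
    spearmanr(meg_rdms[t][iu], fmri_rdm[iu]).correlation
    for t in range(n_times)
])
print(fusion[:30].mean(), fusion[30:].mean())
```

Repeating this for RDMs from many brain locations yields a time course per location, which is what assembles into a millisecond-millimeter movie of the activation cascade.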
Brainstorm software is an open-source environment dedicated to the analysis of brain recordings (MEG, EEG, NIRS, ECoG, depth electrodes, animal electrophysiology) with 13,000+ registered users and 400+ related publications. Dimitrios Pantazis is a key collaborator in the development team, with major contributions in time-frequency analysis tools, labeling of cortical surfaces, statistical analysis of cortical activation maps, multivariate pattern analysis, and machine learning.
Our MEG study investigating auditory space perception has caught the attention of Science News, UK Daily Mail, and APS (Association for Psychological Science)! The study provides the first neuromagnetic evidence for a robust auditory space size representation in the human brain.
In a series of empirical experiments, we showed that neural signals at the level of cortical orientation columns (~800 μm) are accessible by MEG measurements in humans! Our work has been highlighted in a spotlight article in Trends in Cognitive Sciences, which describes as a 'game changer' the finding that MEG contains rich spatial information for decoding neural states.
Artificial deep neural networks (DNNs) are now such powerful computer vision models that they reach human performance levels on object categorization. By comparing a DNN tuned to the statistics of real-world visual recognition with temporal (MEG) and spatial (fMRI) visual brain representations, we showed that the DNN captured the stages of human visual processing from early visual areas towards the dorsal and ventral streams. Our results provide an algorithmically informed view of the spatio-temporal dynamics of visual object recognition in the human brain.
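The comparison works because a DNN, like the brain, can be characterized layer by layer through its representational geometry: stimuli are passed through the network and one RDM is computed per layer. The toy network and random weights below are stand-ins for a trained vision model, used only to illustrate the mechanics.

```python
# Sketch of extracting layer-wise representations from a feedforward net
# and building one RDM per layer. The network and "stimuli" are synthetic
# stand-ins for a trained vision DNN and real images.
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_in = 10, 32
stimuli = rng.standard_normal((n_stim, n_in))

# Toy 3-layer network with random weights (a real analysis would use a
# trained model such as an ImageNet CNN)
layer_sizes = [n_in, 64, 64, 16]
weights = [rng.standard_normal((a, b)) / np.sqrt(a)
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

def layer_activations(x):
    """Return the activation pattern at every layer for a stimulus batch."""
    acts = []
    for W in weights:
        x = np.maximum(x @ W, 0.0)   # ReLU
        acts.append(x)
    return acts

def rdm(acts):
    """Pairwise 1 - correlation of activation patterns across stimuli."""
    return 1.0 - np.corrcoef(acts)

rdms = [rdm(a) for a in layer_activations(stimuli)]
print(len(rdms), rdms[0].shape)
```

Each layer's RDM can then be correlated with MEG RDMs over time, or fMRI RDMs over space, to map network stages onto stages of brain processing.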
The MEG Lab will have a strong presence in the Vision Science Society 2017 meeting. Including the MODVIS satellite workshop, we will have a keynote talk, 5 research talks, and 2 posters!