
28th Picture Coding Symposium

Session K2: Keynote Speech 2
Time: 8:45 - 9:30 Thursday, December 9, 2010
Chair: Kiyoharu Aizawa (University of Tokyo, Japan)

K2-1 (Time: 8:45 - 9:30)
Title: (Keynote Speech) Decoding Visual Perception from Human Brain Activity
Author: Yukiyasu Kamitani (ATR Computational Neuroscience Laboratories, Japan)
Abstract: In modern neuroscience, brain activity is regarded as a "code" that carries mental and behavioral contents. Recent advances in human neuroimaging, in particular functional magnetic resonance imaging (fMRI), have revealed brain regions that appear to encode specific behaviors and cognitive functions. Despite the widespread use of human neuroimaging for such "functional brain mapping", its potential to read out, or "decode", mental contents from brain activity has not been fully explored. Mounting evidence from animal neurophysiology has revealed the role of the early visual cortex in representing visual features such as orientation and motion direction, but non-invasive neuroimaging methods have been thought to lack the resolution to probe these putative feature representations in the human brain. In this talk, I present methods for decoding early visual representations from fMRI voxel patterns using machine learning techniques. I first show how early visual features represented in "subvoxel" neural structures can be decoded from ensemble fMRI responses. Decoding of stimulus features is then extended to a method for neural "mind-reading", which predicts a person's subjective state using a decoder trained with unambiguous stimulus presentations. I discuss how a multivoxel pattern can represent more information than the sum of its individual voxels, and how an effective set of voxels for decoding can be selected from all available voxels. Building on these methods, we have recently proposed a modular decoding approach, in which a wide variety of percepts can be predicted by combining the outputs of multiple modular decoders. As an example, I show visual image reconstruction, in which arbitrary visual images are accurately reconstructed by a decoding model trained on fMRI responses to several hundred random images. Finally, I discuss potential applications of neural decoding to brain-based communication.
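
The core idea of multivoxel pattern decoding described in the abstract can be illustrated with a small simulation. The sketch below is not the speaker's actual pipeline: the voxel responses are simulated, the two-class orientation labels, noise levels, and the choice of a linear classifier (scikit-learn's LogisticRegression with cross-validation) are assumptions made only to show how weak, subvoxel-scale biases spread across many voxels can be pooled into a reliable decoder.

# Illustrative sketch only (simulated data, not the speaker's method):
# decoding a two-class visual feature (e.g., grating orientation) from
# fMRI voxel patterns with a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 400          # assumed sizes, for illustration only

# Stimulus label per trial: 0 or 1 (e.g., two grating orientations).
labels = rng.integers(2, size=n_trials)

# Each voxel has a weak, fixed orientation bias ("subvoxel" tuning averaged
# over many neurons); any single voxel is nearly uninformative on its own.
voxel_bias = rng.normal(scale=0.2, size=(2, n_voxels))
responses = voxel_bias[labels] + rng.normal(scale=1.0, size=(n_trials, n_voxels))

# A linear decoder pools the weak biases across the whole voxel ensemble,
# so the multivoxel pattern carries more information than individual voxels.
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, responses, labels, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")

In this toy setting each voxel's bias is far smaller than its noise, so decoding from any single voxel stays near chance, while the cross-validated accuracy of the ensemble decoder is well above chance; this mirrors the abstract's point that the pattern as a whole carries more information than the sum of individual voxels.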