Prof. Jack Gallant is the Chancellor's Professor of Psychology at the University of California at Berkeley, USA. His research programme focuses on computational modelling of the human brain. These models accurately describe how the brain encodes information during complex, naturalistic tasks, and they show how information about the external and internal world is mapped systematically across the surface of the cerebral cortex. These models can also be used to decode information in the brain in order to reconstruct mental experiences. (See also: http://gallantlab.org/)
The human brain is the most sophisticated image processing system known, capable of impressive feats of recognition and discrimination under challenging natural conditions. Reverse-engineering the brain might enable us to design artificial systems with the same capabilities. The Gallant laboratory uses a data-driven system identification approach to tackle this reverse-engineering problem. Our approach consists of four broad stages. First, we use functional MRI to measure brain activity while people watch movies. We divide these data into two parts: one used to fit models, and one used to test model predictions. Second, we use a system identification framework based on multiple linearizing feature spaces to model the activity measured at each point in the brain. Third, we inspect the most accurate models to understand how the brain represents structural and semantic information in the movies. Finally, we use the estimated models to decode brain activity, reconstructing the structural and semantic content of the movies. This framework could form the basis of practical new brain-reading technologies, and it can inform the development of biologically inspired computer vision systems.
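To make the first two stages concrete, here is a minimal sketch of an encoding-model fit of the kind described above: a regularized linear regression from stimulus features to each voxel's response, trained on one part of the data and scored on the held-out part. The feature matrices below are random stand-ins (in the actual work, features come from linearizing transforms of the movies, such as motion-energy filters); all sizes and names are illustrative assumptions, not the study's.

```python
# Sketch of the encoding-model stage: per-voxel ridge regression from a
# linearizing feature space to measured responses, evaluated on held-out data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_train, n_test = 600, 200   # time points (fMRI volumes); illustrative sizes
n_features = 100             # dimensionality of the linearizing feature space
n_voxels = 50                # voxels, each modeled independently

# Hypothetical stimulus features and simulated voxel responses.
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_weights + rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ true_weights + rng.standard_normal((n_test, n_voxels))

# One ridge regression per voxel (sklearn fits all columns of Y at once).
model = Ridge(alpha=10.0)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Model accuracy per voxel: correlation between predicted and measured
# responses on the held-out test data.
r = np.array([np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1]
              for v in range(n_voxels)])
print(f"median prediction correlation across voxels: {np.median(r):.2f}")
```

Inspecting which features carry the most weight in the most accurate voxel models corresponds to the third stage; the fitted models are then reused for decoding, as sketched further below.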
(For related press coverage, see: http://gallantlab.org/index.php/press/)
Using Hollywood movie trailers, UC Berkeley researchers have succeeded in decoding and reconstructing people's dynamic visual experiences.
The brain activity recorded while subjects viewed a first set of film clips was used to create a computer program that learned to associate the visual patterns in the movies with the corresponding brain activity. The brain activity evoked by a second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each video would most likely evoke in each subject. Using this model, the researchers were able to decode the brain signals generated by the test films and reconstruct those moving images.
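The paragraph above describes a decoding-by-identification scheme: the fitted encoding model predicts the activity each candidate video would evoke, and candidates are ranked by how well those predictions match the measured activity. Below is a hedged sketch of that step, under the same caveats as before; the arrays, sizes, and the simple correlation score are illustrative placeholders, not the study's exact procedure.

```python
# Sketch of the decoding step: rank candidate clips by the match between
# their model-predicted brain activity and the measured brain activity.
import numpy as np

rng = np.random.default_rng(1)

n_voxels = 50
n_candidates = 10_000        # stand-in for the large YouTube video prior

# Predicted activity pattern for each candidate clip (from the encoding
# model) and one measured pattern for the clip the subject actually saw.
predicted = rng.standard_normal((n_candidates, n_voxels))
true_index = 1234
measured = predicted[true_index] + 0.5 * rng.standard_normal(n_voxels)

def corr(a, b):
    """Correlation of each row of a with the vector b."""
    a = a - a.mean(axis=-1, keepdims=True)
    b = b - b.mean(axis=-1, keepdims=True)
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b))

scores = corr(predicted, measured)
top = np.argsort(scores)[::-1][:100]   # keep the best-matching candidates
print("true clip ranked in top 100:", true_index in top)
# A reconstruction would then combine the frames of the top candidates,
# yielding a blurry but continuous approximation of the viewed movie.
```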
Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases. It may also lay the groundwork for brain-machine devices that would allow people with cerebral palsy or paralysis, for example, to guide computers with their minds.
The lead author of the study, published in Current Biology on September 22, 2011, is Shinji Nishimoto, a post-doctoral researcher in the laboratory of Professor Jack Gallant, neuroscientist and coauthor of the study. Other coauthors include Thomas Naselaris of UC Berkeley's Helen Wills Neuroscience Institute, An T. Vu of UC Berkeley's Joint Graduate Group in Bioengineering, and Yuval Benjamini and Professor Bin Yu of the UC Berkeley Department of Statistics. (See also: http://news.berkeley.edu/2011/09/22/brain-movies/)