reference: ARIADNE project, David Cox, Harvard, 2016

David Cox is a neuroscientist and computer vision / machine learning researcher at Harvard University. In his own words, he is "interested in understanding how our brains enable us to understand the torrent of information that we receive through our senses." (more at: http://davidcox.me/ and http://www.coxlab.org/)

Cox leads a project called Ariadne that looks for clues in mammalian brains about how to make software smarter (https://www.technologyreview.com/s/602627/searching-rat-brains-for-clues-on-how-to-make-smarter-machines/).

photo from Coxlab link: http://www.coxlab.org/images/gpu_box_banner.jpg

Biologically-Inspired Artificial Vision: the CoxLab team is interested in reverse-engineering biological visual systems. Using the rat brain as a model biological neural network opens a line of research in which studying the brain itself guides the design of AI and machines that understand and learn on their own.


articles: https://scholar.google.com/citations?user=6S-WgLkAAAAJ&hl=en

"This project is not only pushing the boundaries of brain science, it is also pushing the boundaries of what is possible in computer science," said Hanspeter Pfister, a Harvard computer scientist collaborating on the project. "We will reconstruct neural circuits at an unprecedented scale from petabytes of structural and functional data. This requires us to make new advances in data management, high-performance computing, computer vision, and network analysis."

If the work stopped here, its scientific impact would already be enormous — but it doesn’t. Once researchers know how visual cortex neurons are connected to each other in three dimensions, the next task will be to figure out how the brain uses those connections to quickly process information and infer patterns from new stimuli. Today, one of the biggest challenges in computer science is the amount of training data that deep-learning systems require. For example, to learn to recognize a car, a computer system needs to see hundreds of thousands of cars. But humans and other mammals don’t need to see an object thousands of times to recognize it — they only need to see it a few times.
source: http://news.harvard.edu/gazette/story/2016/01/28m-challenge-figure-out-why-brains-are-so-good-at-learning/

Recent progress in tasks such as image recognition and translation sprang from putting more computing power behind a technique known as deep learning, which is loosely inspired by neuroscience. But Cox points out that despite giving us better speech recognition and mastering the game of Go, this software still isn’t very smart.

For example, it’s easy to modify photos so that deep-learning software sees things that aren’t there. Cox showed a photo of MIT Technology Review’s editor in chief subtly altered to appear to image recognition software as an ostrich. (You can try this trick yourself using this online demo from Cox’s lab.)
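The alteration Cox showed is an instance of what the machine-learning literature calls an adversarial example. As a rough illustration of the general idea (not the CoxLab demo itself), the sketch below uses a single fast-gradient-sign step in PyTorch to nudge a photo toward ImageNet class 9 ("ostrich") with a perturbation too small for a person to notice. The input file name is hypothetical, input normalization is omitted for brevity, and one step may or may not actually flip the label.

```python
# Minimal adversarial-example sketch (fast gradient sign method), assuming a
# pretrained ImageNet classifier and a local photo named "photo.jpg".
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1").eval()

# Hypothetical input photo; any RGB image works.
img = Image.open("photo.jpg").convert("RGB")
x = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])(img).unsqueeze(0)
x.requires_grad_(True)

target = torch.tensor([9])                 # ImageNet index 9 = "ostrich"
loss = F.cross_entropy(model(x), target)
loss.backward()

# Step against the gradient of the loss toward the target class,
# keeping the perturbation small enough to be invisible to a human.
eps = 0.02
x_adv = (x - eps * x.grad.sign()).clamp(0, 1)

print("prediction before:", model(x).argmax(1).item())
print("prediction after :", model(x_adv).argmax(1).item())
```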

Cox also highlighted how the software requires thousands of labeled examples to recognize a new type of object. Human children can learn to recognize a new object, such as a new type of tool, with a single example.
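One common way to frame what children do here is "one-shot" recognition: with a good internal representation, a single stored example per category can be enough. The sketch below is a generic illustration of that idea (not anything from the ARIADNE project); it classifies a query image by cosine similarity to one reference embedding per class from a pretrained network, and the images and class names are placeholders.

```python
# One-shot recognition sketch: one stored example per class, matched in the
# feature space of a pretrained network instead of retraining on thousands of labels.
import torch
import torchvision.models as models

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()        # drop the classifier, keep 512-d features
backbone.eval()

@torch.no_grad()
def embed(x):                            # x: (N, 3, 224, 224) image batch
    return torch.nn.functional.normalize(backbone(x), dim=1)

# Hypothetical data: one reference image for each unfamiliar object class ...
support_images = torch.rand(3, 3, 224, 224)    # e.g. three new kinds of tool
support_labels = ["hammer", "wrench", "saw"]
# ... and a query image that was never part of any training set.
query_image = torch.rand(1, 3, 224, 224)

# Classify the query by cosine similarity to each single stored example.
sims = embed(query_image) @ embed(support_images).T
print("predicted:", support_labels[sims.argmax().item()])
```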

Cox said that looking more closely at brains is the best way to address those shortcomings. “We think there’s still something brains can offer beyond this initial loose inspiration,” he said.

The Ariadne project only started in January, but Cox is already challenging rats with video games designed to exercise their visual recognition skills. The researchers use newly developed microscopes to watch the activity of cells in the brain and try to figure out how the neurons interpret the world.

“This is like a wiretap on a huge number of cells in the brain; you’re watching the rat have a thought,” said Cox. “We can ask unprecedented questions about how the brain is doing computation.”

Another strand of the project involves trying to reconstruct the connectivity and structure of neurons in rat brains in 3-D, using stacks of 30-nanometer slices of brain tissue processed with an electron microscope.
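For orientation only, the toy sketch below shows the simplest bookkeeping step behind such a reconstruction: stacking already-aligned 2-D electron-microscope sections into a 3-D array whose z-spacing is the 30-nanometer slice thickness mentioned above. The directory name and in-plane pixel size are hypothetical, and the genuinely hard parts (alignment, segmentation, tracing neurons across sections) are the actual research problem, not shown here.

```python
# Stack aligned EM section images into a 3-D volume (illustrative only).
import glob
import numpy as np
from PIL import Image

SLICE_THICKNESS_NM = 30          # axial step between sections (from the article)
PIXEL_SIZE_NM = 4                # hypothetical in-plane resolution

# Load the section images in order; they are assumed already registered.
paths = sorted(glob.glob("em_sections/slice_*.png"))
volume = np.stack([np.asarray(Image.open(p).convert("L")) for p in paths], axis=0)

print("volume shape (z, y, x):", volume.shape)
print("physical extent (nm):",
      (volume.shape[0] * SLICE_THICKNESS_NM,
       volume.shape[1] * PIXEL_SIZE_NM,
       volume.shape[2] * PIXEL_SIZE_NM))
```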

The 3-D models that emerge are fiendishly complex. Neuroscientists still don’t know what all the different cells do. But Cox says their bewildering intricacy is encouraging, because it suggests brains can still teach us much more about how to build artificial intelligence.

“This is one of those smoking guns—there are way more things going on,” he said. “Our hope is that by understanding this, we can make deep-learning systems that are closer to what the brain does.”
source: https://www.technologyreview.com/s/602627/searching-rat-brains-for-clues-on-how-to-make-smarter-machines/


BUILDING MACHINES THAT UNDERSTAND
A group of neuroscience, computer vision, and machine learning experts from Harvard is building the next generation of AI: https://perceptiveautomata.com/
