What follows are some questions PLOS CB asked us to answer for a more general audience.
There is tremendous excitement in neuroscience about new experimental techniques and analytic methods under development. Scientists are recording from more neurons across more organisms than we would have dreamed possible a decade ago, and the Obama Administration’s BRAIN Initiative and projects at DARPA are trying to push that number to a million simultaneous neurons.
Making sense of such data is difficult, since we still don’t really understand how even (comparatively) simple neural systems like the brain of a fruit fly work. Scientists currently lack a comprehensive understanding of how visual input presented to a fly is processed by the fly’s brain to give rise to behavior. This makes it incredibly difficult to test our analysis algorithms -- we don’t know if the algorithms will be useful, because we don’t have the data, and we don’t know if the data will be useful, because we can’t test the algorithms!
Our study attempts to sidestep this issue by applying a large number of classical analysis techniques to a computing system that we do understand: a microprocessor from a classic video game system, the Atari 2600. Since humans designed this processor from the transistor all the way up to the software, we know how it works at every level, and we have an intuition for what it means to “understand” the system. Our goal was to highlight some of the deficiencies in “understanding” that arise when applying contemporary analytic techniques to large datasets recorded from computing systems.
We are inspired by the work of Yuri Lazebnik, who in 2003 asked, “Can a biologist fix a radio?” He was critical of the aggressively reductionistic approach he saw in cancer biology at the time.
Today’s experimental paradigms, when paired with today’s data analysis algorithms, fall short of a meaningful description of how the microprocessor works.
Without careful thought, current big-data approaches to neuroscience may not live up to their promise or succeed in advancing the field. In particular, we still have a long way to go toward analysis methods that yield genuine understanding.
It’s obvious that we need new, better, more scalable data analysis algorithms with interpretable results to accelerate the pace of neuroscience. Our view is that this is not controversial -- many scientists in the field are working on it. But we argue there are two additional directions to pursue:
By checking how well experimental strategies and analysis algorithms from neuroscience work at understanding a microprocessor, we can identify potential hurdles for neuroscience. The methods we use to understand these systems can perhaps guide our approaches when trying to understand a brain.
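As a toy illustration of this approach (our own hypothetical sketch, not the simulator or circuit from the paper), consider running a classic "lesion study" on a circuit we fully understand: a one-bit full adder built from logic gates. We knock out each gate in turn and test whether the circuit's behavior survives, much as one might knock out individual transistors and check whether a game still boots. All names and the circuit itself are invented for illustration.

```python
# Toy "lesion study" on a known circuit: a one-bit full adder.
# Knocking out a gate forces its output to 0, standing in for
# removing a transistor from the processor.

def full_adder(a, b, cin, lesion=None):
    """Compute (sum, carry) from named gates; `lesion` names a gate to knock out."""
    def gate(name, value):
        return 0 if name == lesion else value
    x1 = gate("xor1", a ^ b)
    s  = gate("xor2", x1 ^ cin)   # sum bit
    a1 = gate("and1", a & b)
    a2 = gate("and2", x1 & cin)
    c  = gate("or1",  a1 | a2)    # carry bit
    return s, c

def behavior_intact(lesion=None):
    """The 'behavior' we probe: does the adder give correct sums for all inputs?"""
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, c = full_adder(a, b, cin, lesion)
                if (c << 1) | s != a + b + cin:
                    return False
    return True

gates = ["xor1", "xor2", "and1", "and2", "or1"]
critical = [g for g in gates if not behavior_intact(lesion=g)]
print(critical)  # every gate in this minimal circuit is critical
```

Note what the experiment does and does not tell us: here every lesion breaks the behavior, so each gate is flagged as "necessary for addition" -- yet that list alone explains nothing about how the adder actually computes. This mirrors the worry that lesion-style analyses can identify components as important without yielding understanding.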
Our paper generated a lot of attention when it was first circulated among scientists, and many raised good criticisms. By releasing a preprint version of the paper, we were able to solicit feedback from the entire community, and ultimately address many of these criticisms, which include:
Processors are not brains. Microprocessors and biological systems are incredibly different. Individual cells in the brain operate literally a thousand times more slowly than the transistors in the microprocessor we used. Our processor had only 3000 transistors, whereas a mouse brain has 100 million neurons, and each neuron likely does the work of thousands of transistors. Brains tend to be much more robust to damage than processors.
In the paper we argue that these criticisms are all valid, and are important in considering the impact of the work. But we argue that many of these features, including a vastly smaller number of simpler units, should make processors much easier to reverse-engineer than biological systems.
Differing degrees of functional localization. This also relates to an important difference between brains and processors. Many neural systems have a great deal of “functional localization,” where we know that specific areas are responsible for specific tasks or behaviors. In mammalian brains, for example, we know various parts of visual cortex are responsible for processing visual input. While processors like the one we examined also exhibit some functional localization (one part of the chip adds numbers, another part keeps track of inputs and outputs), it seems less advanced or intricate than what we see in biological systems. We completely agree that there are vastly differing levels of functional localization in these systems, and indeed systems which exhibit greater functional localization may be easier to reverse-engineer. But while a great amount is known about functional localization in neural systems, many of the most interesting parts of the brain -- such as prefrontal cortex, where a lot of higher-order cognitive processes are speculated to occur -- are largely undifferentiated collections of cells.
There are many brand-new algorithms for understanding data that the authors did not test. Neural data analysis is a rapidly advancing field, with new techniques being published all the time. Indeed, we develop some of these techniques in our own research. Many of the techniques we applied to the microprocessor are well established, in some cases decades old, but they are still widely used by neuroscientists in the field to analyze new data. The other problem we wished to highlight is that many new data analysis methods aren’t scalable -- they don’t run fast enough on the large datasets now being generated by practitioners. While our processor is “small” compared to brains, it is still too large and complex for many recently developed analysis techniques to handle. We believe that the community will need to address these questions of scale before meaningful comparison with these new methods can occur.