“When the systems approach can be connected to the mechanisms approach so that its feedback loop and automata becomes clothed in flesh and blood, we shall see real and exciting progress in understanding behaviour.” ~ (Kenneth Roeder)
### The problem
Behaving organisms are complicated. Even an animal like the placozoan Trichoplax adhaerens is complicated: it is just a two-dimensional mat of cells about a millimetre wide (Smith et al. 2014), and yet it shows quite a breadth of discrete behaviours despite lacking a nervous system. How these behaviours come about is not fully understood, though it has something to do with neuropeptides and each cell’s location within the mat (Varoqueaux et al. 2018). Of course, to coordinate a larger mass of cells into ‘behaviours’, a nervous system is required. A complex behaviour requires many neurons organised into particular layouts. Together with the integrative properties of their constituent neurons, these architectures allow various sensory information streams to be transformed, salient features to be extracted and compared in the face of noise, and motor output to be arranged. In this way behaviour is guided and the nature of incoming sensory information is steered towards some set point (Powers 1989). Trying to understand how nervous systems do this in detail, though, is very difficult.
Indeed, an artificial neural network can be initialised, trained and executed on a dataset in a few lines of code in a high-level programming language. Yet despite the fact that the computer scientist can make, conceptually, any manipulation that experimental neuroscientists make and far more, it can be very difficult for them to investigate which features their artificial neural networks care about, or why the final network works. What matters in practice is a basic understanding of cost functions, network architectures, learning rules and evaluation metrics, even if one only knows which are best from trial and error (Lillicrap and Kording 2019). Investigating individual artificial neurons seems a little mad. It’s maybe mad in neurobiology too. Consider that a cortical cell has ~10^4 connections you could probe (though most of these connections are functionally weak) (Holmgren et al. 2003), and that one neuron can cause a lot of chaos: just a couple of spikes from a single pyramidal neuron in a cortical structure can result in the controlled, sequential activation of dozens of neurons over a period of ~200 ms, spread over ~1 mm^2 of tissue (Hemberger et al. 2019). That is not to mention all the funky things neurons can do as a product of their molecular and cellular biology.
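To make the ‘few lines of code’ point concrete, here is a minimal sketch of my own (assuming scikit-learn and its bundled digits dataset, neither of which is mentioned above): a small network is initialised, trained and evaluated in a handful of lines, yet the incoming weights of any single hidden unit remain just an uninterpreted vector of numbers.

```python
# A minimal sketch: train and evaluate a small artificial neural network.
# scikit-learn and its toy digits dataset are my choice of example here.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                       # 8x8 digit images, flattened
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)                                 # training
print(net.score(X_test, y_test))                          # evaluation metric

# 'Probing' one artificial neuron is much less informative: this is just the
# vector of 64 input weights feeding hidden unit 0.
print(net.coefs_[0][:, 0])
```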
However, it’s also pretty clear that approaches which take specific behaviours into account and try to pin them to the structure of specific neural circuits can help narrow the space of plausible algorithms that the brain uses to generate behaviour.
It is far easier to achieve this in small animals, not least because for creatures the size of C. elegans, connectomics can yield a full inventory of neurons and connections, which helps guide specific experiments and build mechanistic, circuit-level models. The most complex model organism within EM-able range that already has a powerful genetic toolkit available to the interventionist neurobiologist is the adult vinegar fly, D. melanogaster, weighing in at ~150,000 neurons.
In the fly, a single neuron might connect with a hundred or so downstream and upstream partners, of which perhaps only a dozen connections appear to be strong. Here is an example of a neuron for which I have done a full upstream reconstruction of its partners:
How might we try to understand the fly? It has been argued that a student understands a theory if they can use it to produce qualitative solutions to conceptual problems. But depending on what we settle on as an ‘adequate’ solution, we get different resolutions, ‘depths’, of understanding. For me, understanding a neuronal circuit would be the ability to point at any cell type in its wiring diagram and say why it is there, and how it is different from other types in the same diagram. This is not a completely objective process, of course, but it is a somewhat self-calibrating standard of understanding for nervous systems.
To understand the D. melanogaster nervous system, we will need to reduce its dimensionality by distilling it into developmental lineages, cell types and type-to-type connection motifs, which may constitute an ‘intermediate language’ with which to describe how this nervous system is built to generate behaviour. Rowing back to the artificial neural network comparison, this is close to understanding modules of neurons and arranging them into the best architectures for the task at hand. Looking at this assembly and designing experiments to ask what its sub-parts do gets us closer to understanding those bits of it, as well as things that look similar. Those things might be in other organisms, or be part of human-made technologies. We need to be mindful, too, of asking what wetware these neural circuits run on and how they were built, just as the software and design constraints behind your computer decide its performance margins.
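As a toy sketch of what that ‘intermediate language’ might look like in practice (the neurons, type labels and synapse counts below are invented, and numpy is assumed), a neuron-level synapse-count matrix can be collapsed into a type-to-type connectivity matrix:

```python
# Collapse a neuron-to-neuron synapse-count matrix into a cell-type-to-cell-type
# matrix. All data here are made up for illustration.
import numpy as np

neurons = ["a1", "a2", "b1", "b2", "c1"]                  # hypothetical neurons
types   = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "c1": "C"}

# synapses[i, j] = synapse count from neurons[i] onto neurons[j]
synapses = np.array([[0, 0, 12,  9,  0],
                     [0, 0, 10, 11,  1],
                     [0, 0,  0,  0, 25],
                     [0, 0,  0,  0, 30],
                     [2, 3,  0,  0,  0]])

type_names = sorted(set(types.values()))
members = {t: [i for i, n in enumerate(neurons) if types[n] == t] for t in type_names}

# Sum synapses over all members of each pre- and post-synaptic type.
type_matrix = np.array([[synapses[np.ix_(members[pre], members[post])].sum()
                         for post in type_names] for pre in type_names])
print(type_names)
print(type_matrix)      # e.g. A->B total = 42, B->C total = 55, C->A total = 5
```

Summing (rather than averaging) synapse counts per type is just one choice; the point is that the type-level matrix is the smaller object one might hope to reason about.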
This is perhaps part of why the term connectomics has become quite popular of late (data from PubMed):
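For counts like these, here is a rough sketch of how one might pull yearly PubMed hits for ‘connectomics’ using NCBI’s E-utilities esearch endpoint; this is my reconstruction of the sort of query involved, not necessarily how the figure was actually made.

```python
# Query NCBI E-utilities for yearly PubMed hit counts on a search term.
import json
import time
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term, year):
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f"{term} AND {year}[pdat]",   # restrict by publication date
        "rettype": "count",
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

for year in range(2005, 2020):
    print(year, pubmed_count("connectomics", year))
    time.sleep(0.4)   # stay under NCBI's ~3 requests/second courtesy limit
```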
A good understanding of the neuroanatomy of an animal needs to be complemented, though, by a good understanding of that animal’s behavioural repertoire before we can really get to work on cracking the whole thing.
Let us consider again the nerveless T. adhaerens. It clearly computes, but lacks any neuronal circuits. To understand it, we could video-record individuals behaving in response to stimuli that an experimenter controls. By using a toolkit that localises genetic engineering, physical ablation, and chemical and/or electrical stimulation to individual cells or cell types, cells can be removed or activated and a mechanistic understanding of their role in the behaviour obtained (figure A).
Alternatively, we could work in reverse (figure B). One could localise gene expression using correlative light and electron microscopy and so bridge single-cell transcriptome atlases (Sebé-Pedrós et al. 2018) to physical locations in a T. adhaerens. Knowing now the complete cellular organisation of the animal, and being able to make a good guess at the action of any cell upon any other based on their gene expression, we might hypothesise the sort of behaviour different sets - ‘networks’ - of cells could be involved in, and use the same toolkit to see whether or not we were correct.
Which approach is ‘better’? The answer is not obvious. This is partly because, in the case of T. adhaerens, the behavioural repertoire and cell diversity are both small, even if the algorithm that connects them is still not so simple. Both routes are therefore equally plausible with modern investigative means. However, for nervous systems it is generally thought to be more difficult to infer, from the nervous system’s wetware, the algorithm employed to achieve some specific behaviour (figure B) than the reverse (figure A) (Krakauer et al. 2017).
The mouse Mus musculus brain has about 75 million neurons and is far more behaviourally complex and cognitively able than T. adhaerens. An investigator’s initial intervention may connect, say, a specific neuron’s activity to a well-defined behaviour or percept, in the sense that the two correlate. It is easier to use the modern neuroanatomical toolbox to find some information about that neuron class’s wider circuit (figure A) than it is to start with an interesting-looking circuit assembly and assay a provocative neuron class’s action in behaviour (figure B). This is partly because investigating neuroanatomy draws on a well-established set of tools (immunohistochemistry, dye-filling, transgene expression from genetic driver lines, biological trans-synaptic tracing systems, electron microscopy) which reliably provide different resolutions of the answer at a correlated expenditure of resources, whereas assaying behavioural space is much harder without strong leads. We do not know the full, fine-grained behavioural repertoire of most model organisms, and building a battery of behavioural assays in the laboratory to explore this space for any given neuron is laborious.
The systems neuroscience view has held that cognitive phenomena can be understood by building models that transform receptive fields, estimated from single-neuron recordings at different levels of a sensory system, until empirically measured higher-level representations can be successfully simulated (Yuste 2015). A more intermediate, but still mechanistic, level of understanding can be gained from establishing causal links between bits of the nervous system (Olsen and Wilson 2008; Yuste 2015). The fewer the behavioural states in which a neuron class is active, and the stronger the correlation with those states, the easier it can be to implicate a neuron type in some behavioural computation. One simple example is the search for ‘command’ neurons that gate a behaviour. However, command neurons thought to herald particular motor programmes can actually start several distinct programmes and be involved in others, depending on the activity context of the rest of the nervous system (Esch and Kristan 2002). In some cases, the relative strength of ‘command neuron’ (Kupfermann and Weiss 1978) activation can result in very different behavioural outcomes (Ding et al. 2019). More generally, even in numerically simple nervous sub-systems, the same behaviour can be produced by a large number of plausible neurobiological circuits (Marder and Goaillard 2006), and the same neurobiological circuit can produce multiple different behaviours depending on its state, past history and present modulation (Katz 2016).
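As a caricature of that receptive-field-stacking view (entirely my own toy construction, with random filters standing in for fitted ones), each stage below is a linear filter followed by a rectifying nonlinearity, and the question in practice would be whether the top stage reproduces empirically measured higher-level responses:

```python
# Toy hierarchy of linear-nonlinear stages, standing in for successive levels
# of a sensory system. Filters are random here; in real work they would be
# fitted to recordings at each level.
import numpy as np

rng = np.random.default_rng(0)
stimulus = rng.normal(size=(200, 16))       # 200 frames of a 16-'pixel' stimulus

def ln_stage(x, weights):
    """Linear filter followed by rectification (a minimal receptive-field model)."""
    return np.maximum(x @ weights, 0.0)

w1 = rng.normal(size=(16, 8))               # low-level filters
w2 = rng.normal(size=(8, 4))                # higher-level filters

level1 = ln_stage(stimulus, w1)             # simulated low-level responses
level2 = ln_stage(level1, w2)               # simulated higher-level responses
print(level2.shape)                         # (200, 4): one response per frame per unit
```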
The current emphasis on neuronal circuits and their manipulation might be best served by bookending our understanding with better neuroanatomy to one side, and a better understanding of naturalistic behaviours on the other. The path from one end, then, is to describe neurons, then cell types, then networks and then entire nervous systems, and extract circuits from these data. From the other, it is to describe a large repertoire of behaviours and identify their neuronal substrates. By combining the two workflows in a ‘pincer’ move (figure C), we can add the ‘function’ of modules or single components, in the behavioural sense, to a wiring diagram that has been the product of rigorous neuroanatomy. What we need in order to make this work are different ways to be reductionist. One way is to take a completionist attitude to a sub-system in a small nervous system (figure D).