Alternative methods

Like most scientists, we embrace alternative methods and employ them in our research whenever possible. Currently available non-invasive methods, however, have their limitations and cannot fully replace animal research.

The methods most frequently promoted by opponents of animal research include in vitro studies, microdosing, computer simulations and functional MRI. In the following, we take a closer look at each of these methods and its role in cognitive brain research.

In vitro experiments
Certain experiments can be carried out on tissue samples in a test tube (Latin in vitro, literally “in glass”) rather than in a living animal. However, these preparations cannot regenerate, so they must be obtained by killing animals. Legally, killing an animal to obtain tissue samples is not considered an animal experiment, even if the animal in question is a vertebrate. In contrast, it is regarded as an animal experiment to anesthetize a vertebrate, make observations while the animal is under anesthesia and then kill it by increasing the dose of anesthetic. If good anesthesia practices are employed, the animal will not suffer in either case.

Replacing in vivo experiments (i.e. those in living animals) with in vitro methods does not reduce the number of research animals that are killed. On the contrary: the limited survival time of brain slices, for example, restricts the amount of data that can be obtained from a single experiment, so answering the same question may require more animals, not fewer.


Microdosing
A ‘microdose’ is defined as less than one hundredth of the proposed pharmacological dose, up to a maximum of 100 µg. Microdoses of a drug can be measured in any biological sample, such as plasma or urine, to determine how the compound is absorbed, distributed, metabolized and excreted (ADME). The analysis is carried out using an accelerator mass spectrometer (AMS), one of the most sensitive analytical instruments available for studying samples from humans; it allows early metabolism data to be obtained before a compound enters human phase 1 trials. By conducting human phase 0 microdosing trials, drug candidates can thus be tested directly in the relevant species.
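
As a purely illustrative sketch of the definition above (the function name and the example dose are our own, not taken from any regulatory text), the ceiling on a microdose can be computed as follows:

```python
def microdose_cap_ug(proposed_pharmacological_dose_ug: float) -> float:
    """Illustrative calculation of the microdose ceiling: less than 1/100th of
    the proposed pharmacological dose, and never more than 100 µg in total."""
    return min(proposed_pharmacological_dose_ug / 100.0, 100.0)

# Example: a compound with a proposed pharmacological dose of 50 mg (50,000 µg).
# 50,000 / 100 = 500 µg, which exceeds the absolute limit, so the cap is 100 µg.
print(microdose_cap_ug(50_000))  # -> 100.0
```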

Animal research opponents claim that this ultrasensitive analytical technique allows greater predictability than animal studies and reduces the preclinical testing time from 18 months to 6 months. Unfortunately, this particular method is simply not applicable to the neuroanatomical and physiological studies carried out at our institute.


Computer simulations
Animal research opponents also claim that computer simulations can replace animal experiments. The assumption is that properties of real brains can be inferred from the analysis of artificial neural networks. Unfortunately, this is completely unrealistic, first of all because of the problem of instantiation: computer simulations themselves teach us that similar functions can be realized by quite different hardware implementations and processing algorithms, so matching a network’s behavior tells us little about how a real brain produces that behavior.
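
The instantiation problem can be illustrated with a deliberately trivial sketch (the example is our own): two implementations with identical input–output behavior but entirely different internal mechanisms. Observing the behavior alone cannot tell us which mechanism is at work.

```python
# Two very different "implementations" of the same input-output function (XOR):
# a lookup table and a tiny hand-wired threshold network. Both behave identically,
# which is exactly why behavior alone does not reveal the underlying mechanism.

def xor_lookup(x1: int, x2: int) -> int:
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(x1, x2)]

def xor_network(x1: int, x2: int) -> int:
    step = lambda v: 1 if v > 0 else 0
    h_or = step(x1 + x2 - 0.5)       # fires if at least one input is active
    h_and = step(x1 + x2 - 1.5)      # fires only if both inputs are active
    return step(h_or - h_and - 0.5)  # "OR but not AND" equals XOR

for a in (0, 1):
    for b in (0, 1):
        assert xor_lookup(a, b) == xor_network(a, b)
print("identical behavior, entirely different mechanisms")
```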

Moreover, it is unclear what kind of “replacement” computers are supposed to be, and indeed whether animal experiment opponents really understand computers and computer simulations of neural networks. How can a computer possibly replace recordings from a brain site, for instance? Today’s computers, with their existing hardware and operating principles and their hopelessly primitive algorithms and simulations, cannot come close to simulating even the most basic sensory pathway in a very simple system. They certainly cannot be a substitute for even a small neural population in, say, cortex.
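
To give a sense of the level of abstraction at which today’s simulations operate, here is a minimal sketch of a leaky integrate-and-fire unit, a standard building block of such models; it collapses an entire neuron into a single voltage variable and a fixed threshold (the parameter values are generic textbook choices, not measurements from this text):

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron: leaky voltage integration plus a
    threshold-and-reset rule. Everything else about a real neuron (dendrites,
    thousands of synapses, neuromodulation) is abstracted away."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau  # leaky integration
        if v >= v_thresh:                             # threshold crossing
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

current = np.full(10_000, 2e-9)   # 1 s of constant 2 nA input at 0.1 ms steps
print(f"{len(simulate_lif(current))} spikes in 1 s of simulated time")
```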

Computer simulations do actually yield acceptable models in research on diabetes, asthma, and drug absorption, although potential new medicines identified using these techniques must still be verified in animal and human tests before licensing.  

Other non-animal simulators have been developed for military use to mimic battlefield-induced trauma or to simulate hemorrhaging, fractures, amputations and burns. But all these processes are many orders of magnitude simpler than even the most simplified version of a nucleus in the brain. If we were able to construct artificial models sufficiently similar to their biological counterparts to really serve as a substitute for analysis, we would already have to know so many details about the natural system that the heuristic value of such models for basic research would be very limited.

When mathematical theories, simulations and, most importantly, careful (and mathematically sophisticated) analysis of data are used to support neurobiological research, they can indeed contribute to a certain reduction of animal experimentation. In the same fashion, being able to localize activations and understand the extent of the networks involved in a behavioral task with fMRI is immensely valuable and saves both animals and time. Modeling can also contribute to the refinement of working hypotheses and can provide plausibility controls for the interpretation of experimental data. This in turn can serve to optimize experimental protocols and thus to reduce the number of experiments required for the solution of a particular problem. Nonetheless, all of the above are complements to animal experimentation, not substitutes.


Functional MRI in humans  
The main advantages of fMRI lie in its non-invasive nature, ever-increasing availability, relatively high spatiotemporal resolution, and its capacity to demonstrate the entire network of brain areas engaged when subjects perform particular tasks.  

One disadvantage is that, like all modalities based on hemodynamics, it measures a surrogate signal whose spatial specificity and temporal response are subject to both physical and biological constraints. A more important shortcoming is that this surrogate signal reflects neuronal mass activity, a fact that even a great many specialists in cognitive psychology ignore or are unaware of. A layperson will have even greater difficulties in grasping the subtle limitations of this method, but we will attempt to explain the origins of its weaknesses.

An examination of human fMRI studies shows that the commonly used spatial resolution is at best 3 × 3 × 5 millimeters. Keep in mind that less than 3% of this volume is occupied by the blood vessels on which neuroimaging is based, and then take a look at how much is going on in the other 97%. A typical unfiltered fMRI voxel of 55 µl contains 5.5 million neurons, 2.2–5.5 × 10^10 synapses, 22 km of dendrites and 220 km of axons. Understanding the neuronal mechanisms underlying the function or dysfunction of a particular brain site by looking at such an enormous population of neural elements is practically impossible.
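
A back-of-the-envelope calculation makes the scale of the problem concrete. The sketch below simply converts the figures quoted above into densities and rescales them to a 3 × 3 × 5 mm voxel (the code and its variable names are our own):

```python
# Rescale the per-voxel figures quoted for a 55 µl voxel to a 3 x 3 x 5 mm voxel.
REF_VOXEL_UL = 55.0                  # reference voxel volume in microlitres
REF_COUNTS = {                       # figures quoted in the text for 55 µl
    "neurons": 5.5e6,
    "synapses (low)": 2.2e10,
    "synapses (high)": 5.5e10,
    "dendrites (km)": 22.0,
    "axons (km)": 220.0,
}

def voxel_volume_ul(x_mm: float, y_mm: float, z_mm: float) -> float:
    return x_mm * y_mm * z_mm        # 1 mm^3 = 1 µl

vol = voxel_volume_ul(3, 3, 5)       # 45 µl
print(f"3 x 3 x 5 mm voxel = {vol:.0f} µl")
for name, count in REF_COUNTS.items():
    print(f"  {name}: ~{count / REF_VOXEL_UL * vol:.3g}")
```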

The limitations of fMRI are not related to physics or poor engineering, and they are unlikely to be resolved by increasing the sophistication and power of our scanners; instead, they are due to the circuitry and functional organization of the brain itself, as well as to inappropriate experimental protocols that ignore this organization. The fMRI signal cannot easily differentiate between function-specific processing and neuromodulation or between bottom-up and top-down signals, and it may even confuse excitation and inhibition.  

The magnitude of the fMRI signal cannot be quantified to accurately reflect differences between brain regions, or between tasks within the same region. The latter problem is due not to our inability to accurately estimate the cerebral metabolic rate of oxygen (CMRO2) from the BOLD signal, but to the fact that hemodynamic responses are sensitive to the size of the activated population, which is itself subject to change as the density of neural representations varies spatially and temporally.
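
For readers unfamiliar with how CMRO2 is estimated from BOLD at all, one widely used formulation is the calibrated-BOLD model of Davis and colleagues; the sketch below uses typical literature parameter values, which are our assumptions rather than figures from this text.

```python
def davis_bold_change(cbf_ratio, cmro2_ratio, M=0.08, alpha=0.38, beta=1.5):
    """Calibrated-BOLD (Davis) model: fractional BOLD signal change as a
    function of the relative changes in cerebral blood flow (CBF) and in the
    cerebral metabolic rate of oxygen (CMRO2). M, alpha and beta are typical
    literature values, used here only for illustration."""
    return M * (1.0 - cbf_ratio ** (alpha - beta) * cmro2_ratio ** beta)

# Example: a 50% increase in CBF accompanied by a 20% increase in CMRO2
print(f"BOLD change: {davis_bold_change(1.5, 1.2) * 100:.2f}%")
```

Even with such a model in hand, the point above stands: the hemodynamic response also depends on the size and composition of the activated population, which the model does not capture.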

In cortical regions in which stimulus- or task-related perceptual or cognitive capacities are sparsely represented (for example, instantiated in the activity of a very small number of neurons), volume transmission, which probably underlies the altered states of motivation, attention, learning and memory, may dominate hemodynamic responses and make it impossible to deduce the exact role of the area in the task at hand. Neuromodulation is also likely to affect the ultimate spatiotemporal resolution of the signal.

Despite its shortcomings, however, fMRI is currently the best tool we have for gaining insights into brain function and formulating interesting and testable hypotheses, even though the plausibility of these hypotheses critically depends on the magnetic resonance technology being used, the experimental protocol, statistical analysis and insightful modeling. Theories on the brain’s functional organization (not just modeling of data) will probably be the best strategy for optimizing all of the above. But hypotheses formulated on the basis of fMRI experiments cannot really be analytically tested with fMRI itself in terms of neural mechanisms, and this is unlikely to change any time in the near future.  

Of course, fMRI is not the only methodology that has clear and serious limitations. Electrical measurements of brain activity, including invasive techniques with single or multiple electrodes, also fall short of affording real answers about network activity. Single-unit recordings and firing rates are better suited to the study of cellular properties than of neuronal assemblies, and field potentials share much of the ambiguity discussed in the context of the fMRI signal. None of the above techniques can be a substitute for the others.  

Today, a multimodal approach is more necessary than ever for the study of the brain’s function and dysfunction. Such an approach will require further improvements to MRI technology and its combination with other non-invasive techniques that directly assess the brain’s electrical activity, but it will also require a profound understanding of the neural basis of hemodynamic responses and a tight coupling of human and animal experimentation that will allow us to fathom the homologies between humans and other primates.  

Methods excluding animal experimentation are inadequate for understanding the brain’s function and disorders. If we really wish to understand how our brain works, we cannot afford to discard any relevant methodology, much less one providing direct information from the actual neural elements that underlie all our cognitive capacities.