
Mean-field models of neuronal populations

Alain Destexhe, CNRS, March 2021.

Introduction

The mean-field technique is well known in statistical physics, where it was widely used to derive models of the macroscopic states of matter (such as solid, liquid, gas) from the microscopic properties of atoms or molecules. Transposed to neurons, the mean-field approach consists of deriving population-level models based on the properties of single neurons and their interactions. As we will see below, this approach is not only important for linking scales, but it also enables us to design large-scale models of neural tissue, from small brain regions up to the whole brain.

Another motivation for the design of mean-field models is that many brain signals measure the activity of neurons at larger scales than single neurons. This is the case for “mesoscopic signals” (hundreds of microns to millimeters), such as voltage-sensitive dye (VSD) signals, local field potential (LFP) signals or calcium imaging signals. In such imaging measurements, the smallest visible “unit” of the system is typically the pixel of the camera, which represents the averaged activity of a population of neurons. There is thus no point in modeling such signals at the scale of single neurons, because they are not visible at this scale; it is much more appropriate to use populations of neurons as the unit of such models.

Note that there is also a motivation related to the computational difficulty of simulating large scales. In VSD or wide-field calcium imaging, one measures an entire brain area and sometimes the whole hemisphere. Modeling this at the cellular level would require simulating networks of hundreds of millions (if not billions) of neurons, which is only possible using large high-performance computing resources. Modeling at the level of pixels or populations, however, requires only on the order of tens to hundreds of thousands of variables, which is usually possible on a single desktop workstation.
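To make this scaling argument concrete, here is a back-of-envelope comparison; all numbers are rough, illustrative assumptions, not measurements.

```python
# Back-of-envelope comparison of model sizes (illustrative numbers only)

neurons_per_mm2 = 100_000      # assumed neuron count under 1 mm^2 of cortical surface
area_mm2 = 1_000               # assumed imaged area (~10 cm^2 of cortex)
vars_per_neuron = 10           # assumed state variables per model neuron

spiking_vars = neurons_per_mm2 * area_mm2 * vars_per_neuron   # cellular-level model

pixels = 100 * 100             # assumed camera resolution covering the same area
vars_per_unit = 10             # assumed variables per mean-field unit

meanfield_vars = pixels * vars_per_unit                       # population-level model

print(f"spiking model:    {spiking_vars:.0e} variables")
print(f"mean-field model: {meanfield_vars:.0e} variables")
```

Under these assumptions, the cellular-level description needs four orders of magnitude more state variables than the population-level one.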
The Master Equation approach

Our first approach to mean-field models, in collaboration with Sami El Boustani (PhD student in my laboratory), was to design a mean-field model applicable to conductance-based spiking networks [1] (while most mean-field models are derived for current-based interactions). Our study had two particularities: (1) we considered self-sustained irregular activity states (asynchronous irregular or AI states), where the activity of neurons is highly stochastic; (2) we considered a second-order approach, where not only the mean activity but also its variance (and more generally the covariance matrix of the system) are described. This was realized by deriving a Master Equation for the activity of the network. This approach successfully reproduced the complex state diagrams calculated numerically in networks of excitatory and inhibitory neurons (Fig. 1; see details in [1]).

Figure 1: Mean-field model of activated states in networks of neurons. A. Networks of randomly-connected excitatory and inhibitory IF neurons with conductance-based synaptic interactions display asynchronous irregular (AI) states. The raster (red = excitatory cells, blue = inhibitory) shows that spike discharges are irregular, and so is the instantaneous activity (firing rate, bottom). B. Decay of the autocorrelation function (dashed line = exponential fit) and activity distribution (dashed line = Gaussian fit) during AI states. C. Results of a Master Equation model, which can be used to predict the state diagrams of such networks. The colorized region corresponds to AI states. The firing rate and its standard deviation (as well as cross-correlations) are well predicted by the formalism. Similar results have been obtained for locally-connected networks. Modified from El Boustani and Destexhe, Neural Computation, 2009.

The Master Equation approach [1], although successful, suffered from one major drawback.
The mean-field model relies on knowing the transfer function of neurons, which maps the mean rates of excitatory and inhibitory inputs to the output firing rate of the neuron. Unfortunately, this function is only known analytically for very simple systems (the leaky integrate-and-fire neuron with current-based synapses). The approach in [1] was obtained through a heuristic modification of the transfer function to match conductance-based inputs. Extending this approach to more realistic neurons therefore seemed compromised.

Semi-analytic mean-field models

However, a recent extension of the approach could be made in collaboration with Yann Zerlaut (PhD student in my laboratory). It was discovered that the mathematical form of the transfer function known for simple systems can also capture the transfer function of more complex neuron models, and even of real neurons [2]. This important advance allowed us to extend the Master Equation approach to derive mean-field models of more complex neural models, as we will detail below. This new extension, which we called “semi-analytic”, consists of calculating the transfer function numerically for model neurons, and fitting the mathematical template of the transfer function to these numerical data. The resulting mean-field model remains analytical, in the sense that the transfer functions are still expressed mathematically, but the parameters of the transfer function are obtained numerically and are specific to each particular neuronal model.

The first application of the semi-analytic mean-field approach was to derive a mean-field model of networks of Adaptive Exponential (AdEx) neurons [3]. The AdEx model is more realistic than the leaky integrate-and-fire model, because it has an exponential approach to threshold as well as an adaptation variable. The AdEx model can capture a wide variety of intrinsic neuronal properties, such as adapting neurons (also called “regular spiking”), bursting neurons, delayed firing, intermittent firing, etc.
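As an illustration of the single-neuron model involved, here is a minimal Euler-integration sketch of an AdEx neuron. The parameter values are generic, textbook-style choices for a regular-spiking cell (not the parameters fitted in [3]), and the spike cutoff is a numerical convenience.

```python
import numpy as np

# AdEx parameters (generic, illustrative values for a regular-spiking cell)
C, gL, EL = 200.0, 10.0, -65.0      # capacitance (pF), leak (nS), rest (mV)
VT, DT = -50.0, 2.0                 # threshold (mV) and slope factor (mV)
Vr, Vcut = -65.0, -30.0             # reset and numerical spike cutoff (mV)
a, b_w, tau_w = 2.0, 30.0, 500.0    # adaptation: conductance (nS), jump (pA), time constant (ms)

dt, T, I = 0.1, 500.0, 500.0        # time step (ms), duration (ms), input current (pA)

V, w = EL, 0.0
spikes = []
for i in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    V += dt * dV
    w += dt * (a * (V - EL) - w) / tau_w
    if V >= Vcut:                    # spike: reset voltage, increment adaptation
        V = Vr
        w += b_w
        spikes.append(i * dt)

isis = np.diff(spikes)               # inter-spike intervals (ms)
print(len(spikes), isis[0] < isis[-1])   # adaptation lengthens successive intervals
```

The growth of the adaptation variable w after each spike is what produces the spike-frequency adaptation characteristic of RS cells.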
In the case of cerebral cortex, it allows one to design networks with two cell types: the “regular spiking” (RS) neurons, displaying spike-frequency adaptation, as typically seen in pyramidal neurons, and the “fast spiking” (FS) neurons, with little or no adaptation, as typically seen in inhibitory interneurons. It is important to realize that networks of RS-FS neurons constitute the simplest spiking model that accounts for the fact that inhibitory neurons are more excitable than excitatory neurons, which has potentially important consequences at the large-scale level, as we will see below.

Thus, it was shown that the Master Equation mean-field approach could model networks of AdEx neurons very well [3]. The mean-field model accounted for important features, such as the fact that AdEx networks can display AI states of activity, with low firing rates for RS cells and higher rates for inhibitory FS cells, exactly as found experimentally. The mean-field also captures the time course of the response of the network to external inputs, except for the “tail” of the response, which depends on adaptation. We will see below that including adaptation allows the model to fully capture this time course.

Mean-field models of mesoscopic-scale phenomena

The AdEx mean-field model was then tested for its ability to model large-scale phenomena. We used measurements of propagating waves in awake monkey visual cortex [4] as a template. These measurements showed that visual inputs can trigger a traveling wave in V1 that propagates through millimeters of tissue, which corresponds to the mesoscopic scale. We constructed a large-scale network of mean-field models and could successfully model the occurrence of propagating waves following visual input [3]. But not only could the model account for mesoscopic-level phenomena such as propagating waves; we could also use these waves to constrain the model.
Because propagating waves are phenomena that can be measured objectively (their extent, speed, etc.), they were very useful to constrain the connectivity between the mean-field units in the large-scale model. We found that the optimal connectivity reproducing the wave properties combines long-range excitatory connectivity with inhibitory connectivity that is strong and more local [3] (Fig. 2).

Figure 2: Propagating waves in a large-scale network of AdEx mean-field units. Left: scheme of the AdEx network of mean-field units (bottom) and estimate of connectivity (top). Right: space-time plots of propagating waves measured experimentally in awake monkey (top) and reproduced using the mean-field model (bottom). Modified from Zerlaut et al., J. Comput. Neurosci., 2018.

A further application of the AdEx mean-field model was to investigate the mechanism and role of cortical propagating waves. Extending the Zerlaut et al. [3] approach shown in Fig. 2, we used mean-field models of V1 propagating waves to investigate their mechanisms and roles [5]. Using a clever series of experiments in which two waves were triggered by two stimuli, we discovered that during the collision between the waves, the combined activity was largely sublinear: there was a significant suppression associated with these waves, in contrast with the amplification one might expect intuitively. In the model, the suppression could be precisely reproduced, and we found that it depends on two ingredients: first, the synaptic interactions need to be conductance-based to account for this nonlinearity, and second, the gain of inhibitory (FS) neurons needs to be larger than that of excitatory (RS) neurons. As mentioned above, the use of the AdEx model allowed us to correctly reproduce this difference of gain, and the mean-field model could capture it. Finally, using an external decoder, it was found that this suppression allows the visual system to disambiguate stimuli, and thus augments visual acuity.
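The sublinearity of wave collisions can be quantified by comparing the response to two simultaneous stimuli against the linear sum of the two single-stimulus responses. The toy sketch below illustrates this analysis with Gaussian response profiles and a saturating nonlinearity standing in for the conductance-based mechanism; it is not the model of [5], just a minimal caricature of the measure.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 201)              # cortical distance (mm, assumed)

def bump(center):
    """Single-wave response profile (Gaussian, sigma = 1 mm, illustrative)."""
    return np.exp(-(x - center) ** 2 / 2.0)

def network(drive):
    """Toy saturating response, standing in for conductance-based sublinearity."""
    return np.tanh(drive)

r1 = network(bump(4.0))                      # response to stimulus 1 alone
r2 = network(bump(6.0))                      # response to stimulus 2 alone
r12 = network(bump(4.0) + bump(6.0))         # response to both stimuli together

suppression = (r1 + r2 - r12).max()          # > 0 means sublinear summation
print(f"max suppression: {suppression:.2f}")
```

Because the nonlinearity is concave, the joint response falls below the linear prediction precisely where the two waves overlap, mimicking the suppression observed at the collision point.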
The mean-field model of V1 reproduced all these features (see details in [5]).

Biologically realistic mean-field models

The next step was to obtain a mean-field model that accurately predicts the behavior of the spiking model. As mentioned above, this requires properly taking adaptation into account. In collaboration with Matteo di Volo (postdoc in my laboratory), we designed a mean-field model including adaptation [6]. This model was re-derived from the Master Equation, but with an added population-level variable for spike-frequency adaptation. The agreement between this adapting mean-field model and the network simulations was remarkable, as it captured the fine details of the time course of the population response to external inputs. Moreover, the adapting mean-field model was able to account for a fundamental phenomenon: the response of a given network depends on its state of ongoing (spontaneous) activity, which is known as state-dependent responses. The adapting mean-field model accounted for state-dependent responses, and correctly predicted the fact that some network states produce small responses while others are more responsive. We believe this is a crucial property for correctly modeling interactions between brain areas.

Another important feature of the adapting mean-field model is that it can also account for the genesis of different brain states, and in particular slow-wave activity with Up/Down state dynamics. In the spiking network, a transition from self-sustained AI activity to Up/Down state activity can be obtained by modulating the spike-frequency adaptation parameter of the AdEx model [7]. Note that these oscillations do not correspond to a limit cycle, but to a noise-driven switch between two attractors: one is the AI state described previously, and the other is the silent state (with all cells at rest).
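This switch can be caricatured with a single rate unit coupled to a slow adaptation variable. The sketch below is a deliberately simplified, deterministic illustration: a brief external kick stands in for the noise that triggers the Down-to-Up transition, and all parameters (sigmoid shape, coupling, adaptation strength) are invented for illustration rather than taken from [6].

```python
import numpy as np

def F(u, nu_max=50.0, theta=10.0, k=2.0):
    """Illustrative sigmoidal population transfer function (Hz)."""
    return nu_max / (1.0 + np.exp(-(u - theta) / k))

dt, T = 0.5, 3000.0                 # time step and duration (ms)
tau, tau_w = 10.0, 500.0            # rate and adaptation time constants (ms)
J, b = 0.4, 0.1                     # recurrent coupling, adaptation strength

nu, W = 45.0, 0.0                   # start in the Up state, no adaptation yet
trace = np.empty(int(T / dt))
for i in range(trace.size):
    t = i * dt
    I_ext = 15.0 if 2000.0 <= t < 2050.0 else 0.0  # brief kick standing in for noise
    nu += dt / tau * (-nu + F(J * nu - W + I_ext))
    W += dt / tau_w * (-W + b * nu)                 # slow population-level adaptation
    trace[i] = nu

# The Up state collapses once adaptation builds up; the kick restores it
print(trace[1000], trace[3800], trace[5000])        # rates at t = 500, 1900, 2500 ms
```

The unit starts in the active (Up) attractor, falls to the silent (Down) attractor once the slow adaptation variable has grown enough to destabilize it, and returns to the Up state when a transient input arrives, mimicking the fluctuation-driven alternation between attractors.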
The adaptive mean-field model reproduced these dynamics very well, including the transition from sustained AI states to Up/Down state dynamics [6]. It is therefore capable of displaying the two states fundamental to asynchronous and slow-wave dynamics, as found in the waking and sleeping brain (see below for a large-scale simulation of this). Note that in Up/Down states, the silent and active phases may need a state-dependent mean-field approach to be finely modeled ([8]; in collaboration with Cristiano Capone, postdoc in my laboratory). Because the adaptive mean-field finely captures the time course of responses to external input, accounts for state-dependent responses, and can model both asynchronous and Up/Down state dynamics, we believe it is the most accurate mean-field model designed so far, and can be qualified as “biologically realistic”. It constitutes the basis of the large-scale models shown below.

Mean-field models of macroscopic phenomena

The next step, towards macroscopic scales, was to integrate the mean-field models to model phenomena at the scale of several brain areas, up to the entire brain. In collaboration with Jennifer Goldman (postdoc in my laboratory) and others, we used The Virtual Brain (TVB) as a simulation platform to incorporate the adaptive mean-field model into a large network of mean-field units, where the connectivity is given by the human brain connectome (Fig. 3, top). This model, called the “TVB-AdEx” model [9], was shown to generate, at large scale, two fundamental dynamical states, asynchronous-irregular (AI) and Up/Down states, which correspond to the asynchronous and synchronized dynamics of wakefulness and slow-wave sleep, respectively. The synchrony of slow waves appears as an emergent property at large scales when the units are set into the Up/Down state mode (high level of adaptation). This synchrony was lost when the units were set in the asynchronous (AI) mode.
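The architecture of such whole-brain models can be sketched as a set of rate units coupled through a weight matrix. In the sketch below, a sparse random matrix stands in for the human connectome used in [9], and a placeholder sigmoid replaces the fitted AdEx transfer function; the region count and coupling values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 68                                        # assumed number of cortical regions

# Sparse random weights standing in for the human connectome
Cmat = rng.random((N, N)) * (rng.random((N, N)) < 0.2)
np.fill_diagonal(Cmat, 0.0)                   # no self-connections

def F(u):
    """Placeholder sigmoid transfer function (Hz)."""
    return 50.0 / (1.0 + np.exp(-(u - 5.0)))

dt, tau, G = 0.5, 10.0, 0.5                   # time step (ms), time constant (ms), global coupling
nu = rng.random(N) * 10.0                     # initial firing rates (Hz)
for _ in range(4000):                          # 2 s of simulated time
    drive = G * Cmat @ nu / N + 2.0            # long-range input plus baseline drive
    nu += dt / tau * (-nu + F(drive))          # one rate unit per brain region

print(f"mean rate: {nu.mean():.2f} Hz")
```

In the actual TVB-AdEx model, each node is the full adaptive mean-field of [6] (two populations plus adaptation and second-order moments) and the matrix comes from tractography, but the coupling scheme is of this form: each region is driven by the connectome-weighted activity of all the others.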
The model also reproduced the very different patterns of functional connectivity found experimentally in slow-wave compared to asynchronous states. Thus, the TVB-AdEx model can simulate many features of the awake and sleeping brain [9].

Besides spontaneous activity, the TVB-AdEx model was also tested against external stimulation. We simulated experiments with transcranial magnetic stimulation (TMS) during asynchronous and slow-wave states, and showed that, as in experimental data, the effect of the stimulation greatly depends on the activity state of the brain. During slow waves, the response is strong but remains local, in contrast with asynchronous states, where the response is weaker but propagates across brain areas (Fig. 3, bottom left). To compare more quantitatively with wake and slow-wave sleep states, we computed the perturbational complexity index (PCI) and showed that it matches the values estimated from TMS experiments. In the synchronized, sleeping brain, PCI was low, reflecting the local character of the response. In the asynchronous, awake brain, PCI was high, reflecting the fact that the brain is much more responsive in that state (Fig. 3, bottom right). Thus, the TVB-AdEx model replicates some of the properties of synchrony and responsiveness seen in the human brain, and is a promising tool to study spontaneous and evoked large-scale dynamics in the normal, anesthetized or pathological brain.

Figure 3: Whole-brain simulations using the TVB-AdEx model. Top: scheme of the integration of AdEx adaptive mean-field models (from [6]) in The Virtual Brain (TVB) simulator. The connectivity between individual nodes (each represented by a mean-field) is taken from the human connectome. Bottom left: response to a stimulus in the occipital region in synchronized and asynchronous states. Bottom right: perturbational complexity index (PCI) calculated from these responses, for three different stimulus amplitudes.
The PCI is high for asynchronous states, and lower in synchronized states, as found experimentally. Modified from Goldman et al., bioRxiv 424574, 2020.

Limitations of the mean-field approach

The mean-field approach described here can be very accurate in some cases, but it suffers from a number of drawbacks and limitations. First, as mentioned above, the approach relies on knowledge of the transfer function of neurons. This works well for a number of models, such as the integrate-and-fire model, the AdEx and even the Hodgkin-Huxley model. However, the transfer function is not easy to define in other cases, such as bursting neurons. In the thalamus, for example, relay neurons respond very differently depending on whether they are depolarized or hyperpolarized, so in such cases it is difficult to define a proper transfer function.

A second limitation is related to the assumption that the system decorrelates with a characteristic time (called T in the formalism of [1]). The choice of T is not very precise. It is formally defined as a period of time such that the activity of the system depends only on the preceding period of duration T. It was shown that T corresponds to the characteristic decay time of the autocorrelation of the system, which for AI states is between 5 and 10 ms [1]. In some cases, changing the precise value of T may change the behavior of the mean-field model. To properly study this potential problem, one should perform a precise mapping of the parameter space of the AdEx model (as was done in [1] for the integrate-and-fire model).

A third limitation, somewhat related to the choice of T, is that in theory the mean-field model is not valid for dynamics faster than the period T. This is in part because the formalism makes an adiabatic approximation (that the system reaches a quasi steady-state within T; see [1] for details). Consequently, oscillations faster than a frequency of 1/T would formally not be consistent with this formalism.
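In practice, T can be estimated by fitting an exponential to the decay of the activity autocorrelation, as in [1]. The sketch below illustrates this on a synthetic AR(1) trace standing in for a measured population rate; the bin size and decay time are assumed, illustrative values.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 0.5                          # ms per bin (assumed)
tau_true = 7.0                    # ms, inside the 5-10 ms range reported for AI states
rho = np.exp(-dt / tau_true)      # AR(1) coefficient giving that decay time

rng = np.random.default_rng(1)
x = np.empty(200_000)
x[0] = 0.0
for t in range(1, x.size):        # synthetic "population activity" trace
    x[t] = rho * x[t - 1] + rng.normal()

def autocorr(sig, max_lag):
    """Normalized autocorrelation for lags 0..max_lag-1."""
    sig = sig - sig.mean()
    c = np.array([np.dot(sig[:sig.size - lag], sig[lag:]) for lag in range(max_lag)])
    return c / c[0]

lags = np.arange(40) * dt
ac = autocorr(x, 40)
(tau_fit,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags, ac, p0=(5.0,))
print(f"estimated T ~ {tau_fit:.1f} ms")
```

The fitted decay constant recovers the decorrelation time of the trace, which is the quantity used to set T in the formalism.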
The behavior of the mean-field in the oscillatory regime still remains to be explored and understood in detail.

Further developments of mean-field models

A number of developments are presently under way. First, we would like to augment the biological realism of the mean-field models. This is done, first, by using transfer functions calculated from neurons with dendrites [10] (in collaboration with Yann Zerlaut). In this work, we considered the transfer function of neurons departing from the “point-neuron” model, and included dendrites. The fact that synaptic inputs occur on dendrites may have strong consequences on the transfer function, and thus also influences the emergent behavior at larger scales. It will also allow us to apply the formalism to other neuron types for which dendrites are important, in regions such as cerebellum, hippocampus, basal ganglia, etc. (work in progress).

A second development is to conceive a new class of mean-field models based on the properties found in real neurons. Thanks to the semi-analytic approach, it is possible to measure the transfer function of real neurons. This is nontrivial, however, because the inputs must be conductance-based, so dynamic-clamp experiments must be used. A first study of this kind was done recently, in which we measured the transfer function of Layer V cortical neurons in mouse visual cortex using perforated patch recordings [2]. This study revealed that it is possible to obtain a compact description of the transfer function of individual pyramidal neurons, which opens the perspective of building truly “realistic” mean-field models. The study also evidenced a strong cell-to-cell diversity of firing responses, suggesting that appropriate mean-field formalisms have to be designed to integrate this diversity (see below).

In collaboration with Claude Bedard (postdoc and later permanent researcher in my laboratory), we used the mean-field formalism to model electromagnetic phenomena in the brain [11].
The justification is here again provided by the macroscopic nature of brain signals and measurements. For example, impedance measurements done macroscopically, at the level of millimeters or centimeters of brain tissue, require a macroscopic formulation to be correctly modeled. Such a formulation was obtained by deriving a mean-field model directly from Maxwell's equations [12]. This approach was initially motivated by accounting for impedance measurements, and was later extended to current-source density analysis [13], which is also inherently macroscopic. In these formulations, one can directly integrate the macroscopic measurements of electric conductivity and permittivity, and obtain a coherent description of electromagnetic phenomena at large scales.

Another application of the semi-analytic mean-field approach was to calculate mean-field models of networks of complex neurons described by the Hodgkin-Huxley (HH) formalism [14] (in collaboration with Mallory Carlu, Damien Depannemaecker and Matteo di Volo, postdocs in my laboratory). HH models are biophysically more accurate and more complex than integrate-and-fire models. Nevertheless, using the semi-analytic approach, the transfer function could be calculated and integrated in a mean-field model. The resulting mean-field model was able to reproduce the spontaneous activity and the responses to external inputs in networks of HH neurons. The same approach was also followed for Morris-Lecar neurons (see [14] for details).

We also used the mean-field approach to study electric-field interactions in cerebral cortex [15], in collaboration with Bartosz Telenczuk (postdoc in my laboratory) and Mavi Sanchez-Vives (U. Barcelona). In this study, we combined cortical slice experiments with mean-field models to investigate how neighboring cortical columns can interact through electric fields (dipole-dipole interactions).
In this case, the mean-field model integrated the electric-field effect of the neighboring column, as measured in the experiments. The study shows that this electric interaction tends to synchronize cortical columns [15]. Thus, cortical populations forming electric dipoles have a means of interaction which is quasi-instantaneous, and in any case much faster than conventional synaptic transmission.

Finally, in collaboration with Matteo di Volo, we extended the mean-field approach to model heterogeneous systems [16]. As shown by large neuron databases (such as the Allen Brain Atlas), as well as our own experimental investigations [2], neurons are extraordinarily heterogeneous. Even within the same cell class, individual neurons have very different excitability, as shown by the very different transfer functions estimated [2]. It is therefore necessary to depart from the usual paradigm of networks made of identical neurons, and consider the more realistic case of heterogeneous networks. Interestingly, networks of heterogeneous neurons display a different responsiveness than homogeneous networks, and are optimally responsive for intermediate levels of heterogeneity that correspond to experimental estimates [16]. This responsiveness profile could be modeled very well by a Heterogeneous Mean-Field (HMF) framework, in which the distributions of cell properties are explicitly taken into account [16]. The HMF also showed that there exists a relation between the responsiveness and the stability properties of the asynchronous state, which is an interesting direction to develop further in the future.

References