CNS*2023 workshop, Leipzig, 18th-19th July 2023

Workshop on
Optimality, evolutionary trade-offs, Pareto theory and degeneracy in neuronal modeling

by
Alexander Bird (Justus-Liebig University, Giessen & Ernst Strüngmann Institute, Frankfurt)
Philipp Norton (Humboldt University, Berlin)
Peter Jedlicka (Justus-Liebig University, Giessen & Ernst Strüngmann Institute, Frankfurt)
Susanne Schreiber (Humboldt University, Berlin)

Inspired by
Pallasdies et al. (2021) and Jedlicka et al. (2022)

Speakers

Astrid Prinz (Emory University, USA)
Ethan Sorrell (standing in for Tim O'Leary) (University of Cambridge, UK)
Marcel Oberländer (MPI-Caesar, Germany)
Anna Levina (University of Tübingen, Germany)
Rishikesh Narayanan (Indian Institute of Science, India)
Arnd Roth (University College London, UK)
Albert Gidon (Humboldt University, Germany)
Suhita Nadkarni (Indian Institute of Science Education and Research, India)
Linus Manubens-Gil (Southeast University, China)
Wiktor Mlynarski (IST, Vienna; LMU, Munich)
Dylan Festa (Technical University of Munich, Germany)



Titles and Abstracts


Astrid Prinz: Parameter degeneracy in neural cells


Ethan Sorrell: Decoding a degenerate cortical representation to control virtual navigation


Marcel Oberländer: Dissecting cellular and circuit mechanisms for sensation and perception in the cerebral cortex


Anna Levina: Near-critical optimality: sensitivity and confidence trade offs
The brain is thought to be optimized for information processing and decision making; at the same time, it is expected to act reliably in the face of multiple sources of external and internal noise. These two qualities often do not coincide in the same region of parameter space. In particular, proximity to the critical state has been shown in many cases to optimize information-processing capabilities, but it is also associated with the greatest sensitivity to noise and very long relaxation times. In my talk, I will showcase this trade-off and discuss possible solutions.
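
A minimal sketch of the trade-off described above (an illustrative toy model chosen by the editors, not taken from the talk): in a driven autoregressive process x[t+1] = m*x[t] + input + noise, approaching the critical point m -> 1 increases the steady-state response to inputs (sensitivity) but also makes relaxation back to baseline arbitrarily slow.

```python
# Toy illustration: sensitivity vs. relaxation time near criticality (m -> 1)
# in the linear process x[t+1] = m * x[t] + input + noise.  All numbers are
# illustrative assumptions, not parameters from the talk.
import numpy as np

def steady_state_gain(m):
    """Response of the fixed point to a constant unit input: 1 / (1 - m)."""
    return 1.0 / (1.0 - m)

def relaxation_time(m):
    """Time constant (in steps) of the exponential decay back to baseline."""
    return -1.0 / np.log(m)

for m in [0.5, 0.9, 0.99, 0.999]:
    print(f"m = {m:6.3f}   gain = {steady_state_gain(m):8.1f}   "
          f"relaxation time = {relaxation_time(m):8.1f} steps")
```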

Rishikesh Narayanan: Efficient information coding and degeneracy in the nervous system
Efficient information coding (EIC) is a universal biological framework rooted in the fundamental principle that system responses should match natural stimulus statistics in order to maximize information about the environment. In this talk, the following arguments about EIC will be discussed:
1) EIC is ubiquitous: it spans species, systems, and scales. This contrasts with the prevailing assumption that EIC is limited to sensory systems. Illustrative examples spanning all scales, from molecular to behavioral, and multiple species will be presented to emphasize the ubiquity of EIC.
2) Biological complexity and associated degeneracy are effective substrates to achieve EIC in conjunction with system stability. Complexity in a system is characterized by several functionally segregated subsystems that show a high degree of functional integration when they interact with each other. Degeneracy, the ability of structurally different components to yield the same functional outcome, provides an efficacious substrate for implementing stable EIC in biological systems.
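
One classic, concrete instance of the EIC principle (offered here as an illustrative aside with an assumed stimulus distribution, not material from the talk) is histogram equalization: a neuron with a bounded output transmits the most information when its input-output function matches the cumulative distribution of its natural inputs.

```python
# Toy illustration of efficient coding as histogram equalization (Laughlin
# 1981); the lognormal stimulus statistics below are an assumption.
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # skewed "natural" inputs

# The empirical CDF of the stimulus ensemble is the information-maximizing
# nonlinearity for a response bounded between 0 and 1.
sorted_s = np.sort(stimuli)

def response(s):
    return np.searchsorted(sorted_s, s) / sorted_s.size

outputs = response(stimuli)
# The resulting output distribution is approximately uniform, i.e. it has
# maximum entropy for a bounded response range.
density, _ = np.histogram(outputs, bins=10, range=(0.0, 1.0), density=True)
print("output density per bin (close to 1 everywhere for a uniform code):")
print(np.round(density, 2))
```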

Arnd Roth: Energy-efficient axons, active dendrites and sparse coding
Energy is a major constraint for the operation and evolution of the brain. We showed that fast Na+ current decay and delayed K+ current onset during action potentials in nonmyelinated mossy fibres of the rat hippocampus minimize the overlap of their respective ion fluxes. This results in a total Na+ influx, and an associated energy cost per action potential, of only 1.3 times the theoretical minimum, in contrast to the factor of 4 implied by the Hodgkin-Huxley model of the squid giant axon. Simulations showed that the same action potential voltage waveform can be approximated by underlying Na+ and K+ current waveforms with different kinetics, resulting in different energy expenditures per action potential, and that the experimentally measured Na+ and K+ current kinetics minimize this energy cost. Next, an active pyramidal cell model and a model of the synaptic inputs it receives in response to sensory stimulation – both constrained by physiological and anatomical data – were used to simulate dendritic integration in vivo. The combined model shows that small numbers of strong excitatory synapses can trigger dendritic Na+ and NMDA spikes. In turn, only a few dendritic spikes are sufficient to drive an action potential at the soma. As a consequence, as few as 1% of the synaptic inputs to a neuron can determine the tuning of its somatic output in vivo. These results suggest that dendritic spikes can help to make sensory representations more efficient and flexible: they require fewer connections to sustain them, and only a small number of connections need to be changed to encode a different stimulus and alter the response properties of a neuron.
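
A back-of-the-envelope sketch of the overlap argument above (toy Gaussian current waveforms and arbitrary units chosen for illustration, not the measured mossy-fibre currents from the study): the minimum Na+ charge needed for a spike is the capacitive charge C*dV, and any K+ efflux that coincides with Na+ influx must be compensated by extra Na+ entry.

```python
# Toy illustration: overlap between Na+ influx and K+ efflux inflates the Na+
# load of a spike relative to the capacitive minimum C * dV.  All waveforms
# and units are illustrative assumptions.
import numpy as np

C = 1.0                                  # membrane capacitance (arbitrary units)
t = np.linspace(0.0, 3.0, 3001)          # time in ms
dt = t[1] - t[0]

def gaussian_pulse(center, width):
    return np.exp(-0.5 * ((t - center) / width) ** 2)

def excess_na_factor(k_delay):
    i_na = gaussian_pulse(1.0, 0.15)                 # inward Na+ current
    i_k = gaussian_pulse(1.0 + k_delay, 0.25)        # outward K+ current
    i_k *= i_na.sum() / i_k.sum()                    # equal total charge, so V returns to rest
    v = np.cumsum(i_na - i_k) * dt / C               # voltage excursion above rest
    q_min = C * v.max()                              # capacitive minimum charge for this spike
    q_na = i_na.sum() * dt                           # total Na+ charge actually delivered
    return q_na / q_min

print("excess Na+ factor, strongly overlapping K+:", round(excess_na_factor(0.05), 2))
print("excess Na+ factor, delayed K+ onset:       ", round(excess_na_factor(0.60), 2))
```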

Albert Gidon: Dendritic complexity: bug or feature?
Does the complexity of biological neurons, particularly their dendrites and rich nonlinear spiking behavior, arise from biological and evolutionary constraints, or does it serve as a means to gain computational power? This long-standing question is more relevant today as relatively simple artificial neurons with well-defined input/output functions achieve remarkable success. In contrast, in-depth studies of biological neurons over the last decade have uncovered ever more complex dendritic behavior, yet their input/output functions are still poorly understood. This talk will approach the question from an electrophysiological perspective, drawing on our published and new data from human and rodent cortical dendrites of pyramidal cells and interneurons, in an attempt to better understand the intricacies of biological neuron behavior.

Suhita Nadkarni: Energy-information prescription for synapses
Synapses are hotspots for learning and memory and can be extremely complex. They possess diverse morphologies, receptor types, ion channels, and second messengers. The differences between synaptic designs across brain areas suggest a link between synaptic form and function. Additionally, several brain disorders have a synaptic basis. However, direct measurements at a synapse are often difficult. Motivated by the synapses' vital role in brain function and the experimental constraints that may pose a barrier to a complete understanding of brain function, we construct 'in silico' models of synapses with unprecedented detail. This modeling framework lets us address fundamental questions about information processing at synapses and make quantitative predictions. The human brain makes up about 2% of body weight but uses about 25% of the body's total energy budget. Within that, signal transmission at synapses alone is an energetically expensive process and consumes more than 50% of the brain's total energy. If every electrical impulse generated in the brain were transmitted to the connected synapses, the brain would need at least five times more energy than it already consumes. The CA3-CA1 synapse in the hippocampus is a crucial component of the neural circuit associated with learning. This synapse has a curiously low fidelity: only about 1 in 5 impulses is transmitted. The low transmission rate suggests that a synaptic design that lowers energy consumption is favored. However, unreliable transmission can lead to a massive loss of information. We used information transmission and energy utilization, fundamental constraints that govern neural organization, to gain insights into the relationship between the form and function of this synapse. We show that unreliable neurotransmitter release and its activity-dependent enhancement (short-term plasticity), a characterizing attribute of this synapse, maximize the information transmitted in an energetically cost-effective manner. Remarkably, our analysis reveals that synapse-specific quirks ensure that the information rate is independent of the release probability. Thus, even as ongoing long-term memory storage continues to fuel heterogeneity in synaptic strengths, individual synapses maintain robust information transmission.
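
A textbook-style toy calculation of the energy-information logic sketched above (all parameters, including the number of release sites and relative costs, are the editors' assumptions and not taken from the detailed models in the talk): treating the synapse as a channel whose energy bill grows with the number of released vesicles shows why a low release probability can maximize bits per unit energy even though it reduces bits per spike.

```python
# Toy calculation with assumed parameters: information per unit energy as a
# function of release probability at an unreliable synapse.
import numpy as np

def binary_entropy(x):
    x = np.clip(x, 1e-12, 1.0 - 1e-12)
    return -(x * np.log2(x) + (1.0 - x) * np.log2(1.0 - x))

s = 0.5                            # probability of a presynaptic spike per bin (assumed)
n_sites = 5                        # independent release sites (assumed)
c_spike, c_vesicle = 1.0, 5.0      # assumed relative energy costs
p = np.linspace(0.01, 1.0, 200)    # release probability per site

q = 1.0 - (1.0 - p) ** n_sites                          # P(any release | spike)
info = binary_entropy(s * q) - s * binary_entropy(q)    # bits per bin (Z-channel)
energy = s * (c_spike + c_vesicle * n_sites * p)        # expected cost per bin

print(f"bits per bin alone is maximal at p = {p[np.argmax(info)]:.2f}")
print(f"bits per unit energy is maximal at p = {p[np.argmax(info / energy)]:.2f}")
```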

Linus Manubens-Gil: Can Pareto optimality help us understand intellectual disabilities?
Intellectual disability provides an excellent opportunity to explore the relevance of fine structural details because many disorders show specific architectural alterations that correlate with cognitive performance. We aimed to study how the network topology of neuronal circuits is affected by dendritic architectural features in a mouse model of Down syndrome, and upon the rewiring effect of pro-cognitive treatment. We did so by exploring a minimal 2D computational model of cortical layer II/III parameterized by experimental data on the dendritic tree architecture of healthy mice and two Down syndrome mouse models. Our work suggests that dendritic tree architecture and the distribution of synaptic contacts have significant implications for how optimal single neurons are in terms of information-processing efficiency and storage capacity, and that those single-neuron features carry over to the network level, determining the computational capacities of neural ensembles. However, morphologies obtained in synthetic explorations show capacities well beyond those of wild-type neurons, suggesting that additional constraints should be added to tether the morphospace exploration to biologically realistic scenarios.
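
For readers unfamiliar with the Pareto framing in the title, a minimal sketch (with hypothetical random scores, not data from this study): given candidate morphologies scored on two objectives that are both to be maximized, the Pareto-optimal set contains those that cannot be improved on one objective without losing on the other.

```python
# Toy Pareto-front extraction over hypothetical morphology scores (both
# objectives to be maximized); the scores are random, not study data.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random((200, 2))   # columns: [processing efficiency, storage capacity]

def pareto_mask(points):
    """Boolean mask of points that are not dominated by any other point."""
    n = points.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominates_i = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return keep

front = pareto_mask(scores)
print(f"{front.sum()} of {scores.shape[0]} candidate morphologies are Pareto-optimal")
```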

Wiktor Mlynarski: Tradeoffs, distributions and lucky coincidences in biological function
Our normative understanding of biological systems has been guided by the notion of optimality for many decades. In this view, one proposes a hypothetical problem solved by a studied system, derives an optimal solution to that problem ab initio, and compares the prediction with data. While very successful, this approach may not sufficiently capture the richness of biological phenomena, such as trade-offs between different goals, multiple realizability, degeneracy of solutions, and exaptation. In this talk I will discuss a view proposing that we should depart from the notion of individual "optima" towards a probabilistic description of good solutions to specific biological problems. Using our past results, I will discuss the possibility that such an approach could capture a range of phenomena in a single, unified theoretical framework, enabling not just a broader normative perspective, but also a practical approach to data analysis.
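
One generic way to formalize "a distribution over good solutions" (offered as an illustrative construction, not necessarily the formalism used in the talk) is a maximum-entropy distribution p(x) proportional to exp(beta * U(x)) over candidate solutions x with utility U: large beta recovers the strict optimum, while moderate beta admits a whole family of near-optimal, possibly degenerate solutions.

```python
# Toy construction: a maximum-entropy distribution over solutions,
# p(x) ~ exp(beta * U(x)), for an assumed utility with two equally good optima.
import numpy as np

x = np.linspace(-2.0, 2.0, 401)
U = -(x**2 - 1.0) ** 2          # degenerate utility: equally good solutions at x = +1 and -1

for beta in [1.0, 10.0, 100.0]:
    p = np.exp(beta * (U - U.max()))
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-300))
    print(f"beta = {beta:6.1f}: entropy of the solution distribution = {entropy:.2f} nats")
```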

Dylan Festa: The role of excitatory and inhibitory plasticity in the spontaneous formation of robust neural assemblies
Neural assemblies are thought to be the fundamental units of neural processing in the brain. "Assembly-like" activity emerges early in development, characterized by high correlations among subsets of neurons. It is often assumed that these shared activations originate from a local, self-organized connectivity structure, where neurons in the same assembly are more strongly connected to each other than to neurons in other assemblies. However, the mechanisms by which these assemblies form and are maintained, and particularly the trade-offs between functional effectiveness, energy consumption, and robustness to environmental variations, remain elusive. Existing models have focused on how assemblies can self-organize purely through excitatory-to-excitatory interactions. However, inhibitory neurons are an integral part of neural networks, stabilizing network activity and playing diverse roles in neural coding. Experiments have shown that inhibitory activity can be selectively tuned to stimuli or to decisions along with excitation, leading to the hypothesis that assemblies are composed of co-tuned E/I components. In this work, we propose a concurrent mechanism of spontaneous assembly formation in which inhibitory interactions play a crucial role. We model spiking recurrent networks where the early consolidation of a bidirectional excitatory/inhibitory structure forms an "inhibitory scaffolding", which in turn favors the robust formation of recurrent excitatory assemblies. We then compare the properties of these networks with those that self-organize purely through excitatory-to-excitatory interactions. Additionally, with the same plasticity rules, networks can self-repair after widespread synaptic damage and preserve a general assembly structure even when heavily perturbed, resembling representational drift. In conclusion, by including the essential contribution of an inhibitory component, our results advance the understanding of possible self-organization mechanisms in spiking recurrent networks.
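
As background for readers new to assembly formation, a deliberately minimal rate-model sketch (generic Hebbian potentiation with weight normalization; much simpler than the spiking E/I mechanism proposed in the talk, and not the authors' model): repeated co-activation of neuron groups turns a random excitatory weight matrix into a block-structured, assembly-like one.

```python
# Deliberately minimal rate-model toy (generic Hebbian rule plus weight
# normalization; not the spiking E/I model from the talk): co-activated
# groups of neurons develop block-structured, assembly-like connectivity.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_assemblies = 20, 2
labels = np.repeat(np.arange(n_assemblies), n_neurons // n_assemblies)

W = np.abs(rng.normal(0.0, 0.05, (n_neurons, n_neurons)))  # initial random weights
np.fill_diagonal(W, 0.0)

eta, w_total = 0.05, 1.0
for step in range(500):
    k = rng.integers(n_assemblies)                 # stimulate one group of neurons
    r = (labels == k).astype(float)                # their rates go to 1, the rest stay at 0
    W += eta * np.outer(r, r)                      # Hebbian potentiation of co-active pairs
    np.fill_diagonal(W, 0.0)
    row_sums = W.sum(axis=1, keepdims=True)
    W *= w_total / np.maximum(row_sums, w_total)   # cap the total input weight per neuron

same = labels[:, None] == labels[None, :]
print(f"mean within-assembly weight:  {W[same].mean():.3f}")
print(f"mean between-assembly weight: {W[~same].mean():.3f}")
```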

Alex Bird: Degeneracy and Pareto optimality in single neuron modelling


Philipp Norton: Understanding trade-offs with Pareto theory