Efficient signal processing in random networks that generate variability: A comparison of internally generated and externally induced variability
published: March 7, 2016, recorded: December 2015, views: 37
The source of cortical variability and its influence on signal processing remain open questions. We address the latter by studying two types of randomly connected networks of quadratic integrate-and-fire neurons with balanced excitation and inhibition that produce irregular spontaneous activity patterns: (a) a deterministic network with strong synaptic interactions that actively generates variability through chaotic dynamics (internal noise), and (b) a stochastic network with weak synaptic interactions that receives noisy input (external noise), e.g. from stochastic vesicle release. These networks of spiking neurons are analytically tractable in the limit of large network size and large channel time constant. Despite the difference in their sources of variability, the spontaneous activity patterns of the two models are indistinguishable unless a majority of neurons are recorded simultaneously. We characterize the network behavior with a dynamic mean-field analysis and reveal a single-parameter family that interpolates between the two networks while preserving nearly identical spontaneous activity. Despite this close similarity in spontaneous activity, the two networks exhibit remarkably different sensitivity to external stimuli. Input to the former network reverberates internally and can be read out over long timescales; input to the latter network decays rapidly and can be read out only briefly. The difference between the two networks is further enhanced if input synapses undergo activity-dependent plasticity, producing a significant difference in the ability to decode external input from neural activity. We show that this difference naturally leads to distinct performance of the two networks in integrating spatio-temporally distinct signals from multiple sources.
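The model class described above can be sketched in a few lines. The following is a minimal, illustrative simulation of a randomly connected network of quadratic integrate-and-fire (QIF) neurons; the Euler integration scheme, spike/reset thresholds, coupling strength, and all other parameter values are assumptions for demonstration, not the settings analyzed in the talk.

```python
import numpy as np

def simulate_qif_network(n=200, t_steps=2000, dt=0.01, g=1.5, seed=0):
    """Euler simulation of a randomly coupled QIF network.

    Each neuron obeys dv/dt = v**2 + I_ext + syn; a spike is registered
    when v crosses v_peak, after which v is reset to v_reset. Spikes are
    fed back through zero-mean random weights (a simple stand-in for a
    balanced network). Parameters are illustrative only.
    """
    rng = np.random.default_rng(seed)
    # Zero-mean random coupling, scaled by 1/sqrt(n) as in balanced networks
    J = g * rng.standard_normal((n, n)) / np.sqrt(n)
    v = rng.uniform(-1.0, 1.0, n)        # membrane potentials
    v_peak, v_reset, I_ext = 10.0, -10.0, 0.5
    tau_s = 0.1                          # synaptic filter time constant
    syn = np.zeros(n)                    # filtered recurrent input
    spikes = np.zeros((t_steps, n), dtype=bool)
    for t in range(t_steps):
        v = v + dt * (v * v + I_ext + syn)
        fired = v >= v_peak
        spikes[t] = fired
        v[fired] = v_reset
        # Exponentially filtered synaptic input driven by the spikes
        syn += dt * (-syn / tau_s) + J @ fired.astype(float)
    return spikes

spikes = simulate_qif_network()
```

Replacing the recurrent drive `J @ fired` with an independent noise term of matched statistics would turn this internal-noise sketch into the external-noise counterpart.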
Unlike its stochastic counterpart, the activity of the deterministic chaotic network can serve as a reservoir for near-optimal Bayesian integration and Monte Carlo sampling from the posterior distribution. We describe the implications of these differences between deterministic and stochastic neural computation for population coding and neural plasticity.
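Reading out a stimulus from reservoir activity is commonly done with a linear decoder fit by ridge regression. The sketch below shows this readout step on surrogate data (a noisy state matrix carrying a sinusoidal input); the function name, the surrogate data, and the ridge parameter are illustrative assumptions, not the decoding procedure used in the talk.

```python
import numpy as np

def fit_linear_readout(states, target, ridge=1e-3):
    """Ridge-regression readout: find w minimizing
    ||states @ w - target||^2 + ridge * ||w||^2,
    where `states` is a (time, neurons) activity matrix."""
    n = states.shape[1]
    A = states.T @ states + ridge * np.eye(n)
    return np.linalg.solve(A, states.T @ target)

# Surrogate demonstration: activity that still carries the input signal
# can be decoded; activity in which the input has decayed cannot.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 8 * np.pi, 500))        # the "external input"
mixing = rng.standard_normal(50)                       # per-neuron input gain
states = 0.1 * rng.standard_normal((500, 50)) + np.outer(signal, mixing)
w = fit_linear_readout(states, signal)
decoded = states @ w
```

In the talk's setting, the contrast is that the chaotic network's states retain such a decodable trace of the input over long timescales, while the stochastic network's states lose it quickly.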
Download slides: netadis2015_dasgupta_random_networks_01.pdf (2.8 MB)