Implementing efficient "shotgun" inference of neural connectivity from highly sub-sampled activity data
published: March 7, 2016, recorded: December 2015, views: 1246
Inferring connectivity in neuronal networks remains a key challenge in statistical neuroscience. The common input problem presents a major roadblock: it is difficult to reliably distinguish causal connections between pairs of observed neurons from correlations induced by common input from unobserved neurons. Available techniques allow us to simultaneously record, with sufficient temporal resolution, only a small fraction of the network; consequently, naive connectivity estimators that neglect these common input effects are highly biased. In recent work we proposed a "shotgun" experimental design, in which we observe multiple subnetworks briefly, in a serial manner. Thus, while the full network cannot be observed simultaneously at any given time, much larger subsets of the network can be observed over the course of the entire experiment, ameliorating the common input problem. To perform network inference given this type of data, in which only a small fraction of the network is observed in each time bin, we developed a scalable Bayesian method. The method is derived using a cavity-style approximation to the gradient of the expected log-likelihood in a generalized linear model of a spiking recurrent neural network. Simulations demonstrate that the shotgun experimental design can eliminate the biases induced by common input effects: networks with thousands of neurons, in which only a small fraction of the neurons is observed in each time bin, can be estimated quickly and accurately, achieving orders-of-magnitude speedups over previous approaches.
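To make the setting concrete, the following is a minimal sketch (not the authors' code) of the shotgun observation design: a Bernoulli generalized linear model of a recurrent spiking network is simulated, and each time bin reveals only a random subset of neurons. All sizes and names here (`N`, `T`, `p_obs`, `W`) are illustrative assumptions, and the pairwise cross-covariance estimator at the end stands in for the naive approach the abstract contrasts with, not for the scalable Bayesian method itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 2000        # neurons, time bins (illustrative sizes)
p_obs = 0.2            # fraction of the network observed per time bin

# Sparse ground-truth connectivity: W[i, j] = weight from neuron j to i
W = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.1)
b = -2.0 * np.ones(N)  # baseline log-odds, keeps firing sparse

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# GLM dynamics: s[t] ~ Bernoulli(sigmoid(W @ s[t-1] + b))
s = np.zeros((T, N))
for t in range(1, T):
    s[t] = (rng.random(N) < sigmoid(W @ s[t - 1] + b)).astype(float)

# Shotgun design: an independent random observation mask per time bin.
# No single bin shows the full network, but over the whole experiment
# every pair (i, j) is jointly observed in roughly p_obs**2 of the bins.
mask = rng.random((T, N)) < p_obs

# Naive pairwise cross-covariance estimate, restricted to bin pairs where
# neuron j is observed at t-1 and neuron i at t. With a fixed observed
# subset instead of the shotgun mask, unobserved common inputs bias this.
W_hat = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        ok = mask[1:, i] & mask[:-1, j]  # jointly observed bin pairs
        if ok.sum() > 1:
            W_hat[i, j] = np.cov(s[1:, i][ok], s[:-1, j][ok])[0, 1]
```

The actual method replaces this naive estimator with a Bayesian one whose gradient of the expected log-likelihood is handled with a cavity-style approximation, which is what makes inference scale to thousands of neurons.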