New insights on parameter estimation
published: Oct. 6, 2014, recorded: December 2013, views: 110
I will discuss two new developments in parameter estimation. First, I will show that it is possible to train most deep learning models - regardless of the choice of regularization, architecture, algorithm, or dataset - by learning only a small fraction of the weights and predicting the rest with nonparametric methods. Often this makes it possible to learn as little as 10% of the weights with no drop in accuracy. Second, I will introduce a new method (LAP) for parameter estimation in loopy, sparsely connected undirected probabilistic graphical models. In several domains of practical interest - e.g., grid MRFs and the chimera lattices used in quantum annealing computers - previous statistically efficient estimators had computational complexity exponential in the size of the model. In these domains, the new approach reduces the complexity from exponential to linear.
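The abstract does not specify which nonparametric predictor is used, but the core idea - observe a small random subset of weights and fill in the rest by exploiting their smoothness - can be sketched as follows. This is a toy illustration, not the speaker's method: the smooth "weight vector", the 10% sampling rate, and the Gaussian-kernel (Nadaraya-Watson style) regressor are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: a smooth "weight vector" over n positions, standing in
# for a row of a weight matrix with spatial structure.
n = 100
positions = np.linspace(0.0, 1.0, n)
true_w = np.sin(2 * np.pi * positions)  # stand-in for fully learned weights

# "Learn" only 10% of the weights; the rest will be predicted.
frac = 0.10
idx = rng.choice(n, size=int(frac * n), replace=False)
learned_pos, learned_w = positions[idx], true_w[idx]

def predict(x, xs, ys, bandwidth=0.1):
    """Predict unobserved weights as a kernel-weighted average of the
    observed ones (Gaussian kernel, Nadaraya-Watson estimator)."""
    k = np.exp(-0.5 * ((x[:, None] - xs[None, :]) / bandwidth) ** 2)
    return (k @ ys) / k.sum(axis=1)

w_hat = predict(positions, learned_pos, learned_w)
print(f"max abs reconstruction error: {np.max(np.abs(w_hat - true_w)):.3f}")
```

In a real network the same idea would be applied per weight matrix, with the kernel defined over the spatial layout of the units; the point of the sketch is only that a handful of observed entries plus a smoothness prior determine the rest.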