1 Billion Instances, 1 Thousand Machines and 3.5 Hours
published: Jan. 19, 2010, recorded: December 2009, views: 4086
Training conditional maximum entropy models on massive data sets requires significant computational resources, but by distributing the computation, training time can be significantly reduced. Recent theoretical results have shown that conditional maximum entropy models trained by mixing the weights of independently trained models converge to solutions of comparable quality to those produced by traditional distributed training schemes, while training substantially faster. The speedup comes primarily from reducing network communication costs, a factor that is often overlooked but is in practice quite significant.
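A minimal sketch of the weight-mixture idea, under the assumption that each shard's model is a multinomial logistic regression (conditional maxent) trained locally and the per-shard weights are simply averaged at the end. The trainer, function names, and toy data below are illustrative, not the implementation used in the talk, which ran on a large cluster; the point is only that no gradients or per-iteration messages cross shard boundaries, which is where the communication savings come from.

```python
import numpy as np

def train_maxent_shard(X, y, num_classes, lr=0.1, epochs=100, reg=1e-3):
    """Train a multinomial logistic regression model on one shard by
    batch gradient descent on the L2-regularized log-loss (illustrative)."""
    n, d = X.shape
    W = np.zeros((num_classes, d))
    Y = np.eye(num_classes)[y]                      # one-hot labels, (n, k)
    for _ in range(epochs):
        logits = X @ W.T                            # (n, k)
        logits -= logits.max(axis=1, keepdims=True) # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - Y).T @ X / n + reg * W
        W -= lr * grad
    return W

def mixture_weights(shards, num_classes, mix=None):
    """Independently train one model per shard, then return the
    (optionally weighted) average of their parameter matrices."""
    models = [train_maxent_shard(X, y, num_classes) for X, y in shards]
    if mix is None:
        mix = np.full(len(models), 1.0 / len(models))
    return sum(m * w for m, w in zip(models, mix))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: two Gaussian blobs, split round-robin across 4 "machines".
    X = np.vstack([rng.normal(-1, 1, (200, 5)), rng.normal(1, 1, (200, 5))])
    y = np.array([0] * 200 + [1] * 200)
    idx = rng.permutation(len(y))
    X, y = X[idx], y[idx]
    shards = [(X[i::4], y[i::4]) for i in range(4)]

    W_avg = mixture_weights(shards, num_classes=2)
    preds = (X @ W_avg.T).argmax(axis=1)
    print("training accuracy of averaged model:", (preds == y).mean())
```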