Robust Large-Scale Machine Learning in the Cloud
published: Sept. 27, 2016, recorded: August 2016, views: 1768
The convergence behavior of many distributed machine learning (ML) algorithms can be sensitive to the number of machines being used or to changes in the computing environment. As a result, scaling to a large number of machines can be challenging. In this paper, we describe a new scalable coordinate descent (SCD) algorithm for generalized linear models whose convergence behavior is always the same, regardless of how much SCD is scaled out and regardless of the computing environment. This makes SCD highly robust and enables it to scale to massive datasets on low-cost commodity servers. Experimental results on a real advertising dataset from Google are used to demonstrate SCD's cost effectiveness and scalability. Using Google's internal cloud, we show that SCD can provide near-linear scaling using thousands of cores for 1 trillion training examples on a petabyte of compressed data. This represents 10,000x more training examples than the 'large-scale' Netflix prize dataset. We also show that SCD can learn a model for 20 billion training examples in two hours for about $10.
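The abstract does not detail the SCD updates, but the core building block it scales out is ordinary coordinate descent for a linear model. The following is a minimal single-machine sketch of that building block for a least-squares objective; the function name, the incremental-residual bookkeeping, and the synthetic data are illustrative assumptions, not the paper's distributed algorithm.

```python
import numpy as np

def coordinate_descent(X, y, n_sweeps=100):
    """Illustrative coordinate descent for least squares (not the paper's SCD).

    Repeatedly minimizes the squared loss along one coordinate at a time,
    keeping the residual r = y - Xw up to date incrementally.
    """
    n, d = X.shape
    w = np.zeros(d)
    residual = y.copy()            # r = y - Xw with w = 0
    col_sq = (X ** 2).sum(axis=0)  # precomputed ||x_j||^2 per column
    for _ in range(n_sweeps):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue  # constant-zero feature: nothing to update
            # Exact minimizer of the squared loss along coordinate j.
            delta = X[:, j] @ residual / col_sq[j]
            w[j] += delta
            residual -= delta * X[:, j]
    return w

# Hypothetical usage on noiseless synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true
w_hat = coordinate_descent(X, y)
```

The key property that makes coordinate descent attractive here is that each update touches only one feature column and the shared residual; SCD's contribution, per the abstract, is partitioning this work so that the update sequence (and hence convergence) is identical no matter how many machines participate.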