About
In the current era of web-scale datasets, high-throughput biology and astrophysics, and multilingual machine translation, modern datasets no longer fit on a single computer, and traditional machine learning algorithms often have prohibitively long running times. Parallelized and distributed machine learning is no longer a luxury; it has become a necessity. Moreover, industry leaders have already declared that clouds are the future of computing, and new computing platforms such as Microsoft’s Azure and Amazon’s EC2 are bringing distributed computing to the masses.
The machine learning community has been slow to react to these important trends in computing, and it is time for us to step up to the challenge. While some parallel and distributed machine learning algorithms already exist, many relevant issues are yet to be addressed. Distributed learning algorithms should be robust to node failures and network latencies, and they should be able to exploit the power of asynchronous updates. Some of these issues have been tackled in other fields where distributed computation is more mature, such as convex optimization and numerical linear algebra, and we can learn from their successes and their failures.
The workshop aims to draw the attention of machine learning researchers to this rich and emerging area of problems and to establish a community of researchers interested in distributed learning. We would like to define a number of common problems for distributed learning (online/batch, synchronous/asynchronous, cloud/cluster/multicore) and to encourage future research that is comparable and compatible. We also hope to expose the learning community to relevant work in fields such as distributed optimization and distributed linear algebra. The daylong workshop aims to identify research problems that are unique to distributed learning. The target audience includes leading researchers from academia and industry who are interested in distributed and large-scale learning.
Workshop homepage: http://lccc.eecs.berkeley.edu/
Uploaded videos:
Welcome Address
Opening remarks on the Workshop Learning on Cores, Clusters, and Clouds · Jan 13, 2011 · 3780 Views
Keynote Speakers
Averaging algorithms and distributed optimization · Jan 13, 2011 · 7966 Views
Machine Learning in the Cloud with GraphLab · Jan 13, 2011 · 9017 Views
Tutorial
Vowpal Wabbit · Jan 13, 2011 · 19662 Views
Lectures
Optimal Distributed Online Prediction Using Mini-Batches · Jan 13, 2011 · 4634 Views
MapReduce/Bigtable for Distributed Optimization · Jan 13, 2011 · 6657 Views
Distributed MAP Inference for Undirected Graphical Models · Jan 13, 2011 · 4814 Views
Gradient Boosted Decision Trees on Hadoop · Jan 13, 2011 · 24164 Views
Mini Talks
Building Heterogeneous Platforms for End-to-end Online Learning Based on Dataflo... · Jan 13, 2011 · 3810 Views
A Convenient Framework for Efficient Parallel Multipass Algorithms · Jan 13, 2011 · 3671 Views
Parallel Online Learning · Jan 13, 2011 · 3436 Views
The Learning Behind the Gmail Priority Inbox · Jan 13, 2011 · 5591 Views
Learning to Rank on a Cluster using Boosted Decision Trees · Jan 13, 2011 · 10166 Views
Parallel Splash Gibbs Sampling · Jan 13, 2011 · 3994 Views
Distributed Markov chain Monte Carlo · Jan 13, 2011 · 4081 Views
All-Pairs Nearest Neighbor Search on Manycore Systems · Jan 13, 2011 · 3259 Views