Making Very Large-Scale Linear Algebraic Computations Possible Via Randomization
Published: Jan. 19, 2010 · Recorded: December 2009
Description
The demands on software for analyzing data are rapidly increasing as ever larger data sets are generated in medical imaging, in the analysis of large networks such as the World Wide Web, in image and video processing, and in a range of other applications. To handle this avalanche of data, software must fully exploit modern hardware, which is characterized by multiple processors and capacious but slow memory. The evolution of computer architecture is currently forcing a shift in algorithm design away from classical algorithms, which were designed for single-processor computers with all the data available in Random Access Memory (RAM), towards algorithms that aggressively minimize communication costs. This tutorial will describe a set of recently developed techniques for standard linear algebraic computations (such as computing a partial singular value decomposition of a matrix) that are very well suited to implementation on multi-core or other parallel architectures, and to processing data that is stored on disk or streamed. These techniques are based on randomized sampling to reduce the effective dimensionality of the data. Remarkably, randomized sampling not only loosens communication constraints; it does so while maintaining, or even improving, the accuracy and robustness of existing deterministic techniques.
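To make the idea concrete, the following is a minimal NumPy sketch of a randomized partial SVD in the spirit the description outlines: sample the range of the matrix with a random test matrix, orthonormalize the sample, project the matrix onto the resulting low-dimensional subspace, and factor the small projected matrix deterministically. The function name randomized_svd, the oversampling parameter p, and the fixed seed are illustrative choices, not part of the lecture itself.

    import numpy as np

    def randomized_svd(A, k, p=10, seed=0):
        # Approximate a rank-k SVD of A via a randomized range finder.
        # k is the target rank; p is an oversampling parameter (illustrative).
        rng = np.random.default_rng(seed)
        m, n = A.shape
        # Sketch the column space of A with a random Gaussian test matrix.
        Omega = rng.standard_normal((n, k + p))
        Y = A @ Omega                          # (m, k+p) sample of the range
        Q, _ = np.linalg.qr(Y)                 # orthonormal basis for the sample
        # Project A onto the subspace and factor the small matrix densely.
        B = Q.T @ A                            # (k+p, n), cheap to decompose
        U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
        U = Q @ U_hat                          # lift left factors back up
        return U[:, :k], s[:k], Vt[:k, :]

    # Example: approximate the top 10 singular triplets of a 2000 x 500 matrix.
    A = np.random.default_rng(1).standard_normal((2000, 500))
    U, s, Vt = randomized_svd(A, k=10)

Note that A is touched only through the two products A @ Omega and Q.T @ A, each of which needs a single pass over the matrix; this is what makes such schemes attractive for data stored on disk, streamed, or distributed across processors, which is precisely the communication advantage the description emphasizes.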