Sample Complexity of Testing the Manifold Hypothesis

author: Hariharan Narayanan, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology
published: March 25, 2011,   recorded: December 2010
Description

The hypothesis that high-dimensional data tends to lie in the vicinity of a low-dimensional manifold is the basis of a collection of methodologies termed Manifold Learning. In this paper, we study statistical aspects of the question of fitting a manifold with nearly optimal least squared error. Given upper bounds on the dimension, volume, and curvature, we show that Empirical Risk Minimization can produce a nearly optimal manifold using a number of random samples that is independent of the ambient dimension of the space in which the data lie. We obtain an upper bound on the required number of samples that depends polynomially on the curvature, exponentially on the intrinsic dimension, and linearly on the intrinsic volume. For constant error, we prove a matching minimax lower bound on the sample complexity, showing that this dependence on intrinsic dimension, volume, and curvature is unavoidable.

Whether the known lower bound for the sample complexity of Empirical Risk Minimization on k-means applied to data in a unit ball of arbitrary dimension is tight has been an open question since 1997. Here ε is the desired bound on the error and δ is a bound on the probability of failure. We improve the best currently known upper bound. Based on these results, we devise a simple algorithm for k-means and another that uses a family of convex programs to fit a piecewise linear curve of a specified length to high-dimensional data, where the sample complexity is independent of the ambient dimension.
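To make the k-means empirical risk concrete: ERM picks the k centers that minimize the average squared distance from each sample to its nearest center. Exact minimization is computationally hard in general, so the sketch below uses Lloyd's algorithm with random restarts as a practical surrogate; the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def kmeans_erm(X, k, n_init=10, n_iter=100, seed=0):
    """Approximate ERM for k-means: Lloyd's algorithm with random
    restarts, returning the centers with the lowest empirical risk
    (mean squared distance to the nearest center)."""
    rng = np.random.default_rng(seed)
    best_centers, best_risk = None, np.inf
    for _ in range(n_init):
        # initialize centers at k distinct random samples
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(n_iter):
            # assign each sample to its nearest center
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            # move each center to the mean of its assigned samples
            new = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        # empirical risk of the converged centers
        risk = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1).mean()
        if risk < best_risk:
            best_risk, best_centers = risk, centers
    return best_centers, best_risk
```

Note that the empirical risk above depends only on pairwise distances to the centers, which is why sample-complexity bounds for this objective can avoid any dependence on the ambient dimension.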

Download slides: nips2010_narayanan_sct_01.pdf (391.8 KB)

Download article: nips2010_1076.pdf (319.0 KB)
