Time-series information and unsupervised representation learning

author: Daniil Ryabko, SequeL lab, INRIA Lille - Nord Europe
published: Aug. 6, 2013,   recorded: April 2013,   views: 2872




Numerous control and learning problems face the situation where sequences of high-dimensional, highly dependent data are available, but little or no feedback is provided to the learner. To address this issue, we formulate the following problem. Given a series of observations $X_1,\dots,X_n$ coming from a large (high-dimensional) space $\mathcal{X}$, find a representation function $f$ mapping $\mathcal{X}$ to a finite space $\mathcal{Y}$ such that the series $f(X_1),\dots,f(X_n)$ preserves as much information as possible about the original time-series dependence in $X_1,\dots,X_n$. We show that, for stationary time series, the function $f$ can be selected as the one maximizing the time-series information $h_0(f(X)) - h_\infty(f(X))$, where $h_0(f(X))$ is the Shannon entropy of $f(X_1)$ and $h_\infty(f(X))$ is the entropy rate of the time series $f(X_1),\dots,f(X_n),\dots$. Implications for the problem of optimal control are presented.
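To make the objective concrete, the quantity $h_0(f(X)) - h_\infty(f(X))$ can be estimated empirically for a finite-valued series using plug-in entropy estimates. The sketch below is illustrative only and is not the method of the talk: it approximates the entropy rate $h_\infty$ by the conditional entropy of a symbol given the previous $k$ symbols (a $k$-th order Markov approximation), which is an assumption added here for the example.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of the empirical distribution given by counts."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def time_series_information(ys, k=3):
    """Plug-in estimate of h_0(Y) - h_infty(Y) for a finite-valued series ys.

    h_0 is the entropy of the marginal distribution of a single symbol.
    h_infty is approximated by H(Y_t | Y_{t-k}, ..., Y_{t-1}), computed as
    the difference of (k+1)-block and k-block entropies. The choice of k
    is a hypothetical parameter for this illustration.
    """
    h0 = entropy(list(Counter(ys).values()))
    # Block counts over aligned windows: H(Y_{t-k..t}) and H(Y_{t-k..t-1}).
    blocks_k1 = Counter(tuple(ys[i:i + k + 1]) for i in range(len(ys) - k))
    blocks_k = Counter(tuple(ys[i:i + k]) for i in range(len(ys) - k))
    h_inf = entropy(list(blocks_k1.values())) - entropy(list(blocks_k.values()))
    return h0 - h_inf
```

On a perfectly alternating series 0,1,0,1,... the estimate is close to 1 bit: the marginal entropy $h_0$ is 1, while the entropy rate is 0 because each symbol is determined by its predecessor. On an i.i.d. fair-coin series both terms are close to 1, so the time-series information is close to 0 — such a representation retains no information about temporal dependence.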

Download slides: machine_ryabko_representation_learning_01.pdf (97.3 KB)
