Time-series information and unsupervised representation learning
published: Aug. 6, 2013, recorded: April 2013, views: 2867
Numerous control and learning problems face the situation where sequences of high-dimensional, highly dependent data are available, but little or no feedback is provided to the learner. To address this issue, we formulate the following problem. Given a series of observations $X_1,\dots,X_n$ coming from a large (high-dimensional) space $\mathcal{X}$, find a representation function $f$ mapping $\mathcal{X}$ to a finite space $\mathcal{Y}$ such that the series $f(X_1),\dots,f(X_n)$ preserves as much information as possible about the original time-series dependence in $X_1,\dots,X_n$. We show that, for stationary time series, the function $f$ can be selected as the one maximizing the time-series information $h_0(f(X)) - h_\infty(f(X))$, where $h_0(f(X))$ is the Shannon entropy of $f(X_1)$ and $h_\infty(f(X))$ is the entropy rate of the time series $f(X_1),\dots,f(X_n),\dots$. Implications for the problem of optimal control are presented.
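The selection criterion $h_0(f(X)) - h_\infty(f(X))$ can be illustrated with a minimal sketch. All function names below are illustrative, and the entropy rate $h_\infty$ is approximated by a first-order Markov estimate $H(Y_t \mid Y_{t-1})$ rather than the true limit, so this is a simplified proxy for the criterion in the abstract, not the authors' algorithm:

```python
import math
import random
from collections import Counter

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def h0(seq):
    """Empirical entropy of the marginal distribution of f(X_t)."""
    counts = Counter(seq)
    n = len(seq)
    return entropy([c / n for c in counts.values()])

def h_rate_markov(seq):
    """First-order Markov estimate of the entropy rate:
    H(Y_t | Y_{t-1}) under empirical transition frequencies."""
    pair_counts = Counter(zip(seq, seq[1:]))
    state_counts = Counter(seq[:-1])
    n = len(seq) - 1
    rate = 0.0
    for s, cs in state_counts.items():
        trans = [c / cs for (a, b), c in pair_counts.items() if a == s]
        rate += (cs / n) * entropy(trans)
    return rate

def ts_information(seq):
    """Estimated time-series information h_0 - h_infinity (Markov proxy)."""
    return h0(seq) - h_rate_markov(seq)

# Example: a strongly dependent binary chain (stays in its state with
# probability 0.9), compared under two candidate representations.
random.seed(0)
raw = [0]
for _ in range(999):
    raw.append(raw[-1] if random.random() < 0.9 else 1 - raw[-1])

f_identity = list(raw)         # keeps the temporal dependence
f_constant = [0 for _ in raw]  # collapses everything: no information left

print(ts_information(f_identity))  # positive: dependence preserved
print(ts_information(f_constant))  # 0.0: dependence destroyed
```

Maximizing this quantity over candidate representations favors the identity map over the constant map, matching the intuition that a good $f$ should keep the marginal entropy high while keeping the process predictable from its own past.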
Download slides: machine_ryabko_representation_learning_01.pdf (97.3 KB)