Stationary Subspace Analysis
Non-stationarities are a ubiquitous phenomenon in real-world data, yet they challenge standard Machine Learning methods: if training and test distributions differ, we cannot, in principle, generalise from the observed training sample to the test distribution. This affects both supervised and unsupervised learning algorithms. In a classification problem, for instance, we may infer spurious dependencies between data and label from the training sample that are mere artefacts of the non-stationarities. Conversely, identifying the sources of non-stationary behaviour in order to better understand the analyzed system often lies at the heart of a scientific question. To this end, we propose a novel unsupervised paradigm: Stationary Subspace Analysis (SSA). SSA decomposes a multivariate time series into a stationary and a non-stationary subspace. We derive an efficient algorithm that hinges on an optimization procedure in the Special Orthogonal Group. By exploiting the Lie group structure of the optimization manifold, we can explicitly factor out the inherent symmetries of the problem and thereby reduce the number of parameters to the exact degrees of freedom. The practical utility of our approach is demonstrated in an application to Brain-Computer Interfacing (BCI).
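To make the idea concrete, here is a minimal sketch of the SSA objective, not the authors' implementation: the data are whitened, split into epochs, and a rotation is sought so that the first d components have (approximately) the same mean and covariance in every epoch, measured by the KL divergence of each epoch's projected distribution from a standard normal. For simplicity the rotation is parametrized over the full SO(D) via the matrix exponential of an antisymmetric matrix, without factoring out the symmetries that the paper's Lie-group treatment exploits; all function and variable names below are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def ssa_sketch(X, d, n_epochs=10):
    """Rough SSA sketch. X: (T, D) time series; d: stationary dimension.
    Returns projection matrices onto the stationary and non-stationary
    subspaces (each applied to the raw data as P @ x)."""
    T, D = X.shape
    # Globally center and whiten, so non-stationarity shows up as
    # epoch-wise deviation from N(0, I).
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X.T))
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Xw = X @ W.T
    # Epoch-wise statistics.
    epochs = np.array_split(Xw, n_epochs)
    mus = [e.mean(axis=0) for e in epochs]
    covs = [np.cov(e.T) for e in epochs]
    iu = np.triu_indices(D, 1)

    def rotation(theta):
        # Antisymmetric parametrization: expm maps onto SO(D).
        A = np.zeros((D, D))
        A[iu] = theta
        return expm(A - A.T)

    def loss(theta):
        P = rotation(theta)[:d]  # candidate stationary projection
        val = 0.0
        for mu, S in zip(mus, covs):
            m = P @ mu
            Sp = P @ S @ P.T
            # KL( N(m, Sp) || N(0, I) ), summed over epochs.
            _, logdet = np.linalg.slogdet(Sp)
            val += 0.5 * (np.trace(Sp) + m @ m - d - logdet)
        return val

    res = minimize(loss, np.zeros(D * (D - 1) // 2), method="L-BFGS-B")
    R = rotation(res.x)
    return R[:d] @ W, R[d:] @ W  # stationary / non-stationary projections
```

Because the objective only depends on the span of the first d (and last D-d) rows of the rotation, rotations within either subspace leave the loss unchanged; the paper's contribution is to remove exactly these redundant directions from the parametrization.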