Learning from Labeled and Unlabeled Data: When the Smoothness Assumption Holds
published: March 11, 2011, recorded: February 2011, views: 4767
In recent years, there has been growing interest in learning algorithms capable of utilizing both labeled and unlabeled data for prediction tasks. This interest is driven by the cost of assigning labels, which can be very high for large datasets. Two main settings have been proposed in the literature to exploit the information contained in both labeled and unlabeled data: the semi-supervised setting and the transductive setting. The former is a type of inductive learning, since the learned function is used to make predictions on any possible observation. The latter asks for less, since it is only interested in making predictions for a set of unlabeled data known at learning time.
Focusing on the transductive setting, we discuss the underlying smoothness assumption and its validity for several data types characterized by (positive) autocorrelation, such as spatial and networked data. In particular, we report on the application of transductive learning approaches to these data types and on results obtained in domains characterized by scarcity of labeled data. Finally, we discuss the transductive setting from the more general perspective of relational data mining.
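To make the smoothness assumption concrete, the sketch below (not taken from the lecture; the graph, labels, and iteration count are illustrative assumptions) runs a standard label-propagation scheme on a tiny networked dataset: labels diffuse along edges, so positively autocorrelated neighbors end up with the same prediction, which is exactly the transductive use of smoothness.

```python
import numpy as np

# Illustrative sketch: transductive label propagation on a small graph,
# relying on the smoothness assumption that connected (autocorrelated)
# nodes tend to share labels.

# Adjacency matrix of a 6-node graph with two clusters, {0,1,2} and
# {3,4,5}, joined by a single bridge edge (2,3).
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Scarce labels, as in the transductive setting: only node 0 (class 0)
# and node 5 (class 1) are labeled; the rest are known but unlabeled.
labeled = {0: 0, 5: 1}
n, n_classes = W.shape[0], 2

# Class-score matrix; unlabeled rows start uniform.
F = np.full((n, n_classes), 0.5)
for i, y in labeled.items():
    F[i] = np.eye(n_classes)[y]

# Row-normalized transition matrix: each node averages its neighbors.
P = W / W.sum(axis=1, keepdims=True)

# Iterate: diffuse labels along edges, then clamp the labeled nodes.
for _ in range(100):
    F = P @ F
    for i, y in labeled.items():
        F[i] = np.eye(n_classes)[y]

predictions = F.argmax(axis=1)
print(predictions)  # → [0 0 0 1 1 1]
```

Because the only supervision is one labeled node per cluster, the predictions for the four unlabeled nodes are recovered purely from the graph structure — the propagated class-1 scores converge to (1/7, 2/7) in the left cluster and (5/7, 6/7) in the right, so each cluster inherits its labeled node's class.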