Relational Learning as Collective Matrix Factorization
published: Feb. 14, 2008, recorded: February 2008
We present a unified view of matrix factorization models, including the singular value decomposition, non-negative matrix factorization, probabilistic latent semantic indexing, and generalizations of these models to exponential families and non-regular Bregman divergences. Relational data can be modeled as a set of matrices rather than a single matrix: each matrix represents the value of a relation between two entity types, and matrices that describe the same entity type share a dimension and a tied low-rank representation. Our example domain is augmented collaborative filtering, where both user ratings and side information about items are available. To predict the value of a relation, we extend Bregman matrix factorization to such a set of related matrices, and we show that an alternating minimization scheme admits a practical Newton step for each factor. We also cover stochastic second-order methods for large matrices.
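To make the alternating scheme concrete, here is a minimal NumPy sketch of collectively factoring two matrices with a shared item factor, using squared loss (the Gaussian member of the exponential family). Under squared loss each alternating Newton step has a closed form (a ridge-regression solve); the variable names, the l2 penalty lam, and the equal weighting of the two matrices are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

def collective_mf(X, Y, k=10, lam=0.1, iters=50, seed=0):
    """Jointly factor X ~ U V^T (users x items) and Y ~ V Z^T
    (items x features) with a shared item factor V, by alternating
    closed-form updates. Squared loss only: each update is the exact
    Newton step of the corresponding ridge-regression subproblem.
    (Illustrative sketch; the talk's framework covers general
    Bregman divergences, for which the Newton step is iterative.)
    """
    m, n = X.shape
    n2, p = Y.shape
    assert n == n2, "X's columns and Y's rows must index the same items"
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(m, k))
    V = rng.normal(scale=0.1, size=(n, k))
    Z = rng.normal(scale=0.1, size=(p, k))
    I = np.eye(k)
    for _ in range(iters):
        # U-step: argmin_U ||X - U V^T||^2 + lam ||U||^2
        U = np.linalg.solve(V.T @ V + lam * I, V.T @ X.T).T
        # Z-step: argmin_Z ||Y - V Z^T||^2 + lam ||Z||^2
        Z = np.linalg.solve(V.T @ V + lam * I, V.T @ Y).T
        # V-step: V appears in both losses, so its Hessian couples them
        V = np.linalg.solve(U.T @ U + Z.T @ Z + lam * I,
                            U.T @ X + Z.T @ Y.T).T
    return U, V, Z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 50))  # toy user-item ratings
    Y = rng.normal(size=(50, 8))    # toy item side information
    U, V, Z = collective_mf(X, Y, k=5)
    print("relative rating reconstruction error:",
          np.linalg.norm(X - U @ V.T) / np.linalg.norm(X))
```

The V-step illustrates the "tied representation": the item factor must explain the ratings and the side information simultaneously, which is what lets side information improve rating prediction.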