## Graph Transduction via Alternating Minimization

author: Jun Wang, Department of Electrical Engineering, Columbia University
published: Aug. 1, 2008,   recorded: July 2008,   views: 267

# Slides

- Graph Transduction via Alternating Minimization
- Outline of the presentation
- Graph Transduction – Review (1)
- Graph Transduction – Review (2)
- Graph Transduction – Review (3)
- Graph Transduction – Review (4)
- Graph Transduction – Problem Cases
- Methodology – Our Choice for ς
- Methodology – Label Regularizer
- Methodology – Optimize F
- Methodology – Gradient Greedy (1)
- Methodology – Gradient Greedy (2)
- Final Algorithm
- Some Intuition
- Intuition
- Computation Efficiency
- Experiments – Toy Data
- Experiments – WebKB Data
- Experiments – USPS Digits Data
- Summary – Questions


# Description

Graph transduction methods label input data by learning a classification function that is regularized to exhibit smoothness along a graph over labeled and unlabeled samples. In practice, these algorithms are sensitive to the initial set of labels provided by the user. For instance, classification accuracy drops if the training set contains weak labels, if imbalances exist across label classes, or if the labeled portion of the data is not chosen at random. This paper introduces a propagation algorithm that more reliably minimizes a cost function over both a function on the graph and a binary label matrix. The cost function generalizes prior work in graph transduction and also introduces node normalization terms for resilience to label imbalances. We demonstrate that global minimization of the function is intractable, and instead provide an alternating minimization scheme that incrementally adjusts the function and the labels towards a reliable local minimum. Unlike prior methods, the resulting propagation of labels does not prematurely commit to an erroneous labeling and obtains more consistent labels. Experiments are shown for synthetic and real classification tasks, including digit and text recognition. A substantial improvement in accuracy compared to state-of-the-art semi-supervised methods is achieved. The advantages are even more dramatic when labeled instances are limited.
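The alternating scheme described above can be illustrated with a minimal sketch. The code below is not the authors' exact algorithm (which includes node normalization terms and a specific greedy-gradient label update); it is a simplified illustration, assuming an unnormalized graph Laplacian cost of the form tr(FᵀLF) + μ‖F − Y‖², alternating a closed-form solve for the continuous function F with a greedy update that commits one unlabeled node per step to its most confident class. All function and variable names here are hypothetical.

```python
import numpy as np

def alternating_graph_transduction(W, y_init, n_steps=50, mu=1.0):
    """Simplified sketch of graph transduction via alternating minimization.

    W      : (n, n) symmetric affinity matrix over all samples
    y_init : (n, c) binary label matrix; all-zero rows mark unlabeled nodes
    Alternately minimizes tr(F'LF) + mu * ||F - Y||_F^2 over F (closed form)
    and Y (greedy: label the single most confident unlabeled node).
    """
    n, c = y_init.shape
    D = np.diag(W.sum(axis=1))
    L = D - W                                # unnormalized graph Laplacian
    Y = y_init.astype(float).copy()
    labeled = Y.sum(axis=1) > 0

    A = np.linalg.inv(L + mu * np.eye(n))    # fixed per graph, reused each step
    F = np.zeros_like(Y)
    for _ in range(n_steps):
        # Step 1: fix Y, set dQ/dF = 0  ->  F = mu * (L + mu*I)^{-1} Y
        F = mu * A @ Y
        # Step 2: fix F, commit only the most confident unlabeled node,
        # so the propagation never labels low-confidence nodes prematurely
        unlabeled = np.where(~labeled)[0]
        if len(unlabeled) == 0:
            break
        i = unlabeled[np.argmax(F[unlabeled].max(axis=1))]
        Y[i, np.argmax(F[i])] = 1.0
        labeled[i] = True
    return F, Y
```

Committing one label at a time, rather than thresholding all of F at once, mirrors the motivation in the abstract: an early mistake on an ambiguous node is avoided because that node is only labeled after its neighbors' labels have stabilized.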