## Training a Binary Classifier with the Quantum Adiabatic Algorithm

published: Dec. 20, 2008,   recorded: December 2008,   views: 3142

# Slides

- Training a Binary Classifier with the Quantum Adiabatic Algorithm
- Outline
- Solving hard optimization problems using adiabatic quantum computing - 1
- Solving hard optimization problems using adiabatic quantum computing - 2
- The adiabatic theorem
- Why bother using AQC?
- D-Wave's hardware - 1
- D-Wave's hardware - 2
- Training a binary classifier
- Hyperplanes
- Hyperplanes in N = 3
- Modifications I: reduce bits to represent weights
- Modifications II: quadratic loss
- Implementation details: dictionaries
- Implementation details: classical optimization methods
- Experiments I: Synthetic data
- Results for synthetic data with order 1 dictionary
- Results for synthetic data with order 2 dictionary
- Experiments II: Natural data
- Conclusions


# Description

This paper describes how to make the problem of binary classification amenable to quantum computing. A formulation is employed in which the binary classifier is constructed as a thresholded linear superposition of a set of weak classifiers. The weights in the superposition are optimized in a learning process that strives to minimize the training error as well as the number of weak classifiers used. No efficient solution to this problem is known. To bring it into a format that allows the application of adiabatic quantum computing (AQC), we first show that the bit precision with which the weights need to be represented grows only logarithmically with the ratio of the number of training examples to the number of weak classifiers. This makes it possible to formulate the training process effectively as a binary optimization problem. Solving it with heuristic solvers such as tabu search, we find that the resulting classifier outperforms a widely used state-of-the-art method, AdaBoost, on a variety of benchmark problems. Moreover, we discovered the interesting fact that bit-constrained learning machines often exhibit lower generalization error rates. Changing the loss function that measures the training error from 0-1 loss to least squares maps the training to quadratic unconstrained binary optimization. This corresponds to the format required by D-Wave's implementation of AQC. Simulations with heuristic solvers again yield results better than those obtained with boosting approaches. Since the resulting quadratic binary program is NP-hard, additional gains can be expected from applying the actual quantum processor.
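The mapping sketched in the abstract can be illustrated concretely: with 1-bit weights w_i ∈ {0, 1}, the quadratic loss plus a sparsity penalty expands algebraically into a quadratic unconstrained binary optimization (QUBO) over the weight bits. The sketch below is illustrative only and not the paper's implementation; the weak-classifier outputs, the penalty strength `lam`, and the brute-force minimizer (standing in for tabu search or the quantum hardware) are all assumptions for demonstration.

```python
import numpy as np

# Sketch: strong classifier y_hat = sign(sum_i w_i h_i(x)) with binary
# weights w_i in {0, 1}, trained under quadratic loss plus a sparsity
# penalty lam * sum_i w_i. All names and values here are illustrative.

rng = np.random.default_rng(0)

S, N = 100, 8                              # training examples, weak classifiers
H = rng.choice([-1.0, 1.0], size=(S, N))   # h_i(x_s): weak-classifier outputs
y = rng.choice([-1.0, 1.0], size=S)        # labels in {-1, +1}
lam = 0.5                                  # sparsity penalty (assumed value)

# Expanding sum_s (y_s - (1/N) sum_i w_i h_i(x_s))^2 + lam * sum_i w_i
# (and dropping the constant y^T y) gives a QUBO: min_w  w^T Q w + q^T w.
Q = (H.T @ H) / N**2                       # quadratic (pairwise) coefficients
q = -2.0 * (H.T @ y) / N + lam             # linear coefficients

def qubo_energy(w):
    """Objective value for a binary weight vector w."""
    return w @ Q @ w + q @ w

# Exhaustive search over all 2^N bit assignments stands in for the
# heuristic solvers (tabu search) or the adiabatic quantum processor.
best_w, best_e = None, np.inf
for b in range(2**N):
    w = np.array([(b >> i) & 1 for i in range(N)], dtype=float)
    e = qubo_energy(w)
    if e < best_e:
        best_w, best_e = w, e

pred = np.sign(H @ best_w + 1e-12)         # thresholded linear superposition
acc = float(np.mean(pred == y))
```

Because the objective is quadratic in the bits and carries no constraints, it matches the QUBO format the abstract attributes to D-Wave's AQC implementation; the exhaustive loop is only feasible here because N is tiny.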