**Training a Large Scale Classifier with the Quantum Adiabatic Algorithm**

Hartmut Neven, Vasil S. Denchev, Geordie Rose, William G. Macready

(Submitted on 4 Dec 2009)

Abstract: In a previous publication we proposed discrete global optimization as a method to train a strong binary classifier constructed as a thresholded sum over weak classifiers. Our motivation was to cast the training of a classifier into a format amenable to solution by the quantum adiabatic algorithm. Applying adiabatic quantum computing (AQC) promises to yield solutions that are superior to those which can be achieved with classical heuristic solvers. Interestingly, we found that by using heuristic solvers to obtain approximate solutions we could already gain an advantage over the standard method AdaBoost. In this communication we generalize the baseline method to large scale classifier training. By large scale we mean that either the cardinality of the dictionary of candidate weak classifiers or the number of weak learners used in the strong classifier exceeds the number of variables that can be handled effectively in a single global optimization. For such situations we propose an iterative and piecewise approach in which a subset of weak classifiers is selected in each iteration via global optimization. The strong classifier is then constructed by concatenating the subsets of weak classifiers. We show in numerical studies that the generalized method again successfully competes with AdaBoost. We also provide theoretical arguments as to why the proposed optimization method, which not only minimizes the empirical loss but also adds L0-norm regularization, is superior to versions of boosting that only minimize the empirical loss. By conducting a Quantum Monte Carlo simulation we gather evidence that the quantum adiabatic algorithm is able to handle a generic training problem efficiently.
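To make the setup concrete, here is a minimal sketch of how selecting weak classifiers with a quadratic loss plus an L0 penalty can be cast as a QUBO (quadratic unconstrained binary optimization) problem. This is my own reconstruction under stated assumptions, not the paper's exact formulation: `build_qubo`, the `{-1, +1}` output convention, and the penalty weight `lam` are all illustrative names and choices.

```python
import numpy as np

def build_qubo(H, y, lam):
    """Hypothetical sketch: binary variable w_i = 1 includes weak
    classifier h_i in the strong classifier.

    H: (n_samples, n_weak) weak-classifier outputs in {-1, +1}.
    y: (n_samples,) labels in {-1, +1}.
    lam: strength of the L0 penalty lam * sum_i w_i.

    Returns Q such that w^T Q w + y^T y equals the quadratic
    empirical loss sum_s (sum_i w_i H[s, i] - y_s)^2 plus the
    L0 regularization term.
    """
    # Pairwise couplings between weak classifiers.
    Q = (H.T @ H).astype(float)
    # Linear terms sit on the diagonal, because w_i^2 = w_i
    # for binary w_i: self-coupling, label correlation, L0 penalty.
    lin = np.diag(Q).copy() - 2.0 * (H.T @ y) + lam
    np.fill_diagonal(Q, lin)
    return Q

def qubo_energy(Q, w):
    """Energy of a binary assignment w under QUBO matrix Q."""
    return float(w @ Q @ w)
```

Minimizing this energy over binary `w` is the discrete global optimization the abstract refers to; the L0 penalty on the diagonal favors strong classifiers built from few weak learners.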

[soon to be arxiv:0912.0779]


Geordie,

Is equation 2 an Ising problem (and can the Ising solver be used to solve the referenced class of problems)?

Craig

Hi Craig

Eq. 2 is a QUBO (the w_i are 0/1 variables). To convert it to Ising form, map the w_i to +1/-1 variables; in that form it is an Ising problem.

Yes, this is the type of problem we solved in hardware during the training of the classifier we're about to present at NIPS.

Geordie,

I applaud the progress!

The Grover search algorithm that you formulated is applicable in the realm of quantitative finance. For instance, you can train the system to recognize an array of exploitable pricing anomalies, and to search for them across a limitless range of securities in polynomial time.

I think I have a hunch about how that can be achieved, and will be blogging about it after the festive season, when I've given it a bit more thought. I'll heavily borrow the syntax in your paper 🙂

[I hope you don’t mind me riding your coat tails]

-Craig

Hi Craig, note that what we're implementing isn't Grover's algorithm; it's an adiabatic quantum optimization algorithm. They are quite different. This might have been a bit confusing in Hartmut's article: the Grover reference was just meant to give an idea of why quantum algorithms are interesting (reducing the number of steps needed to complete an algorithm).