A **loss function** is a measure of how well a choice for a set of weak classifiers performs. Let’s start by quantifying how we encode a particular combination of weak classifiers. Given a dictionary with D weak classifiers, define a D-dimensional vector of binary numbers

$\vec{w} = (w_1, w_2, \ldots, w_D), \qquad w_j \in \{0, 1\}$

The elements of $\vec{w}$ that have value 1 represent weak classifiers that are ‘turned on’, and the elements that have value 0 represent weak classifiers that are ‘turned off’. For the choice shown in the figure in the previous post, the vector $\vec{w}$ would have zeroes for all elements except 2, 4, 14, 33, 39 and 45, whose values would be 1. Here are three examples, for the D=52 dictionary defined in Algorithm 2:

$\vec{w} = (0, 0, 0, \ldots, 0)$. This would represent the choice where no weak classifiers were included.

$\vec{w} = (1, 1, 1, \ldots, 1)$. This would represent the choice where all 52 weak classifiers were included.

$\vec{w} = (0, 1, 0, 1, 0, \ldots)$, with ones in positions 2, 4, 14, 33, 39 and 45 only. This would represent the choice where weak classifiers 2, 4, 14, 33, 39 and 45 were included, and all the others were turned off.
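As a concrete sketch, here is how these three choices could be built in Python. The `encode` helper is purely illustrative (it is not part of the QBC algorithm itself):

```python
# Illustrative helper (not part of the QBC algorithm): build the 0/1
# inclusion vector w for a dictionary of D weak classifiers.
D = 52

def encode(included, D):
    """Return a length-D list of 0/1 values, with 1s at the 1-based indices in `included`."""
    w = [0] * D
    for j in included:
        w[j - 1] = 1
    return w

w_none = encode([], D)                        # no weak classifiers included
w_all  = encode(range(1, D + 1), D)           # all 52 included
w_some = encode([2, 4, 14, 33, 39, 45], D)    # the choice from the figure
```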

For any item $\vec{x}_s$ in our training set, each choice for $\vec{w}$ will produce a prediction for what the label should be. We can write this prediction as $\mathrm{sign}\!\left(\sum_{j=1}^{D} w_j\, h_j(\vec{x}_s)\right)$, where $h_j(\vec{x}_s) = \pm 1$ is the output of the $j$th weak classifier. Since all of our training data comes with labels $y_s = \pm 1$, we can compare this prediction to the actual label: if the prediction is correct, we want the function that compares them to return zero, and if the prediction is wrong, we want it to return 1. If we then sum over all the elements in the training data, we get

$L_{01}(\vec{w}) = \sum_{s=1}^{S} \frac{1}{2} \left| \mathrm{sign}\!\left( \sum_{j=1}^{D} w_j\, h_j(\vec{x}_s) \right) - y_s \right|$

The function $L_{01}(\vec{w})$ is called the zero-one loss and simply counts the number of errors that a particular choice for $\vec{w}$ made – if it got every single member of the training set right, $L_{01}(\vec{w})$ will be zero. If it got them all wrong, $L_{01}(\vec{w})$ will be equal to S, the total number of elements in the training set.
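In code, the zero-one loss can be sketched as follows. The assumptions here are mine: weak classifier outputs and labels are ±1, and a tied vote counts as a prediction of −1 (the post doesn’t specify how ties are broken):

```python
def sign(x):
    """Sign with ties broken toward -1 (an assumption; the post doesn't specify ties)."""
    return 1 if x > 0 else -1

def zero_one_loss(w, H, y):
    """Count the training items the strong classifier mislabels.
    w: length-D list of 0/1 inclusion variables
    H: H[s][j] is the +1/-1 output of weak classifier j on training item s
    y: true +1/-1 labels
    """
    errors = 0
    for h_s, y_s in zip(H, y):
        vote = sum(wj * hj for wj, hj in zip(w, h_s))
        if sign(vote) != y_s:
            errors += 1
    return errors
```

On toy data where one weak classifier matches the labels exactly, including only that classifier gives a loss of zero, and including only a classifier that is always wrong gives a loss of S.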

In the QBC algorithm, it will be necessary to only include terms in the loss function that are either linear or quadratic in the variables $w_j$, because that’s the format that D-Wave hardware natively accepts. Because $L_{01}(\vec{w})$ has a ‘sign’ function in it, we won’t be able to use this exact functional form. Instead, we will use a related form called the quadratic loss.

The quadratic loss has the form

$L(\vec{w}) = \sum_{s=1}^{S} \left( \frac{1}{D} \sum_{j=1}^{D} w_j\, h_j(\vec{x}_s) - y_s \right)^2$

Comparing this to the zero-one loss, we see that the main difference is that we have removed the ‘sign’ function and replaced the vote with the normalized sum $\frac{1}{D} \sum_{j=1}^{D} w_j\, h_j(\vec{x}_s)$.
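A sketch of the quadratic loss, under the same toy assumptions as before (±1 weak classifier outputs and labels):

```python
def quadratic_loss(w, H, y):
    """Sum over training items of (normalized vote - label)^2.
    With the sign function gone, the result is a polynomial in the w_j,
    which is what makes this form quadratic in the optimization variables.
    """
    D = len(w)
    total = 0.0
    for h_s, y_s in zip(H, y):
        pred = sum(wj * hj for wj, hj in zip(w, h_s)) / D
        total += (pred - y_s) ** 2
    return total
```

Note that unlike the zero-one loss, this version also penalizes correct predictions whose normalized vote is far from ±1, which is part of why the two losses can disagree about which $\vec{w}$ is best.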

Having to convert the loss function from ‘the one we really want’ to ‘one the hardware can handle’ is not ideal, and is one of the drawbacks of the QBC algorithm. However it turns out that the quadratic loss performs well for building good classifiers in many circumstances. In later modules we’ll introduce procedures for expanding the kinds of optimization problems we can run in hardware, but for now we’ll just use the quadratic loss.

**Regularization**

When building a classifier, it is desirable to have as few weak classifiers included as possible – you want your strong classifier to be as simple as you can make it, while retaining some desired level of performance.

Overly complicated strong classifiers will tend to perform better on your training set, but more poorly on your validation and test sets (and in the real world on examples they haven’t seen yet). This happens because of a phenomenon known as over-fitting – a strong classifier formed using a large number of weak classifiers can adjust itself to the idiosyncrasies of your training set, and not generalize as well to as-yet-unseen examples as a simpler model would.

You can think of over-fitting as being analogous to “memorization” of the training set features, without “understanding” what it is about these features that is behind the classification. A student who crams for an exam at the last moment by memorizing a large number of specific facts may well do quite well on the exam, because they have memorized the specific facts they need in that situation, but will have trouble applying the underlying concepts to other situations.

Here we will use what is known as zero-norm regularization to fight against over-fitting. This penalizes the inclusion of many weak classifiers.

The regularization term we will use is $\lambda \|\vec{w}\|_0 = \lambda \sum_{j=1}^{D} w_j$, where $\lambda$ is an as yet unspecified real number. (For a binary vector, the zero-norm $\|\vec{w}\|_0$ is just the number of ones, i.e. the number of weak classifiers included.)

**Ranking Performance: The Optimization Objective Function**

Now that we have a loss function and a regularization term, we’re ready to define a procedure for ranking the performance of combinations of weak classifiers. Here we’ll try to introduce the basic concepts. We’ll formalize the procedure in a later post.

In order to find the best possible set of weak classifiers to form our strong classifier, we construct a function that is the sum of the loss function and the regularization term:

$E(\vec{w}) = \sum_{s=1}^{S} \left( \frac{1}{D} \sum_{j=1}^{D} w_j\, h_j(\vec{x}_s) - y_s \right)^2 + \lambda \sum_{j=1}^{D} w_j$

We’ll refer to $E(\vec{w})$ as the *optimization objective function*. For each choice of $\vec{w}$, $E(\vec{w})$ returns a number. The first term in $E(\vec{w})$ — the loss function term — returns numbers that are lower the closer $\vec{w}$ gets to correctly labeling all elements of the training set. The second term in $E(\vec{w})$ — the regularization term — just counts the number of ones in $\vec{w}$, multiplied by the number $\lambda$.

Now let’s see how we can use $E(\vec{w})$ to do something useful. First, let’s find the settings of $\vec{w}$ that return the lowest possible value of $E(\vec{w})$ for some particular choice of $\lambda$ (say $\lambda = 0$). We write this as

$\vec{w}^{\,opt}(\lambda) = \arg\min_{\vec{w}} E(\vec{w})$

The ‘arg min’ notation simply means ‘return the value of $\vec{w}$ that minimizes $E(\vec{w})$’, and $\vec{w}^{\,opt}(\lambda)$ is that value.

Alright, so what is $\vec{w}^{\,opt}(\lambda)$? It is just the list of weak classifiers that, when considered as a group, ‘perform best’ (meaning minimize the optimization objective function) for the choice of $\lambda$ we made. In the case where we set $\lambda = 0$, ‘performs best’ means that the set of weak classifiers given by $\vec{w}^{\,opt}(0)$ has the lowest possible number of mismatches between predicted labels and actual labels for our training data.
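For intuition, here is a brute-force sketch of this minimization on a made-up toy problem with D = 3. Exhaustive search over all $2^D$ settings is exactly the kind of work the hardware is meant to take over; it is hopeless at D = 52, but fine for three variables:

```python
from itertools import product

def objective(w, H, y, lam):
    """Quadratic loss plus lambda times the number of included weak classifiers."""
    D = len(w)
    loss = sum((sum(wj * hj for wj, hj in zip(w, h_s)) / D - y_s) ** 2
               for h_s, y_s in zip(H, y))
    return loss + lam * sum(w)

def argmin_w(H, y, lam, D):
    """Exhaustive arg min over all 2^D binary vectors (toy sizes only)."""
    return min(product([0, 1], repeat=D), key=lambda w: objective(w, H, y, lam))

# Toy data (invented for illustration): weak classifier 1 matches the
# labels exactly, while classifiers 2 and 3 are noise.
H = [[1, 1, -1], [-1, 1, -1], [1, -1, -1], [-1, -1, -1]]
y = [1, -1, 1, -1]
```

With a small $\lambda$ the minimizer keeps only the useful weak classifier; a very large $\lambda$ forces the all-zeroes vector.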

We now have a possible candidate for our final strong classifier. Let’s call it $\vec{w}^{\,opt}(0)$ to remember that it’s the result we found when using $\lambda = 0$.

If we repeat this process for many different values of $\lambda$, the minimizer $\vec{w}^{\,opt}(\lambda)$ will change. If we look at the limit where $\lambda$ is very large, the cost of including even a single weak classifier is prohibitive and $\vec{w}^{\,opt}(\lambda)$ will become the vector of all zeroes.

So for every value we set $\lambda$ to, we get a candidate suggestion for a strong classifier. How can we select which of these might work best in the real world? The solution returned for $\lambda = 0$ doesn’t take advantage of regularization, so you might expect it to suffer from over-fitting – the vector $\vec{w}^{\,opt}(0)$ would have too many ones in it (i.e. too many weak classifiers). The solutions returned for very large $\lambda$ will probably be too sparse to work well. Somewhere in between these will be the best choice.

In order to find what this best choice is, we now turn to our validation set. Recall that we partitioned our original data set into three groups – training, validation and test – and so far we’ve just used the training set.

For every value of $\lambda$ we tried, we got a solution $\vec{w}^{\,opt}(\lambda)$. We now rank these using the following procedure. For each solution we compute the validation error

$E_v(\lambda) = \sum_{v=1}^{V} \frac{1}{2} \left| \mathrm{sign}\!\left( \sum_{j=1}^{D} w_j^{\,opt}(\lambda)\, h_j(\vec{x}_v) \right) - y_v \right|$

where $V$ is the number of elements in the validation set.

The validation error simply counts the number of elements of the validation set that a strong classifier candidate mislabels. We can then rank the vectors $\vec{w}^{\,opt}(\lambda)$ from lowest validation error (i.e. performs best on the validation set) to highest validation error (performs worst on the validation set).

We’d expect that values of $\lambda$ that are either too high or too low would give higher validation errors, with the lowest somewhere in the middle. The candidate with the lowest validation error is the vector we want to output.
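The selection step can be sketched like this. The candidate vectors and validation data below are invented for illustration, and ties in the vote are again broken toward −1 by assumption:

```python
def validation_error(w, H_val, y_val):
    """Count validation items mislabeled by the strong classifier encoded by w.
    H_val[v][j] is the +1/-1 output of weak classifier j on validation item v."""
    errors = 0
    for h_v, y_v in zip(H_val, y_val):
        vote = sum(wj * hj for wj, hj in zip(w, h_v))
        pred = 1 if vote > 0 else -1   # ties broken toward -1 (an assumption)
        errors += (pred != y_v)
    return errors

def best_candidate(candidates, H_val, y_val):
    """candidates: dict mapping lambda -> the minimizer w found for that lambda.
    Returns the (lambda, w) pair with the lowest validation error."""
    return min(candidates.items(),
               key=lambda item: validation_error(item[1], H_val, y_val))
```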

Once we have the value of $\vec{w}^{\,opt}(\lambda)$ that produces the smallest validation error, we’re done! That value encodes the final strong classifier that is the output of the QBC algorithm. It is the best possible strong classifier we could have made, given our choices for training and validation data and our dictionary of weak classifiers.

**Notation: Massaging the Optimization Objective Function**

In what follows it will be useful to have a couple of different ways of writing down our optimization objective function. In particular when we introduce the hardware, we usually write the form of the optimization problem in a different way that’s closer to the way we specify the machine language parameters of a chip.

The form we introduced in the previous section was

$E(\vec{w}) = \sum_{s=1}^{S} \left( \frac{1}{D} \sum_{j=1}^{D} w_j\, h_j(\vec{x}_s) - y_s \right)^2 + \lambda \sum_{j=1}^{D} w_j$

One thing we should do here is expand out the squared term in brackets and throw away all of the terms that aren’t functions of $\vec{w}$ (these constants don’t affect the returned optimal value so we can ignore them). When we do this we can write

$E(\vec{w}) = \sum_{j=1}^{D} \sum_{k=1}^{D} Q_{jk}\, w_j\, w_k + \sum_{j=1}^{D} q_j\, w_j$

where $Q_{jk} = \frac{1}{D^2} \sum_{s=1}^{S} h_j(\vec{x}_s)\, h_k(\vec{x}_s)$ and $q_j = \lambda - \frac{2}{D} \sum_{s=1}^{S} h_j(\vec{x}_s)\, y_s$. Note that we are also free to multiply all terms by a constant scaling factor, just so that it’s easier to keep track of the ranges of the terms in the objective function; an overall positive rescaling doesn’t change the minimizer.

We are going to refer to this way of writing the optimization objective function as the **QUBO representation**. QUBO stands for Quadratic Unconstrained Binary Optimization.

**You should convince yourself that multiplying out the original optimization objective function and collecting terms gives the expression above.**
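One way to convince yourself is numerically. The sketch below builds random ±1 toy data, evaluates the objective both directly and through the coefficients $Q_{jk} = \frac{1}{D^2}\sum_s h_j(\vec{x}_s)\,h_k(\vec{x}_s)$ and $q_j = \lambda - \frac{2}{D}\sum_s h_j(\vec{x}_s)\,y_s$, and checks that the two agree up to the dropped constant $\sum_s y_s^2$ for every binary vector:

```python
import random
from itertools import product

random.seed(0)
D, S, lam = 3, 5, 0.3
# Random toy data: H[s][j] is a +1/-1 weak classifier output, y[s] a +1/-1 label.
H = [[random.choice([-1, 1]) for _ in range(D)] for _ in range(S)]
y = [random.choice([-1, 1]) for _ in range(S)]

def original(w):
    """Quadratic loss plus regularization, computed directly."""
    return sum((sum(w[j] * H[s][j] for j in range(D)) / D - y[s]) ** 2
               for s in range(S)) + lam * sum(w)

# QUBO coefficients obtained by expanding the square and dropping the constant.
Q = [[sum(H[s][j] * H[s][k] for s in range(S)) / D**2 for k in range(D)]
     for j in range(D)]
q = [lam - 2 * sum(H[s][j] * y[s] for s in range(S)) / D for j in range(D)]

def qubo(w):
    return (sum(Q[j][k] * w[j] * w[k] for j in range(D) for k in range(D))
            + sum(q[j] * w[j] for j in range(D)))

const = sum(y_s ** 2 for y_s in y)  # the w-independent term we threw away
max_dev = max(abs(original(w) - (qubo(w) + const))
              for w in product([0, 1], repeat=D))
```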

For the purposes of coding up the QBC algorithm, the QUBO representation is all we will need. However as we’ll see in the next post, a slightly different way of looking at the problem is more natural for understanding what the hardware is doing.

If we make a change of variables from 0/1 binary variables to -1/+1 ‘spin’ variables by substituting $w_j = \frac{1 + s_j}{2}$ (with $s_j \in \{-1, +1\}$) into our QUBO representation, we can write our optimization objective function as

$E(\vec{s}) = \sum_{j=1}^{D} \sum_{k=1}^{D} Q_{jk}\, \frac{(1 + s_j)(1 + s_k)}{4} + \sum_{j=1}^{D} q_j\, \frac{1 + s_j}{2}$

If we expand everything out, collect terms and throw away everything that doesn’t depend on the new optimization variables, we get

$E(\vec{s}) = \sum_{j=1}^{D} b_j\, s_j + \sum_{j<k} J_{jk}\, s_j\, s_k$

where $b_j = \frac{1}{2} \left( q_j + \sum_{k=1}^{D} Q_{jk} \right)$ and $J_{jk} = \frac{1}{2} Q_{jk}$ (we write $b_j$ for the linear coefficients to avoid a clash with the weak classifiers $h_j$). We’ll refer to this version as the **Ising representation**, and we’ll see why this is a useful way to look at the problem in the next post.
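As a numerical sanity check (with random toy QUBO coefficients, not real training data), the sketch below converts $Q$ and $q$ to Ising coefficients via $w_j = (1 + s_j)/2$ and verifies that the two energies differ by the same constant for every spin configuration:

```python
import random
from itertools import product

random.seed(1)
D = 3
# Random symmetric toy QUBO coefficients.
Q = [[0.0] * D for _ in range(D)]
for j in range(D):
    for k in range(j, D):
        Q[j][k] = Q[k][j] = random.uniform(-1, 1)
q = [random.uniform(-1, 1) for _ in range(D)]

def energy_qubo(w):
    return (sum(Q[j][k] * w[j] * w[k] for j in range(D) for k in range(D))
            + sum(q[j] * w[j] for j in range(D)))

# Ising coefficients from substituting w_j = (1 + s_j) / 2 and collecting terms.
b = [0.5 * (q[j] + sum(Q[j][k] for k in range(D))) for j in range(D)]
J = {(j, k): 0.5 * Q[j][k] for j in range(D) for k in range(D) if j < k}

def energy_ising(s):
    return (sum(b[j] * s[j] for j in range(D))
            + sum(J[j, k] * s[j] * s[k] for j, k in J))

# The difference should be the same constant for all 2^D spin settings.
offsets = {round(energy_qubo([(sj + 1) // 2 for sj in s]) - energy_ising(s), 9)
           for s in product([-1, 1], repeat=D)}
```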
