Sparse coding on D-Wave hardware: setting up the problem

Sparse coding is a very interesting idea that we’ve been experimenting with. It is a way to find ‘maximally repeating patterns’ in data, and use these as a basis for representing that data.

Some of what’s going to follow here is quite technical. However these are beautiful ideas. They are probably related to how human perception and cognition function. I think sparse coding is much more interesting and important than, say, the Higgs Boson.

Everything starts from data

Twenty five data objects. Each is a 28×28 pixel greyscale image.

Sparse coding requires data. You can think of ‘data’ as a (usually large) set of objects, where each object can be represented by a list of real numbers. As an example, we’ll use the somewhat pathological MNIST handwritten digits data set. But you can use any dataset you can imagine. Each data object in this set is a small (28×28 pixel) greyscale image of a handwritten digit. A 28×28 pixel greyscale image can be represented by 28×28 = 784 numbers, each in the range 0..255. The training set has 60,000 of these.

We can represent the entire data set using a two-dimensional array, where there are 60,000 columns (one per image), and 784 rows (one for each pixel). Let’s call this array \hat{Z}_0.
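If you want to play along at home, something like this quick NumPy sketch builds that array (the loader and the names X and Z0 are my own choices here, not anything canonical):

```python
# A minimal sketch of building the data array \hat{Z}_0: 784 rows (pixels),
# 60,000 columns (one per training image). Assumes scikit-learn is
# installed; any MNIST loader would work just as well.
import numpy as np
from sklearn.datasets import fetch_openml

mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X = mnist.data[:60000]          # (60000, 784); the first 60,000 rows are the training set

Z0 = X.T.astype(np.float64)     # (784, 60000): one column per image, pixel values 0..255
print(Z0.shape)                 # (784, 60000)
```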

Technical detail: this thing is bigger than it has to be

What the first few MNIST images look like, keeping an increasingly large number of SVD modes.

One thing you may notice about MNIST is that the images all look kind of similar to one another. In fact we can exploit this to get a quick compression of the data. The trick we will use is called Singular Value Decomposition (SVD). SVD quickly finds a representation of the data that allows you to reduce its dimensionality. In the case of MNIST, it turns out that instead of using 784 pixel values, we can get away with using around 30 SVD modes instead, with only a small degradation in image quality. If we perform an SVD on \hat{Z}_0 and keep only the parts corresponding to the largest 30 singular values, we get a new matrix \hat{Z} which still has 60,000 columns, but only 30 rows. Let’s call the s^{th} column vector \vec{z}_s, which stores the s^{th} compressed image. This trick, and others related to it, can always be used to pre-process any raw data we have.
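Here is roughly what that compression step looks like in NumPy (a sketch only — the names Z0, U30 and Z are mine, and in practice you might prefer a truncated or randomized SVD routine to the full decomposition):

```python
# A sketch of the SVD compression: keep the 30 largest singular values
# and represent each image by its 30 coefficients in the corresponding basis.
import numpy as np

n_modes = 30
U, s, Vt = np.linalg.svd(Z0, full_matrices=False)   # Z0 is the 784 x 60000 array from before

U30 = U[:, :n_modes]        # 784 x 30: the top 30 SVD modes
Z = U30.T @ Z0              # 30 x 60000: the compressed data \hat{Z}
z_s = Z[:, 0]               # \vec{z}_1, the first compressed image

# To look at a compressed image, map it back into pixel space:
approx_image = (U30 @ z_s).reshape(28, 28)
```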

The Dictionary

Let’s now create a small number — let’s say 32 — of very special images. The exact number of these actually matters quite a bit (it’s an example of something called a hyperparameter, many of which lurk in machine learning algorithms and are generally a giant pain in the ass), but the most important thing is that it should be larger than the dimension of the data (which in this case is 30). When this is the case, it’s called an overcomplete basis, which is good for reasons you can read about here.

These images will be in the exact same format as the data we’re learning over. We’ll put these in a new two-dimensional array, which will have 32 columns (one for each image) and 30 rows (the same as the post-SVD compressed images above). We’ll call this array \hat{D} the dictionary. Its columns are dictionary atoms. The j^{th} dictionary atom is the column vector \vec{d}_j. These dictionary atoms will store the ‘maximally repeating patterns’ in our data that are what we’re looking for.
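As a placeholder, the dictionary is just a 30×32 array; here is one way you might set it up before learning (a sketch with my own variable names — random initialization and atom normalization are common choices, not something prescribed by the problem):

```python
# A placeholder dictionary \hat{D}: 30 rows (the post-SVD data dimension)
# and 32 columns (one per dictionary atom). The atoms start out random
# and will be learned from the data.
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_atoms = 30, 32                # data dimension, number of atoms (a hyperparameter)

D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)         # normalize each atom to unit length (optional)
d_j = D[:, 0]                          # \vec{d}_1, the first dictionary atom
```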

At this stage, we don’t know what they are — we need to learn them from the data.

The sparse coding problem

Now that we have a data array, and a placeholder array for our dictionary, we are ready to state the sparse coding problem.

Now to see how all this works, imagine we want to reconstruct an image (say \vec{z}_s) with our dictionary atoms. Imagine we can only either include or exclude each atom, and the reconstruction is a linear combination of the atoms. Furthermore, we want the reconstruction to be sparse, which means we only want to turn on a small number of our atoms. We can formalize this by asking for the solution of an optimization problem.

L0-norm sparse coding

Define a vector of binary (0/1) numbers \vec{w} of length 32. Now solve this problem:

Find \vec{w} and \hat{D} = [\vec{d}_1 \vec{d}_2... \vec{d}_{31} \vec{d}_{32}] (remember the vectors \vec{d}_k are column vectors in the matrix \hat{D}) that minimize

G(\vec{w}, \hat{D} ; \lambda) = || \vec{z}_s - \sum_{k=1}^{32} w_k \vec{d}_k ||^2 + \lambda \sum_{k=1}^{32} w_k

The real number \lambda is called a regularization parameter (another one of those hyperparameters). The larger this number is, the bigger the penalty for adding more dictionary atoms to the reconstruction — the rightmost term counts the number of atoms used in the reconstruction. The first term is a measure of the reconstruction error. Minimizing this means minimizing the distance between the data (sometimes called the ground truth) \vec{z}_s and the reconstruction of the image \sum_{k=1}^{32} w_k \vec{d}_k.
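In code, the objective for a single image is just a few lines (a sketch using the D and z_s arrays from the earlier snippets; the function name G_single is mine):

```python
# A sketch of the single-image L0-norm objective G(w, D; lambda) above,
# where w is a length-32 vector of 0/1 weights.
import numpy as np

def G_single(w, D, z_s, lam):
    """Squared reconstruction error plus lambda times the number of atoms used."""
    reconstruction = D @ w                        # sum_k w_k * d_k
    error = np.sum((z_s - reconstruction) ** 2)   # || z_s - reconstruction ||^2
    penalty = lam * np.sum(w)                     # counts the atoms that are switched on
    return error + penalty

# Evaluate a random on/off pattern, just to see a number come out:
w = np.random.default_rng(1).integers(0, 2, size=32)
print(G_single(w, D, z_s, lam=0.1))
```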

[Note that this prescription for sparse coding is different from the one typically used, where the weights \vec{w} are real valued and the regularization term is of the L1 (sum over absolute values of the weights) form.]

You may see a simple way to globally optimize this. All you have to do is set \vec{d}_1 = \vec{z}_s, \vec{d}_k = 0 for all the other k, w_1 = 1 and w_k = 0 for all the other k — in other words, store the image in one of the dictionary atoms and only turn that one on to reconstruct. Then the reconstruction is perfect, and you only used one dictionary atom to do it!
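You can check this trivial optimum directly with the G_single sketch above: the reconstruction error vanishes and all that’s left is the single-atom penalty \lambda.

```python
# The 'memorize the image' solution: z_s goes into the first atom,
# and only that atom is switched on.
import numpy as np

D_trivial = np.zeros((30, 32))
D_trivial[:, 0] = z_s
w_trivial = np.zeros(32)
w_trivial[0] = 1.0

print(G_single(w_trivial, D_trivial, z_s, lam=0.1))   # prints 0.1, i.e. exactly lambda
```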

OK well that’s useless. But now say we sum over all the images in our data set (in the case of MNIST this is 60,000 images). Now instead of the number of images being less than the number of dictionary atoms, it’s much, much larger. The trick of just ‘memorizing’ the images in the dictionary won’t work any more.

Now over all the data

Let’s call the total number of data objects S. In our MNIST case S = 60,000. We now need to find \hat{W} (this is a 32×S array) and \hat{D} that minimize

G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{32} w_{ks} \vec{d}_k ||^2 + \lambda \sum_{s=1}^S \sum_{k=1}^{32} w_{ks}
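The full objective is just the single-image one summed over every image; something like this sketch (again with my own names — W here is the 32×S array of binary weights, Z the 30×S compressed data):

```python
# A sketch of the full sparse coding objective over all S images:
# summed reconstruction error plus lambda times the total number of atoms used.
import numpy as np

def G_full(W, D, Z, lam):
    """W: 32 x S binary weights, D: 30 x 32 dictionary, Z: 30 x S data."""
    residual = Z - D @ W             # 30 x S array of reconstruction residuals
    error = np.sum(residual ** 2)    # sum over s of || z_s - sum_k w_ks d_k ||^2
    penalty = lam * np.sum(W)        # total number of 1s in W
    return error + penalty
```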

This is the full sparse coding prescription — in the next post I’ll describe how to solve it, what the results look like, and introduce a bunch of ways to make good use of the dictionary we’ve just learned!
