# Sparse coding on D-Wave hardware: finding an optimal structured dictionary

I spend most of my time thinking about machine intelligence. I would like to build machines that can think and act like we do. There are many hard problems to solve before we get there.

C. elegans is vermiform, with a cuticle integument and a fluid-filled pseudocoelomate cavity. There are two sexes: male and hermaphrodite.

A thing I’ve been thinking about recently is how the limits of cognition matter in understanding cognition itself. Clearly you can have examples where the machinery is not sophisticated enough. For example a C. elegans roundworm is not going to reverse engineer its own neural system any time soon, even though it is very simple (although the descendants of openworm might…).

Openworm is a really interesting idea. I hope it succeeds. Although the idea of the first life forms on the internet being worms that OBVIOUSLY will grow super intelligent, take over the universe, and consume all its atoms making bigger and bigger Harlem Shake videos is a little off-putting from a human perspective. De Vermis Mysteriis

We are so smart, s-m-r-t

Hoomans are smrt.

Imagine the most intelligent entity possible. Would that thing be able to understand its own cognition? As you crank up a cognitive system’s ability to model its environment, presumably the cognition system itself gets more difficult to understand.

Is the human cognitive system both smart enough and simple enough to self reverse engineer? It’s probably in the right zone. We seem to be smart enough to understand enough of the issues to take a good run at the problem, and our cognitive system seems simple enough not to be beyond its own ability to understand itself. How’s that for some Hofstadter-style recursion.

Anyway enough with the Deep Thoughts. Let’s do some math! Math is fun. Not as fun as universe eating worms. But solving this problem well is important. At least to me and my unborn future vermiform army. Maybe you can help solve it.

A short review of L0-norm sparse coding with structured dictionaries

Last time we discussed sparse coding on the hardware, I introduced an idea for getting around a problem in using D-Wave style processor architectures effectively – the mismatch between the connectivity of the problem we want to solve and the connectivity of the hardware.

Let’s begin by first reviewing the idea. If you’d like a more in-depth overview, here is the original post I wrote about it. Here is the condensed version.

Given

1. A set of $S$ data objects $\vec{z}_s$, where each $\vec{z}_s$ is a real valued vector with $N$ components;
2. An $N \times K$ real valued matrix $\hat{D}$, where $K$ is the number of dictionary atoms we choose, and we define its $k^{th}$ column to be the vector $\vec{d}_k$;
3. A $K \times S$ binary valued matrix $\hat{W}$, whose matrix elements are $w_{ks}$;
4. And a real number $\lambda$, which is called the regularization parameter,

Find $\hat{W}$ and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{K} w_{ks} \vec{d}_k ||^2 + \lambda \sum_{s=1}^S \sum_{k=1}^{K} w_{ks}$

subject to the constraints that $\vec{d}_j \cdot \vec{d}_m = 0$ for all pairs $j,m$ that are not connected in the quantum chip being used.

To solve this problem, we use block coordinate descent, which works like this:

1. First, we generate a random dictionary $\hat{D}$, subject to meeting the orthogonality constraints we’ve imposed on the dictionary atoms.
2. Assuming this fixed dictionary, we solve the optimization problem for the weights $\hat{W}$. These optimization problems are now Chimera-structured QUBOs that fit exactly onto the hardware by construction.
3. Now we fix the weights to these values, and find the optimal dictionary $\hat{D}$, again subject to our constraints.

We then iterate steps 2 and 3 until $G$ converges to a minimum, keeping in mind that this problem is jointly non-convex and the minimum you get will be a local minimum. Each restart of the whole algorithm from a new starting point will lead to a different local minimum, so a better answer can be had by running this procedure several times.

Step 3: Finding an optimal structured dictionary given fixed weights

The hard problem is Step 3 above. Here the weights $\hat{W}$ are fixed, and we want to find an optimal structured dictionary. Here is the formal statement of the problem.

Given

1. An $N \times S$ real valued matrix $\hat{Z}$, where $S$ is the number of data objects, and we define the $s^{th}$ column to be the $s^{th}$ data object $\vec{z}_s$, where each $\vec{z}_s$ is a real valued vector with $N$ components, and the matrix elements of $\hat{Z}$ are $z_{ns}$;
2. An $N \times K$ real valued matrix $\hat{D}$, where $K$ is the number of dictionary atoms we choose, and we define its $k^{th}$ column to be the vector $\vec{d}_k$, and the matrix elements of $\hat{D}$ are $d_{nk}$;
3. And a $K \times S$ binary valued matrix $\hat{W}$ with matrix elements $w_{ks}$;

Find $\hat{D}$ that minimizes

$G^{*}(\hat{D}) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{K} w_{ks} \vec{d}_k ||^2$

$= \sum_{s=1}^S \sum_{n=1}^N (z_{ns}-\sum_{k=1}^{K} w_{ks} d_{nk})^2$

$= ||\hat{Z}-\hat{D} \hat{W}||^2 = \mathrm{Tr}(\hat{A}^T \hat{A})$

where $\hat{A}=\hat{Z}-\hat{D} \hat{W}$, subject to the constraints that $\vec{d}_j \cdot \vec{d}_m = 0$ for all pairs $j,m$ that are not connected in the quantum chip being used.
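To make this concrete, here is a minimal numpy sketch that evaluates $G^{*}$ and checks the constraints; the `allowed_pairs` argument (the set of atom pairs that are connected on the chip) is a hypothetical input, not something defined in this post.

```python
import numpy as np

def objective(Z, D, W):
    """G*(D) = ||Z - D W||^2 = Tr(A^T A), with A = Z - D W."""
    A = Z - D @ W
    return np.sum(A * A)  # same value as np.trace(A.T @ A), computed more cheaply

def constraints_satisfied(D, allowed_pairs, tol=1e-8):
    """Check d_j . d_m = 0 for every pair (j, m) that is not connected on the chip."""
    G = D.T @ D  # Gram matrix of the dictionary atoms
    K = D.shape[1]
    return all(abs(G[j, m]) <= tol
               for j in range(K) for m in range(j + 1, K)
               if (j, m) not in allowed_pairs and (m, j) not in allowed_pairs)
```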

What makes this problem hard is that the constraints on the dictionary atoms are non-linear, and there are a lot of them (one for each pair of variables not connected in hardware).

Ideas for attacking this problem

I’m not sure what the best approach is for trying to solve this problem. Here are some observations:

• We want to be operating in the regime where $\hat{W}$ is sparse. In this limit most of the $w_{ks}$ will be zero. Because the coupling term is quadratic in $\hat{W}$‘s matrix elements, for all L0-norm sparse coding problems most of the coupling terms are going to be zero. This suggests a possible strategy where we could first solve for $\hat{D}$ assuming that the quadratic term was zero, and then whatever we do next could use this as an initial starting point.
• There are some types of matrix operations that would not mess up the structure of the dictionary but would allow parametrization of changes within the allowed space. If we could then optimize over those parameters we could take care of the constraints without having to do any work to enforce them.
• There is a local search heuristic where you can optimize each dictionary atom $\vec{d}_k$ moving from $k=1$ to $k=K$ in order while keeping the other columns fixed, and just iterating until convergence (you have to do some rearranging to ensure the orthogonality is maintained throughout, using the null space idea in the previous post). By itself this will probably get stuck in local optima, but it might work OK as a component; a rough sketch of one such column update is below.
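Here is a rough numpy sketch of a single column update for that third idea: re-optimize atom $k$ with everything else fixed, then project it onto the subspace orthogonal to the atoms it is not allowed to couple to (since the objective is an isotropic quadratic in $\vec{d}_k$, this projection gives the constrained optimum for that column). The `non_neighbours` map is a hypothetical input, and this is a sketch rather than a tested implementation.

```python
import numpy as np

def update_atom(Z, D, W, k, non_neighbours):
    """One column update for the local search heuristic sketched above.

    Re-optimizes dictionary atom k with all other atoms and all weights held
    fixed, then projects it onto the subspace orthogonal to the atoms it may
    not couple to. non_neighbours[k] is a hypothetical list of the atom
    indices not connected to k on the chip.
    """
    w_k = W[k, :].astype(float)                  # weights of atom k across the data set
    if not np.any(w_k):
        return                                    # atom unused; nothing to optimize
    R = Z - D @ W + np.outer(D[:, k], w_k)        # residual with atom k's contribution removed
    d = R @ w_k / (w_k @ w_k)                     # unconstrained optimum for d_k
    B = D[:, non_neighbours[k]]                   # atoms d_k must stay orthogonal to
    if B.size:
        d = d - B @ np.linalg.lstsq(B, d, rcond=None)[0]  # project out span(B)
    D[:, k] = d
```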

What do you think? I might be able to get you time on a machine if you can come up with an interesting way to solve this problem effectively…

# New Nature Communications paper and a bonus NPR interview

Baths can be beneficial, even for qubits.

One of the most important things to try to understand about real quantum computers is how they behave in the presence of environments. Sometimes these environments are called ‘baths’ by physicists. I like this term because it’s really evocative of what’s physically going on. You can imagine any quantum system you’re building as always being ‘bathed’ in the glow of these environments.

It’s a very interesting fact that you can never get away from these baths, even in principle. No object in our universe — as far as we know — can be completely isolated from the rest of the universe. As Lawrence Krauss so eloquently describes, even ‘nothing’ is something.

Even if we were to build a quantum computer in the depths of interstellar space, and cool it to zero temperature, it would still be bathed in a bath formed of the virtual particles that boil and seethe in the fabric of space-time itself. There is no escape from our connections to the physical universe.

A Lovecraftian aside that has nothing to do with the paper or the NPR interview

By the way you Lovecraft fans out there — here is a famous bit from The Dream-Quest of Unknown Kadath:

[O]utside the ordered universe [is] that amorphous blight of nethermost confusion which blasphemes and bubbles at the center of all infinity—the boundless daemon sultan Azathoth, whose name no lips dare speak aloud, and who gnaws hungrily in inconceivable, unlighted chambers beyond time and space amidst the muffled, maddening beating of vile drums and the thin monotonous whine of accursed flutes.

“Azathoth has existed since the universe began. He dwells outside normal time and space. He is blind, idiotic, and indifferent.” Now go watch Krauss describe “Something from Nothing” and tell me they’re not talking about the same thing!

Lovecraft had an uncanny ability to grok modern concepts from physics and weave them into his stories. His descriptions of Azathoth, and the physics underlying Krauss’ explanations of what seems to be physically occurring deep inside the fabric of spacetime, are just too close to not point out. Of course they use different language. But think carefully about the context in which these ideas are being delivered. (Am I stretching making a connection between Krauss’ something that lives in nothing and Lovecraft’s description of Azathoth? Definitely. But I think it’s an interesting thing to think about how these two descriptions might not be incompatible.)

Back to qubits and baths

Anyway back to qubits and baths. This is not just fascinating science (although it is that). It is also a fundamentally important issue in constructing computing machines that harness quantum mechanics. Because all quantum systems MUST live in baths, it’s extremely important to understand in detail how these baths affect their behavior.

Not so long ago, it was suspected that these baths would always destroy the curious properties of quantum mechanics for large objects. But then this turned out to not be true. The first large objects where quantum behavior remained even in the presence of really big and hot baths were loops of superconducting metal — the great-great-great grandparents of our qubits.

Now the question of what effect these baths really have on large collections of large objects is being debated, and goes to the heart of many of the technical issues in building useful quantum computers.

The paper that just published

The paper that just published is called Thermally assisted quantum annealing of a 16-qubit problem.

It describes what I believe to be a key result in advancing this understanding. It looks very carefully at what happens to a quantum system in the presence of a bath, where both the quantum system and the bath have been exquisitely characterized. As was the case when macroscopic quantum coherence was first observed, the results are counter-intuitive.

Here is the abstract from the paper.

Efforts to develop useful quantum computers have been blocked primarily by environmental noise. Quantum annealing is a scheme of quantum computation that is predicted to be more robust against noise, because despite the thermal environment mixing the system’s state in the energy basis, the system partially retains coherence in the computational basis, and hence is able to establish well-defined eigenstates. Here we examine the environment’s effect on quantum annealing using 16 qubits of a superconducting quantum processor. For a problem instance with an isolated small-gap anticrossing between the lowest two energy levels, we experimentally demonstrate that, even with annealing times eight orders of magnitude longer than the predicted single-qubit decoherence time, the probabilities of performing a successful computation are similar to those expected for a fully coherent system. Moreover, for the problem studied, we show that quantum annealing can take advantage of a thermal environment to achieve a speedup factor of up to 1,000 over a closed system.

The key result is that for the specific type of bath acting on a real processor, the quantum effects required for quantum computation can successfully be tapped by protecting them in a specific way. Specifically — and this is a point that has caused much confusion — the decoherence time of the individual qubits, which is the time to decohere in the energy basis, does not set the timescale for losing quantum coherence in the measurement basis. Quantum coherence in the measurement basis (which is the resource tapped in this approach) is an equilibrium property of the system, as long as the bath is not so big and hot that well defined energy eigenstates disappear.

While the paper is primarily an experimental paper, the theory underlying all of this is very satisfactory in my view. Mohammad and his collaborators have developed a very good theoretical understanding of what really happens in real open quantum systems, and the agreement between these models and what is seen in the lab is striking.

So congratulations to all on this result.

The NPR interview and my proudness at working ‘meatiest’ into a national radio program

On a mostly unrelated note, here is a radio piece that Geoff Brumfiel of NPR did recently. It is of note because I managed to work the word ‘meatiest’ into the discussion, of which I am understandably quite proud.

# The Google / NASA Quantum Artificial Intelligence Lab

Update 20/05/2013: Here is how you can apply for time on the system. Exciting!

Update 16/05/2013: Here is some press coverage of the announcement.

When D-Wave was founded in 1999, our objective was to build the world’s first useful quantum computer.

The way I thought about it was that we’d have succeeded if: (a) someone bought one for more than \$10M; (b) it was clearly using quantum mechanics to do its thing; and (c) it was better at something than any other option available. Now all of these have been accomplished, and the original objectives that we’d set for ourselves have all been met.

A historic shot? Hartmut and friends at QIP-2010.

As the hardware matured, we began exploring ways to use its special capabilities. One of the first people I met who was also interested in this problem was Dr. Hartmut Neven, who works at Google. Hartmut is a world leading expert in computer vision, and believed that there might be a role for our technology in computer vision and more generally machine learning.

Machine learning is an important subfield of artificial intelligence. While it is very difficult to even define what intelligence is (there are even more definitions than for quantum computers), one thing that is pretty much universally recognized is that anything we’d call intelligent must be able to learn. Trying to understand how learning from experience works has driven a lot of progress in understanding how human perception and cognition might work.

The Quantum Artificial Intelligence Lab’s mandate is to bring the world’s best machine learning experts together with the world’s most advanced quantum computers, and perform thousands of experiments to explore to what extent machine intelligence and cognition can be advanced by using these new types of computers.

The quest to understand intelligence is one of the most interesting and important challenges that humanity has ever faced. It is a daunting problem. But so was building quantum computers, or even conventional computers for that matter. I believe we can apply the same principles we used to solve the quantum computing problem to the (much harder) problem of understanding how intelligence works.

# First ever head to head win in speed for a quantum computer

Update 5/9/2013:

Update 5/15/2013:

• An update from the conference from Cathy: “Computing Frontiers 2013 Best Paper Award: Experimental Evaluation of an Adiabatic Quantum Computation System for Combinatorial Optimization, by McGeoch and Wang.” Congratulations to Cathy and Carrie!

AMHERST, Mass. – A computer science professor at Amherst College who recently devised and conducted experiments to test the speed of a quantum computing system against conventional computing methods will soon be presenting a paper with her verdict: quantum computing is, “in some cases, really, really fast.”

“Ours is the first paper to my knowledge that compares the quantum approach to conventional methods using the same set of problems,” says Catherine McGeoch, the Beitzel Professor in Technology and Society (Computer Science) at Amherst. “I’m not claiming that this is the last word, but it’s a first word, a start in trying to sort out what it can do and can’t do.”

The quantum computer system she was testing, produced by D-Wave just outside Vancouver, BC, has a thumbnail-sized chip that is stored in a dilution refrigerator within a shielded cabinet at near absolute zero, or 0.02 degrees Kelvin, in order to perform its calculations. Whereas conventional computing is binary, 1s and 0s get mashed up in quantum computing, and within that super-cooled (and non-observable) state of flux, a lightning-quick logic takes place, capable of solving problems thousands of times faster than conventional computing methods can, according to her findings.

# Sparse coding on D-Wave hardware: structured dictionaries

The underlying problem we saw last time, the one that prevented us from using the hardware to compete with tabu on the cloud, was the mismatch between the connectivity of the problems sparse coding generates (which are fully connected) and the connectivity of the hardware.

The source of this mismatch is the quadratic term in the objective function, which for the $j^{th}$ and $m^{th}$ variables is proportional to $\vec{d}_j \cdot \vec{d}_m$. The coupling terms are proportional to the dot product of the dictionary atoms.

Here’s an idea. What if we demand that $\vec{d}_j \cdot \vec{d}_m$ has to be zero for all pairs of variables $j$ and $m$ that are not connected in hardware? If we can achieve this structure in the dictionary, we get a very interesting result. Instead of being fully connected, the QUBOs with this restriction can be engineered to exactly match the underlying problem the hardware solves. If we can do this, we get closer to using the full power of the hardware.

L0-norm sparse coding with structured dictionaries

Here is the idea.

Given

1. A set of $S$ data objects $\vec{z}_s$, where each $\vec{z}_s$ is a real valued vector with $N$ components;
2. An $N \times K$ real valued matrix $\hat{D}$, where $K$ is the number of dictionary atoms we choose, and we define its $k^{th}$ column to be the vector $\vec{d}_k$;
3. A $K \times S$ binary valued matrix $\hat{W}$;
4. And a real number $\lambda$, which is called the regularization parameter,

Find $\hat{W}$ and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{K} w_{ks} \vec{d}_k ||^2 + \lambda \sum_{s=1}^S \sum_{k=1}^{K} w_{ks}$

subject to the constraints that $\vec{d}_j \cdot \vec{d}_m = 0$ for all pairs $j,m$ that are not connected in the quantum chip being used.

The only difference here from what we did before is the last sentence, where we add a set of constraints on the dictionary atoms.

Solving the sparse coding problem using block coordinate descent

We’re going to use the same strategy for solving this as before, with a slight change. Here is the strategy we’ll use.

1. First, we generate a random dictionary $\hat{D}$, subject to meeting the orthogonality constraints we’ve imposed on the dictionary atoms.
2. Assuming this fixed dictionary, we solve the optimization problem for the weights $\hat{W}$. These optimization problems are now Chimera-structured QUBOs that fit exactly onto the hardware by construction.
3. Now we fix the weights to these values, and find the optimal dictionary $\hat{D}$, again subject to our constraints.

We then iterate steps 2 and 3 until $G$ converges to a minimum.

Now we’re in a different regime than before — step 2 requires the solution of a large number of Chimera-structured QUBOs, not fully connected QUBOs. So that makes those problems better fits to the hardware. But now we have to do some new things to allow for steps 1 and 3, and those steps have some cost.

The first of these is not too hard, and introduces a key concept we’ll use for Step 3 (which is harder). In this post I’ll go over how to do Step 1.

Step 1: Setting up an initial random dictionary that obeys our constraints

Alright, so the first thing we need to do is figure out under what conditions we can achieve Step 1.

There is a very interesting result in a paper called Orthogonal Representations and Connectivity of Graphs. Here is a short explanation of the result.

Imagine you have a graph on $V$ vertices. In that graph, each vertex is connected to a bunch of others. Call $p$ the connectivity of the least connected vertex in the graph. Then this paper proves that you can define a set of real vectors in dimension $V - p$ such that non-adjacent vertices in the graph can be assigned orthogonal vectors.

So what we want to do — find a random dictionary $\hat{D}$ such that $\vec{d}_j \cdot \vec{d}_m = 0$ for all $j, m$ not connected in hardware — can be done if the dimension of the vectors $\vec{d}$ is greater than $V - p$.

For Vesuvius, the number $V$ is 512, and the lowest connectivity node in a Chimera graph is $p = 5$. So as long as the dimension of the dictionary atoms is greater than 512 – 5 = 507, we can always perform Step 1.

Here is a little more color on this very interesting result. Imagine you have to come up with two vectors $\vec{g}$ and $\vec{h}$ that are orthogonal (the dot product $\vec{g} \cdot \vec{h}$ is zero). What’s the minimum dimension these vectors have to live in such that this can be done? Well imagine that they both live in one dimension — they are just numbers on a line. Then clearly you can’t do it. However if you have two dimensions, you can. Here’s an example: $\vec{g} = \hat{x}$ and $\vec{h} = \hat{y}$. If you have more than two dimensions, you can also do it, and the choices you make in this case are not unique.

More generally, if you ask the question “how many orthogonal vectors can I draw in an $V$-dimensional space?”, the answer is $V$ — one vector per dimension. So that is a key piece of the above result. If we had a graph with $V$ vertices where NONE of the vertices were connected to any others (minimum vertex connectivity $p = 0$), and want to assign vectors to each vertex such that all of these vectors are orthogonal to all the others, that’s equivalent to asking “given a $V$-dimensional space, what’s the minimum dimension of a set of vectors such that they are all orthogonal to each other?”, and the answer is $V$.

Now imagine we start drawing edges between some of the vertices in the graph, and we don’t require that the vectors living on these vertices be orthogonal. Conceptually you can think of this as relaxing some constraints, and making it ‘easier’ to find the desired set of vectors — so the minimum dimension of the vectors required so that this will work is reduced as the graph gets more connected. The fascinating result here is the very simple way this works. Just find the lowest connectivity node in the graph, call its connectivity $p$, and then ask “given a graph on $V$ vertices, where the minimum connectivity vertex has connectivity $p$, what’s the minimum dimension of a set of vectors such that non-connected vertices in the graph are all assigned orthogonal vectors?”. The answer is $V - p$.

Null Space

Now just knowing we can do it isn’t enough. But thankfully it’s not hard to think of a constructive procedure to do this. Here is one:

1. Generate a matrix $\hat{D}$ where all entries are random numbers between +1 and -1.
2. Normalize each column so that its norm is one.
3. For each column of $\hat{D}$ after the first, moving from left to right, compute the null space of the columns to its left, and then replace that column with a random unit vector written in that null space basis.

If you do this you will get an initial random orthonormal basis as required in our new procedure.

By the way, here is some Python code for computing a null space basis for a matrix $\hat{A}$. It’s easy but there isn’t a native function in numpy or scipy that does it.

```python
import numpy
from scipy.linalg import qr

def nullspace_qr(A):
    # The trailing columns of Q form an orthonormal basis for the null space of A
    A = numpy.atleast_2d(A)
    Q, R = qr(A.T)
    ns = Q[:, R.shape[1]:].conj()
    return ns
```
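And here is a sketch of the full constructive procedure, using the `nullspace_qr` routine above. It builds a fully orthonormal dictionary, which over-satisfies the Chimera constraints, so it assumes the atom dimension $N$ is at least $K$.

```python
import numpy

def random_orthonormal_dictionary(N, K, rng=None):
    """Sketch of Step 1: an N x K dictionary with mutually orthonormal columns.

    Full orthogonality over-satisfies the hardware constraints, so this
    simple construction assumes N >= K.
    """
    rng = rng if rng is not None else numpy.random.default_rng()
    D = numpy.empty((N, K))
    first = rng.uniform(-1.0, 1.0, N)
    D[:, 0] = first / numpy.linalg.norm(first)
    for k in range(1, K):
        ns = nullspace_qr(D[:, :k].T)                  # basis orthogonal to columns 0..k-1
        v = ns @ rng.uniform(-1.0, 1.0, ns.shape[1])   # random vector in that subspace
        D[:, k] = v / numpy.linalg.norm(v)
    return D
```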

OK so step 1 wasn’t too bad! Now we have to deal with step 3. This is a harder problem, which I’ll tackle in the next post.

# Some new Rainier science

Here is a short break from the sparse coding mayhem. A recent paper by some interesting folks appeared today on the arxiv. They ran some experiments on the Rainier-based system at USC.

Here is some of what they found:

Our experiments have demonstrated that quantum annealing with more than one hundred qubits takes place in the D-Wave One device… the device has sufficient ground state quantum coherence to realise a quantum annealing of a transverse field Ising model.

Here is a link to the arxiv paper.

# Sparse coding on D-Wave hardware: things that don’t work

Ice, ice baby.

For Christmas this year, my dad bought me a book called Endurance: Shackleton’s Incredible Voyage, by Alfred Lansing.

It is a true story about folks who survive incredible hardship for a long time. You should read it.

Shackleton’s family motto was Fortitudine Vincimus — “by endurance we conquer”. I like this a lot.

On April 22nd, we celebrate the 14th anniversary of the incorporation of D-Wave. Over these past 14 years, nearly everything we’ve tried hasn’t worked. While we haven’t had to eat penguin (yet), and to my knowledge no amputations have been necessary, it hasn’t been a walk in the park. The first ten things you think of always turn out to be dead ends or won’t work for some reason or other.

Here I’m going to share an example of this with the sparse coding problem by describing two things we tried that didn’t work, and why.

Where we got to last time

In the last post, we boiled down the hardness of L0-norm sparse coding to the solution of a large number of QUBOs of the form

Find $\vec{w}$ that minimizes

$G(\vec{w}; \lambda) = \sum_{j=1}^{K} w_j [ \lambda + \vec{d}_j \cdot (\vec{d}_j -2 \vec{z}) ] + 2 \sum_{j < m}^K w_j w_m \vec{d}_j \cdot \vec{d}_m$

I then showed that using this form has advantages (at least for getting a maximally sparse encoding of MNIST) over the more typical L1-norm version of sparse coding.

I also mentioned that we used a variant of tabu search to solve these QUBOs. Here I’m going to outline two strategies we tried to use the hardware to beat tabu that ended up not working.

These QUBOs are fully connected, and the hardware isn’t

The terms in the QUBO that connect variables $j$ and $m$ are proportional to the dot product of the $j^{th}$ and $m^{th}$ dictionary atoms $\vec{d}_j$ and $\vec{d}_m$. Because we haven’t added any restrictions on what these atoms need to look like, these dot products can all be non-zero (the dictionary atoms don’t need to be, and in general won’t be, orthogonal). This means that the problems generated by the procedure are all fully connected — each variable is influenced by every other variable.

Unfortunately, when you build a physical quantum computing chip, this full connectivity can’t be achieved. The chip you get to work with connects any given variable with only a small number of other variables.

There are two ways we know of to get around the mismatch between the connectivity of a problem we want to solve and the connectivity of the hardware. The first is called embedding, and the second is to use the hardware to perform a type of large neighborhood local search as a component of a hybrid algorithm we call BlackBox.

Solving problems by embedding

In a quantum computer, qubits are physically connected to only some of the other qubits. In the most recent spin of our design, each qubit is connected to at most 6 other qubits in a specific pattern which we call a Chimera graph. In our first product chip, Rainier, there were 128 qubits. In the current processor, Vesuvius, there are 512.

Chimera graphs are a way to use a regular repeating pattern to tile out a processor. In Rainier, the processor graph was a four by four tiling of an eight qubit unit cell. For Vesuvius, the same unit cell was used, but with an eight by eight tiling.

For a detailed overview of the rationale behind embedding, and how it works in practice for Chimera graphs, see here and here, which discuss embedding into the 128-qubit Rainier graph (Vesuvius is the same, just more qubits).

The short version is that an embedding is a map from the variables of the problem you wish to solve to the physical qubits in a processor, where the map can be one-to-many (each variable can be mapped to many physical qubits). To preserve the problem structure we strongly ‘lock together’ qubits corresponding to the same variable.

In the case of fully connected QUBOs like the ones we have here, it is known that you can always embed a fully connected graph with $K$ vertices into a Chimera graph with $(K-1)^2/2$ physical qubits — Rainier can embed a fully connected 17 variable graph, while Vesuvius can embed a fully connected 33 variable graph. Shown to the right is an embedding from this paper into Rainier, for solving a problem that computes Ramsey numbers; in the processor graph, qubits colored the same represent the same computational variable.

So one way we could use Vesuvius to solve the sparse coding QUBOs is to restrict $K$ to be 33 or less and embed these problems. However this is unsatisfactory for two (related) reasons. The first is that 33 dictionary atoms isn’t enough for what we ultimately want to do (sparse coding on big data sets). The second is that QUBOs generated by the procedure I’ve described are really easy for tabu search at that scale. For problems this small, tabu gives excellent performance with a per-problem timeout of about 10 milliseconds (about the same as the runtime for a single problem on Vesuvius), and since it can be run in the cloud, we can take advantage of massive parallelism as well. So even though Vesuvius is competitive on a problem-by-problem basis at this scale, when you gang up say 1,000 cores against it, Vesuvius loses (because there aren’t a thousand of them available… yet).

So this option, while we can do it, is out. At the stage we’re at now this approach can’t compete with cloud-enabled tabu. Maybe when we have a lot more qubits.

Solving sparse coding QUBOs using BlackBox

BlackBox is an algorithm developed at D-Wave. Here is a high level introduction to how it works. It is designed to solve problems where all we’re given is a black box that converts possible answers to binary optimization problems into real numbers denoting how good those possible answers are. For example, the configuration of an airplane wing could be specified as a bit string, and to know how ‘good’ that configuration was, we might need to actually construct that example and put it in a wind tunnel and measure it. Or maybe just doing a large-scale supercomputer simulation is enough. But the relationship between the settings of the binary variables and the quality of the answer in problems like this is not easily specified in a closed form, like we were able to do with the sparse coding QUBOs.

BlackBox is based on tabu search, but uses the hardware to generate a model of the objective function around each search point that expands the possibilities for next moves beyond single bit flips. This modelling and sampling from hardware at each tabu step increases the time per step, but decreases the number of steps required to reach some target value of the objective function. As the cost of evaluating the objective function goes up, the gain in making fewer ‘steps’ by making better moves at each tabu step goes up. However, if the objective function can be evaluated very quickly, tabu generally beats BlackBox, because without the additional cost of the modelling and hardware sampling step it can make many more guesses per unit time.

BlackBox can be applied to fully connected QUBOs of arbitrary size, which makes it preferable to embedding here, because we lose the restriction to small numbers of dictionary atoms. With BlackBox we can try any size of problem and see how it does.

We did this, and unfortunately BlackBox on Vesuvius is not competitive with cloud-enabled tabu search for any of the problem sizes we tried (which were, admittedly, still pretty small — up to 50 variables). I suspect that this will continue to hold, no matter how large these problems get, for the following reasons:

1. The inherently parallel nature of the sparse coding problem ($S$ independent QUBOs) means that we will always be up against multiple cores vs. a small number of Vesuvius processors. This factor can be significant — for a large problem with millions of data objects, this factor can easily be in the thousands or tens of thousands.
2. BlackBox is designed for objective functions that are really black boxes, so that there is no obvious way to attack the structure of the problem directly, and where it is very expensive to evaluate the objective function. This is not the case for these problems — they are QUBOs and this means that attacks can be made directly based on this known fact. For these problems, the current version of BlackBox, while it can certainly be used, is not in its sweet spot, and wouldn’t be expected to be competitive with tabu in the cloud.

And this is exactly what we find — BlackBox on Vesuvius is not competitive with tabu on the cloud for any of the problem sizes we tried. Note that there is a small caveat here — it is possible (although I think unlikely) that for very large numbers of atoms (say low thousands) this could change, and BlackBox could start winning. However for both of the reasons listed above I would bet against this.

What to do, what to do

We tried both obvious tactics for using our gear to solve these problems, and both lost to a superior classical approach. So do we give up and go home? Of course not!

We shall go on to the end… we shall never surrender!!!

We just need to do some mental gymnastics here and be creative.

In both of the approaches above, we tried to shoehorn the problem our application generates into the hardware. Neither solution was effective.

So let’s look at this from a different perspective. Is it possible to restrict the problems generated by sparse coding so that they exactly fit in hardware — so that we require the problems generated to exactly match the hardware graph? If we can achieve this, we may be able to beat the classical competition, as we know that Vesuvius is many orders of magnitude faster than anything that exists on earth for the native problems it’s solving.

# Sparse coding on D-Wave hardware: some results

Last week I described two variants of sparse coding. One, which is commonly used, attempts to find a dictionary of atoms which can be used to sparsely reconstruct data, where the reconstructions are linear combinations of these atoms multiplied by real numbers, and the regularization term has the L1-norm form. We called this L1-norm sparse coding. I described the way we solve the optimization problem underlying this method.

The second approach is generally not used, and I believe very little is known about it. The procedure is identical to the usual approach except for one small change. Instead of real numbers, we attempt to reconstruct the data using a linear combination of dictionary atoms where the weights are binary, and we use L0-norm regularization. We called this L0-norm sparse coding.

To solve the L0-norm sparse coding optimization problem, we use the same procedure — called block coordinate descent — that was successfully used for the L1 version. The difference is that in the L1 version the optimization over the weights can be done efficiently, while in the L0 version that optimization is NP-hard. The problem separates into $S$ independent problems of finding the optimal weights for each of the $S$ objects in our data set. These problems are of the form

Find $\vec{w}$ that minimizes

$G(\vec{w}; \lambda) = \sum_{j=1}^{K} w_j [ \lambda + \vec{d}_j \cdot (\vec{d}_j -2 \vec{z}) ] + 2 \sum_{j < m}^K w_j w_m \vec{d}_j \cdot \vec{d}_m$

which is a QUBO. So to perform the optimization over the weights, we need to solve $S$ independent QUBOs at each iteration of the block descent. There are many ways to do this, including using D-Wave hardware. The results I’ll show below were obtained using a variant of tabu search.

Here I’m going to show some interesting results on some differences between these two approaches on MNIST.

Some initial results

Sparse coding attempts to minimize the reconstruction error, which is the total summed difference between the initial data — the ground truth — and the system’s reconstructions of this data, using only linear combinations of the dictionary atoms. It does this subject to a condition which penalizes including too many of these atoms in any given reconstruction — this is the regularization term.

As the strength of the regularization increases — which means $\lambda$ is made bigger — fewer and fewer atoms can be used to reconstruct each data object. Here we define the average sparsity to be the average number of dictionary atoms used in the reconstructions.

For many reasons, it is desirable to have as low an average sparsity as we can get away with. Now what ‘we can get away with’ is highly context dependent. What we decide this means depends on what we are ultimately attempting to do. One of the ways we can decide what this means is to use these ideas for lossy compression – we want to find a way to represent our data set with as few bits as we can, and are willing to accept some user-defined amount of degradation in our data to achieve this. To do this, we attempt to do the following:

Find the value of $\lambda$ that minimizes average sparsity, subject to the requirement that the reconstruction error be lower than a user supplied threshold.

One of the interesting initial results of comparing the L0 and L1-norm versions of sparse coding is that the L0 version gives much sparser representations for the same reconstruction error. Here is a plot of the total reconstruction error over $S = 60,000$ MNIST training images as a function of the average number of atoms per image used in these reconstructions, for both L1 and L0-norm sparse coding. In this experiment we used $K = 64$ dictionary atoms, and the data objects were representations of the raw MNIST images including the first 30 SVD modes (so each image was represented in its ground truth using 30 real numbers). Each data point corresponds to a different value of $\lambda$, where the points to the left have larger $\lambda$ (more sparse) with $\lambda$ decreasing to the right (less sparse).

This is a very cool result. For the same reconstruction error, the L0 version requires roughly one half the number of atoms. In addition, the L1 version’s weights are real numbers, which require several bits to specify (up to 64, but likely in practice 8 would be enough), whereas by construction the L0 version’s weights are binary. So if our objective is compression, L0 wins both from the perspective of number of atoms required and the number of bits per atom.

If we represent each L1 weight using 8 bits, and recall that the L0 version needs only about half as many atoms to reach the same reconstruction quality, then for the figure above L0 achieves roughly 16x more compression than L1 for the same reconstruction quality.

What a dictionary looks like

Once the sparse coding procedure is complete, we can look at the dictionary atoms. Shown here are the dictionary atoms found during an L0 run with $\lambda = 0.035$, which gave an average sparsity of 2.25 atoms / image. In the figure, the 64 dictionary atoms are shown in the top part. The bottom part shows the ground truth for the first 48 images of the 60,000 total. The middle part shows the reconstructions, and which dictionary atoms were used in these reconstructions. If more than two atoms were used, the notation is >>#Y, where Y is the number of atoms.

Next, let’s look at a close-up showing how the reconstructions work for three of the images. The way to understand the image below is that the atoms (on the right) are added together, and what this addition gives is the reconstruction (second column from left), which we can compare to the ground truth (leftmost column). One technical detail is that prior to doing the sparse coding procedure, we subtracted off the “average” value of the pixels over the entire 60,000 image data set, which gets added back here. So the reconstructions are always ‘average image’ + sum over the atoms in the reconstruction. If you look at the middle one (the nine), the difference between atom #3 and the reconstruction of the nine is the average image.

Next, let’s look at a dictionary obtained from the L1 procedure, with $\lambda = 0.25$ chosen to give roughly the same average sparsity as above (here the average number of atoms / image was 2.16). You can see with your eye that the jump in reconstruction error (from about 4,000 for L0 to about 6,500 for L1 in the first graph above) makes for poor reconstructions. Note that the L1 procedure doesn’t always give poor reconstructions — they can be quite excellent. You just need more atoms on average to get there.

This is great, but where does the hardware fit in?

There is some debate about whether L0-norm sparse coding actually buys you anything over the L1 version in practice. While anecdotal results such as the above really don’t do much to settle this issue, it is nonetheless encouraging to see that recasting this basic workhorse procedure in this way does seem to provide a significant benefit — at least for compressing MNIST.

Of course the reason we’ve been thinking so hard about this issue is that we want to find problems we can apply quantum computation to. Given that the basic L0-norm sparse coding procedure seems to provide intriguing benefits over the L1 version, the second order question is whether quantum computers can be used to solve the optimization problems underlying this version more effectively than you could otherwise do. This will be the subject of my next post.

# Sparse coding on D-Wave hardware: finding weights is a QUBO

In my last post, I set up an optimization problem that we need to solve to do a particularly interesting kind of sparse coding. Solving this problem requires several steps. Some of these are easy and some are not. In this post I’ll tease out the core hard bit, and show that it’s of the kind that D-Wave hardware solves, which goes by several different names, including Quadratic Unconstrained Binary Optimization (QUBO), the Ising model, and weighted max-2-sat.

Warning: mathematics ahead is OT Level VII. It does get more gnarly than this but not by much.


Here is the optimization problem we came up with in the previous post.

L0-norm sparse coding

Given

1. A set of $S$ data objects $\vec{z}_s$, where each $\vec{z}_s$ is a real valued vector with $N$ components;
2. An $N \times K$ real valued matrix $\hat{D}$, where $K$ is the number of dictionary atoms we choose, and we define its $k^{th}$ column to be the vector $\vec{d}_k$;
3. A $K \times S$ binary valued matrix $\hat{W}$;
4. And a real number $\lambda$, which is called the regularization parameter,

Find $\hat{W}$ and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{K} w_{ks} \vec{d}_k ||^2 + \lambda \sum_{s=1}^S \sum_{k=1}^{K} w_{ks}$

L1-norm sparse coding

The L0 version of sparse coding differs from the way most people do sparse coding. Standard sparse coding assumes the weights $\hat{W}$ are real numbers, and the optimization problem is:

Find $\hat{W}$ and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{K} w_{ks} \vec{d}_k ||^2 + \lambda \sum_{s=1}^S \sum_{k=1}^{K} |w_{ks}|$

This looks very similar. The only difference is that all the entries in $\hat{W}$ are real, and the regularization term is now a sum over the absolute values of those entries. The real weight version is called L1-norm sparse coding, and the binary weight version is called L0-norm sparse coding.

You can think of the L0 version as only being able to either add or not add dictionary atoms to form a reconstruction, whereas the L1 version can add arbitrary “amounts” of the atoms.

Both the L0- and L1-norm sparse coding versions can be solved using the same procedure. Here it is.

Solving the sparse coding problem using block coordinate descent

1. First, we fix the weights $\hat{W}$ to some random initial configurations.
2. Assuming these fixed weights, we solve the optimization problem for the dictionary atoms $\hat{D}$.
3. Now we fix the dictionary atoms to these values, and find the optimal weights $\hat{W}$.

We then iterate steps 2 and 3 until $G$ converges to a minimum. Each sub-step can be solved efficiently in the L1 case, but the joint problem is non-convex for both versions, so in general the minimum you reach is a local one. Note that this basic procedure works for both versions. In step 1, the only difference is that the initial configurations have to be either real or binary. In step 2, the optimization over the real valued $\hat{D}$ is identical for both versions — the algorithm we use is the one described here for finding the dictionary atoms. This algorithm is fast. It’s in step 3 where we find a very big difference in the difficulty of finding the optimal weights, given a dictionary.
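As a sketch of the control flow, here is the skeleton of the block coordinate descent loop; `solve_dictionary` and `solve_weights` are stand-ins for steps 2 and 3, which are described below and in the references rather than spelled out here.

```python
import numpy as np

def sparse_code(Z, K, lam, iters=50, seed=0):
    """Skeleton of the block coordinate descent loop described above.

    solve_dictionary and solve_weights are placeholders for steps 2 and 3;
    they are not defined here.
    """
    rng = np.random.default_rng(seed)
    N, S = Z.shape
    W = rng.integers(0, 2, size=(K, S))     # step 1: random initial weights (binary, the L0 case)
    for _ in range(iters):
        D = solve_dictionary(Z, W)          # step 2: optimal dictionary given the weights
        W = solve_weights(Z, D, lam)        # step 3: optimal weights given the dictionary
    return D, W
```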

Finding optimal weights, given a fixed dictionary

In the case where the weights are real, and the regularization is of the L1 form (sum over absolute values), we use the Feature Sign Search (FSS) algorithm described here. This algorithm is quite fast, and it has to be, because there are a total of $S$ of these problems to solve to complete the step, and $S$ can be very large. An interesting observation is that all of these optimization problems are independent, and can therefore be efficiently parallelized. To perform this parallelization, we use the PiCloud python libraries, which allow us to run hundreds or thousands of parallel jobs to perform the optimization over the weights. As a rough estimate, for the optimization problems generated by MNIST, each optimization using FSS takes about 30 milliseconds, and there are 60,000 of these per iteration of the block descent procedure. Run serially, this is about 30 minutes per iteration. If we use 100 cores, we can send 600 jobs to each core and get about a 100x speed-up, taking the time down to about 20 seconds.
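PiCloud handled the fan-out for us, but the same pattern works with any worker pool. Here is a local sketch using the standard library, with `solve_weights_for_object` standing in for FSS (or a QUBO heuristic in the L0 case):

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def solve_all_weights(data_columns, D, lam, workers=8):
    """Fan the S independent per-object weight problems out to a worker pool."""
    task = partial(solve_weights_for_object, D=D, lam=lam)  # stand-in per-object solver
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, data_columns))
```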

As an interesting aside, we find that our own Python implementation of FSS is about the same in terms of performance as the original MATLAB code provided by Honglak Lee. This was a little surprising as the core computations run in highly optimized compiled code inside MATLAB. This is evidence that the routines within numpy are competitive with MATLAB’s versions for the core FSS computations.

Now if we shift over to the L0 version, we have a different kind of optimization problem to solve. When the weights are bits, the problem we need to solve is NP-hard, and to do anything practical we’re going to need to use a heuristic solver. As in the case of the L1 version, we also get $S$ independent problems. So let’s just focus on one of these. If the dictionary is fixed, the problem we need to solve is:

Find $\vec{w}$ that minimizes

$G(\vec{w}; \lambda) = || \vec{z} - \sum_{k=1}^{K} w_{k} \vec{d}_k ||^2 + \lambda \sum_{k=1}^{K} w_{k}$

Expanding this out, and dropping constant terms, this can be rewritten as

$G(\vec{w}; \lambda) = \sum_{k=1}^{K} w_k [ \lambda + \vec{d}_k \cdot (\vec{d}_k -2 \vec{z}) ] + 2 \sum_{j < m}^K w_j w_m \vec{d}_j \cdot \vec{d}_m$
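To see this, expand the square and use the fact that $w_k^2 = w_k$ for binary weights:

$|| \vec{z} - \sum_{k=1}^{K} w_{k} \vec{d}_k ||^2 = \vec{z} \cdot \vec{z} - 2 \sum_{k=1}^{K} w_k \, \vec{d}_k \cdot \vec{z} + \sum_{k=1}^{K} w_k \, \vec{d}_k \cdot \vec{d}_k + 2 \sum_{j < m}^{K} w_j w_m \, \vec{d}_j \cdot \vec{d}_m$

The $\vec{z} \cdot \vec{z}$ term doesn’t depend on $\vec{w}$ and can be dropped, which leaves the form above.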

This is a QUBO.
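In matrix form, here is a sketch of how the QUBO for a single data object could be assembled (upper-triangular convention, so each pair is counted once):

```python
import numpy as np

def weights_qubo(D, z, lam):
    """Assemble the K x K upper-triangular QUBO matrix for one data object.

    Diagonal entries hold lambda + d_k.(d_k - 2 z); entries above the diagonal
    hold 2 d_j.d_m. Minimizing w^T Q w over binary w is then the per-object
    weight problem.
    """
    G = D.T @ D                              # Gram matrix of the dictionary atoms
    Q = np.triu(2.0 * G, k=1)                # couplings for j < m
    np.fill_diagonal(Q, lam + np.diag(G) - 2.0 * (D.T @ z))
    return Q
```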

Solving these QUBOs

This is Qubert. QUBOs are different.

There are several potential solver options we could use here, including running in D-Wave hardware. D-Wave processors are designed to solve QUBOs, and can natively solve the types of optimization problem described here. There are lots of other options also. Of all the conventional options we looked at, the solver that worked best was based on the tabu search algorithm.

So what we’ve accomplished here is to reduce the key step in the L0-norm sparse coding procedure to a set of QUBOs. In the next post, I’ll show some preliminary results of the comparative performance of the L0 and L1 versions on MNIST, show dictionaries generated by both, and introduce a variant that makes more direct use of D-Wave hardware.

# Sparse coding on D-Wave hardware: setting up the problem

Sparse coding is a very interesting idea that we’ve been experimenting with. It is a way to find ‘maximally repeating patterns’ in data, and use these as a basis for representing that data.

Some of what’s going to follow here is quite technical. However these are beautiful ideas. They are probably related to how human perception and cognition functions. I think sparse coding is much more interesting and important than, say, the Higgs Boson.

Everything starts from data

Twenty five data objects. Each is a 28×28 pixel greyscale image.

Sparse coding requires data. You can think of ‘data’ as a (usually large) set of objects, where each object can be represented by a list of real numbers. As an example, we’ll use the somewhat pathological MNIST handwritten digits data set. But you can use any dataset you can imagine. Each data object in this set is a small (28×28 pixel) greyscale image of a handwritten digit. A 28×28 pixel greyscale image can be represented by 28×28 = 784 numbers, each in the range 0..255. The training set has 60,000 of these.

We can represent the entire data set using a two-dimensional array, where there are 60,000 columns (one per image), and 784 rows (one for each pixel). Let’s call this array $\hat{Z}_0$.

Technical detail: this thing is bigger than it has to be

What the first few MNIST images look like, keeping an increasingly large number of SVD modes.

One thing you may notice about MNIST is that each image kind of looks mostly similar. In fact we can exploit this to get a quick compression of the data. The trick we will use is called Singular Value Decomposition (SVD). SVD quickly finds a representation of the data that allows you to reduce its dimensionality. In the case of MNIST, it turns out that instead of using 784 pixel values, we can get away with using around 30 SVD modes, with only a small degradation in image quality. If we perform an SVD on $\hat{Z}_0$ and keep only the parts corresponding to the largest 30 singular values, we get a new matrix $\hat{Z}$ which still has 60,000 columns, but only 30 rows. Let’s call the $s^{th}$ column vector $\vec{z}_s$, which stores the $s^{th}$ compressed image. This trick, and others related to it, can always be used to pre-process any raw data we have.
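Here is a minimal numpy sketch of this preprocessing step, where `Z0` is assumed to be the 784 × 60,000 pixel array described above:

```python
import numpy as np

def svd_compress(Z0, r=30):
    """Keep the leading r SVD modes of the data matrix Z0 (pixels x images).

    Returns the r x S compressed representation and the basis needed to map
    compressed columns back into pixel space.
    """
    U, s, Vt = np.linalg.svd(Z0, full_matrices=False)
    Z = np.diag(s[:r]) @ Vt[:r, :]   # r x S compressed images
    return Z, U[:, :r]

# usage: Z, U_r = svd_compress(Z0); pixel-space reconstruction of image s: U_r @ Z[:, s]
```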

The Dictionary

Let’s now create a small number — let’s say 32 — of very special images. The exact number of these actually matters quite a bit (it’s an example of something called a hyperparameter, many of which lurk in machine learning algorithms and are generally a giant pain in the ass), but the most important thing is that it should be larger than the dimension of the data (which in this case is 30). When this is the case, it’s called an overcomplete basis, which is good for reasons you can read about here.

These images will be in the exact same format as the data we’re learning over. We’ll put these in a new two dimensional array, which will have 32 columns (one for each image) and 30 rows (the same as the post-SVD compressed images above). We’ll call this array $\hat{D}$ the dictionary. Its columns are dictionary atoms. The $j^{th}$ dictionary atom is the column vector $\vec{d}_j$. These dictionary atoms will store the ‘maximally repeating patterns’ in our data that are what we’re looking for.

At this stage, we don’t know what they are — we need to learn them from the data.

The sparse coding problem

Now that we have a data array, and a placeholder array for our dictionary, we are ready to state the sparse coding problem.

Now to see how all this works, imagine we want to reconstruct an image (say $\vec{z}_s$) with our dictionary atoms. Imagine we can only either include or exclude each atom, and the reconstruction is a linear combination of the atoms. Furthermore, we want the reconstruction to be sparse, which means we only want to turn on a small number of our atoms. We can formalize this by asking for the solution of an optimization problem.

L0-norm sparse coding

Define a vector of binary (0/1) numbers $\vec{w}$ of length 32. Now solve this problem:

Find $\vec{w}$ and $\hat{D} = [\vec{d}_1 \vec{d}_2... \vec{d}_{31} \vec{d}_{32}]$ (remember the vectors $\vec{d}_k$ are column vectors in the matrix $\hat{D}$) that minimize

$G(\vec{w}, \hat{D} ; \lambda) = || \vec{z}_s - \sum_{k=1}^{32} w_k \vec{d}_k ||^2 + \lambda \sum_{k=1}^{32} w_k$

The real number $\lambda$ is called a regularization parameter (another one of those hyperparameters). The larger this number is, the bigger the penalty for adding more dictionary atoms to the reconstruction — the rightmost term counts the number of atoms used in the reconstruction. The first term is a measure of the reconstruction error. Minimizing this means minimizing the distance between the data (sometimes called the ground truth) $\vec{z}_s$ and the reconstruction of the image $\sum_{k=1}^{32} w_k \vec{d}_k$.
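For a single image, the two competing terms can be written down directly; here is a minimal numpy sketch, where `w` is a candidate binary vector, `D` holds the atoms as columns, and `z` is one compressed image:

```python
import numpy as np

def G_single(w, D, z, lam):
    """Reconstruction error plus lambda times the number of atoms switched on."""
    return np.sum((z - D @ w) ** 2) + lam * np.sum(w)
```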

[Note that this prescription for sparse coding is different from the one typically used, where the weights $\vec{w}$ are real valued and the regularization term is of the L1 (sum over absolute values of the weights) form.]

You may see a simple way to globally optimize this. All you have to do is set $\vec{d}_1 = \vec{z}_s$, $\vec{d}_k = 0$ for all the other k, $w_1 = 1$ and $w_k = 0$ for all the other k — in other words, store the image in one of the dictionary atoms and only turn that one on to reconstruct. Then the reconstruction is perfect, and you only used one dictionary atom to do it!

OK well that’s useless. But now say we sum over all the images in our data set (in the case of MNIST this is 60,000 images). Now instead of the number of images being less than the number of dictionary atoms, it’s much, much larger. The trick of just ‘memorizing’ the images in the dictionary won’t work any more.

Now over all the data

Let’s call the total number of data objects $S$. In our MNIST case $S = 60,000$. We now need to find $\hat{W}$ (this is a $32 \times S$ array) and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{32} w_{ks} \vec{d}_k ||^2 + \lambda \sum_{s=1}^S \sum_{k=1}^{32} w_{ks}$

This is the full sparse coding prescription — in the next post I’ll describe how to solve it, what the results look like, and introduce a bunch of ways to make good use of the dictionary we’ve just learned!