New Nature Communications paper and a bonus NPR interview

Baths can be beneficial, even for qubits.

One of the most important things to understand about real quantum computers is how they behave in the presence of environments. Physicists sometimes call these environments ‘baths’. I like this term because it’s really evocative of what’s physically going on. You can imagine any quantum system you’re building as always being ‘bathed’ in the glow of these environments.

It’s a very interesting fact that you can never get away from these baths, even in principle. No object in our universe — as far as we know — can be completely isolated from the rest of the universe. As Lawrence Krauss so eloquently describes, even ‘nothing’ is something.

Even if we were to build a quantum computer in the depths of interstellar space, and cool it to zero temperature, it would still be bathed in a bath formed of the virtual particles that boil and seethe in the fabric of space-time itself. There is no escape from our connections to the physical universe.

A Lovecraftian aside that has nothing to do with the paper or the NPR interview

By the way you Lovecraft fans out there — here is a famous bit from The Dream-Quest of Unknown Kadath:

[O]utside the ordered universe [is] that amorphous blight of nethermost confusion which blasphemes and bubbles at the center of all infinity—the boundless daemon sultan Azathoth, whose name no lips dare speak aloud, and who gnaws hungrily in inconceivable, unlighted chambers beyond time and space amidst the muffled, maddening beating of vile drums and the thin monotonous whine of accursed flutes.

“Azathoth has existed since the universe began. He dwells outside normal time and space. He is blind, idiotic, and indifferent.” Now go watch Krauss describe “Something from Nothing” and tell me they’re not talking about the same thing!

Lovecraft had an uncanny ability to grok modern concepts from physics and weave them into his stories. His descriptions of Azathoth, and the physics underlying Krauss’ explanations of what seems to be physically occurring deep inside the fabric of spacetime, are just too close to not point out. Of course they use different language. But think carefully about the context in which these ideas are being delivered. (Am I stretching to make a connection between Krauss’ something that lives in nothing and Lovecraft’s description of Azathoth? Definitely. But I think it’s interesting to consider how these two descriptions might not be incompatible.)

Back to qubits and baths

Anyway back to qubits and baths. This is not just fascinating science (although it is that). It is also a fundamentally important issue in constructing computing machines that harness quantum mechanics. Because all quantum systems MUST live in baths, it’s extremely important to understand in detail how these baths affect their behavior.

Not so long ago, it was suspected that these baths would always destroy the curious properties of quantum mechanics in large objects. But this turned out not to be true. The first large objects in which quantum behavior persisted even in the presence of really big and hot baths were loops of superconducting metal — the great-great-great-grandparents of our qubits.

Now the question of what effect these baths really have on large collections of large objects is being debated, and goes to the heart of many of the technical issues in building useful quantum computers.

The paper that just published

The paper that just published is called Thermally assisted quantum annealing of a 16-qubit problem.

It describes what I believe to be a key result in advancing this understanding. It looks very carefully at what happens to a quantum system in the presence of a bath, where both the quantum system and the bath have been exquisitely characterized. As was the case when macroscopic quantum coherence was first observed, the results are counter-intuitive.

Here is the abstract from the paper.

Efforts to develop useful quantum computers have been blocked primarily by environmental noise. Quantum annealing is a scheme of quantum computation that is predicted to be more robust against noise, because despite the thermal environment mixing the system’s state in the energy basis, the system partially retains coherence in the computational basis, and hence is able to establish well-defined eigenstates. Here we examine the environment’s effect on quantum annealing using 16 qubits of a superconducting quantum processor. For a problem instance with an isolated small-gap anticrossing between the lowest two energy levels, we experimentally demonstrate that, even with annealing times eight orders of magnitude longer than the predicted single-qubit decoherence time, the probabilities of performing a successful computation are similar to those expected for a fully coherent system. Moreover, for the problem studied, we show that quantum annealing can take advantage of a thermal environment to achieve a speedup factor of up to 1,000 over a closed system.

The key result is that for the specific type of bath acting on a real processor, the quantum effects required for quantum computation can successfully be tapped by protecting them in a specific way. Specifically — and this is a point that has caused much confusion — the decoherence time of the individual qubits, which is the time to decohere in the energy basis, does not set the timescale for losing quantum coherence in the measurement basis. Quantum coherence in the measurement basis (which is the resource tapped in this approach) is an equilibrium property of the system, as long as the bath is not so big and hot that well-defined energy eigenstates disappear.

While the paper is primarily an experimental paper, the theory underlying all of this is very satisfactory in my view. Mohammad and his collaborators have developed a very good theoretical understanding of what really happens in real open quantum systems, and the agreement between these models and what is seen in the lab is striking.

So congratulations to all on this result.

The NPR interview and my proudness at working ‘meatiest’ into a national radio program

On a mostly unrelated note, here is a radio piece that Geoff Brumfiel of NPR did recently. It is of note because I managed to work the word ‘meatiest’ into the discussion, of which I am understandably quite proud.

The Google / NASA Quantum Artificial Intelligence Lab

Update 20/05/2013: Here is how you can apply for time on the system. Exciting!

Update 16/05/2013: Here is some press coverage of the announcement.

When D-Wave was founded in 1999, our objective was to build the world’s first useful quantum computer.

The way I thought about it was that we’d have succeeded if: (a) someone bought one for more than \$10M; (b) it was clearly using quantum mechanics to do its thing; and (c) it was better at something than any other option available. Now all of these have been accomplished, and the original objectives that we’d set for ourselves have all been met.

A historic shot? Hartmut and friends at QIP-2010.

As the hardware matured, we began exploring ways to use its special capabilities. One of the first people I met who was also interested in this problem was Dr. Hartmut Neven, who works at Google. Hartmut is a world leading expert in computer vision, and believed that there might be a role for our technology in computer vision and more generally machine learning.

Machine learning is an important subfield of artificial intelligence. While it is very difficult to even define what intelligence is (there are even more definitions than for quantum computers), one thing that is pretty much universally recognized is that anything we’d call intelligent must be able to learn. Trying to understand how learning from experience works has driven a lot of progress in understanding how human perception and cognition might work.

The Quantum Artificial Intelligence Lab’s mandate is to bring the world’s best machine learning experts together with the world’s most advanced quantum computers, and perform thousands of experiments to explore to what extent machine intelligence and cognition can be advanced by using these new types of computers.

The quest to understand intelligence is one of the most interesting and important challenges that humanity has ever faced. It is a daunting problem. But so was building quantum computers, or even conventional computers for that matter. I believe we can apply the same principles we used to solve the quantum computing problem to the (much harder) problem of understanding how intelligence works.

First ever head to head win in speed for a quantum computer

Update 5/9/2013:

Update 5/15/2013:

• An update from the conference from Cathy: “Computing Frontiers 2013 Best Paper Award: Experimental Evaluation of an Adiabatic Quantum Computation System for Combinatorial Optimization, by McGeoch and Wang.” Congratulations to Cathy and Carrie!

AMHERST, Mass. – A computer science professor at Amherst College who recently devised and conducted experiments to test the speed of a quantum computing system against conventional computing methods will soon be presenting a paper with her verdict: quantum computing is, “in some cases, really, really fast.”

“Ours is the first paper to my knowledge that compares the quantum approach to conventional methods using the same set of problems,” says Catherine McGeoch, the Beitzel Professor in Technology and Society (Computer Science) at Amherst. “I’m not claiming that this is the last word, but it’s a first word, a start in trying to sort out what it can do and can’t do.”

The quantum computer system she was testing, produced by D-Wave just outside Vancouver, BC, has a thumbnail-sized chip that is stored in a dilution refrigerator within a shielded cabinet, cooled to near absolute zero (0.02 kelvin) in order to perform its calculations. Whereas conventional computing is binary, 1s and 0s get mashed up in quantum computing, and within that super-cooled (and non-observable) state of flux, a lightning-quick logic takes place, capable of solving problems thousands of times faster than conventional computing methods can, according to her findings.

Sparse coding on D-Wave hardware: structured dictionaries

The underlying problem we saw last time, which prevented us from using the hardware to compete with tabu on the cloud, was the mismatch between the connectivity of the problems sparse coding generates (they are fully connected) and the connectivity of the hardware.

The source of this mismatch is the quadratic term in the objective function, which is of the form $2 \sum_{j < m}^{K} w_j w_m \vec{d}_j \cdot \vec{d}_m$. The coupling terms are proportional to the dot products of the dictionary atoms.

Here’s an idea. What if we demand that $\vec{d}_j \cdot \vec{d}_m$ has to be zero for all pairs of variables $j$ and $m$ that are not connected in hardware? If we can achieve this structure in the dictionary, we get a very interesting result. Instead of being fully connected, the QUBOs with this restriction can be engineered to exactly match the underlying problem the hardware solves. If we can do this, we get closer to using the full power of the hardware.

L0-norm sparse coding with structured dictionaries

Here is the idea.

Given

1. A set of $S$ data objects $\vec{z}_s$, where each $\vec{z}_s$ is a real valued vector with $N$ components;
2. An $N \times K$ real valued matrix $\hat{D}$, where $K$ is the number of dictionary atoms we choose, and we define its $k^{th}$ column to be the vector $\vec{d}_k$;
3. A $K \times S$ binary valued matrix $\hat{W}$, with entries $w_{js}$;
4. And a real number $\lambda$, which is called the regularization parameter,

Find $\hat{W}$ and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{j=1}^{K} w_{js} \vec{d}_j ||^2 + \lambda \sum_{s=1}^S \sum_{j=1}^{K} w_{js}$

subject to the constraints that $\vec{d}_j \cdot \vec{d}_m = 0$ for all pairs $j,m$ that are not connected in the quantum chip being used.

The only difference here from what we did before is the last sentence, where we add a set of constraints on the dictionary atoms.
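Since everything downstream depends on this objective, it helps to have it in executable form. Here is a minimal numpy sketch (my own convention, not code from any paper) that evaluates $G$, with the data vectors as the columns of `Z`; it ignores the orthogonality constraints, which only restrict the feasible set:

```python
import numpy as np

def sparse_coding_objective(Z, D, W, lam):
    """Evaluate G(W, D; lambda) = sum_s ||z_s - sum_j w_js d_j||^2 + lam * sum_js w_js.

    Z : (N, S) data vectors as columns
    D : (N, K) dictionary atoms as columns
    W : (K, S) binary weights
    """
    residual = Z - D @ W              # (N, S) reconstruction error, one column per data object
    return np.sum(residual ** 2) + lam * np.sum(W)

# Tiny made-up example (N = 3, K = 2, S = 2) where reconstruction is perfect,
# so only the sparsity penalty contributes: 0 + 0.1 * 2 = 0.2
Z = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
D = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
W = np.array([[1, 0], [0, 1]])
print(sparse_coding_objective(Z, D, W, lam=0.1))
```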

Solving the sparse coding problem using block coordinate descent

We’re going to use the same strategy for solving this as before, with a slight change. Here is the strategy we’ll use.

1. First, we generate a random dictionary $\hat{D}$, subject to meeting the orthogonality constraints we’ve imposed on the dictionary atoms.
2. Holding this dictionary fixed, we solve the optimization problem for the weights $\hat{W}$. These optimization problems are now Chimera-structured QUBOs that fit exactly onto the hardware by construction.
3. Now we fix the weights to these values, and find the optimal dictionary $\hat{D}$, again subject to our constraints.

We then iterate steps 2 and 3 until $G$ converges to a minimum.
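The alternating scheme above can be sketched end to end. This toy version is illustrative only: it brute-forces each weight QUBO (feasible only for tiny $K$), and its dictionary update is plain least squares, ignoring the orthogonality constraints the real Step 3 must enforce; all sizes are made up:

```python
import itertools
import numpy as np

def solve_w_bruteforce(Z, D, lam):
    """Step 2: for each data object, minimize ||z_s - D w||^2 + lam * sum(w)
    by exhaustive search over w in {0,1}^K (only feasible for tiny K)."""
    K, S = D.shape[1], Z.shape[1]
    W = np.zeros((K, S))
    for s in range(S):
        best_w, best_g = None, np.inf
        for bits in itertools.product([0, 1], repeat=K):
            w = np.array(bits, dtype=float)
            g = np.sum((Z[:, s] - D @ w) ** 2) + lam * w.sum()
            if g < best_g:
                best_w, best_g = w, g
        W[:, s] = best_w
    return W

def solve_d_least_squares(Z, W):
    """Step 3 stand-in: minimize ||Z - D W||_F^2 over D, with no
    orthogonality constraints (the real Step 3 must add them)."""
    return Z @ np.linalg.pinv(W)

rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 5))   # S = 5 data objects with N = 8 components
D = rng.standard_normal((8, 3))   # K = 3 dictionary atoms
lam = 0.1
for _ in range(5):                # iterate Steps 2 and 3
    W = solve_w_bruteforce(Z, D, lam)
    D = solve_d_least_squares(Z, W)
g = np.sum((Z - D @ W) ** 2) + lam * W.sum()
```

Each pass can only lower (or hold) the objective, which is why iterating the two steps converges to a (local) minimum of $G$.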

Now we’re in a different regime than before — step 2 requires the solution of a large number of Chimera-structured QUBOs, not fully connected QUBOs. So that makes those problems better fits to the hardware. But now we have to do some new things to allow for both steps 1 and 3, and these steps have some cost.

The first of these is not too hard, and introduces a key concept we’ll use for Step 3 (which is harder). In this post I’ll go over how to do Step 1.

Step 1: Setting up an initial random dictionary that obeys our constraints

Alright so the first step we need to do is to figure out under what conditions we can achieve Step 1.

There is a very interesting result in a paper called Orthogonal Representations and Connectivity of Graphs. Here is a short explanation of the result.

Imagine you have a graph on $V$ vertices, where each vertex is connected to some of the others. Call $p$ the connectivity (degree) of the least connected vertex in the graph. Then this paper proves that you can assign real vectors in dimension $V - p$ to the vertices such that non-adjacent vertices receive orthogonal vectors.

So what we want to do — find a random dictionary $\hat{D}$ such that $\vec{d}_j \cdot \vec{d}_m = 0$ for all $j, m$ not connected in hardware — can be done if the dimension of the vectors $\vec{d}$ is at least $V - p$.

For Vesuvius, the number $V$ is 512, and the lowest connectivity node in a Chimera graph has $p = 5$. So as long as the dimension of the dictionary atoms is at least $512 - 5 = 507$, we can always perform Step 1.
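Those numbers are easy to verify directly. The sketch below builds an 8 by 8 Chimera tiling of the $K_{4,4}$ unit cell (the indexing convention is my own: one side of each cell couples vertically to neighbouring cells, the other horizontally) and checks that $V = 512$ and $p = 5$:

```python
from collections import defaultdict

def chimera_edges(M):
    """Edge list of an M x M Chimera graph of K_{4,4} unit cells.
    A qubit is (row, col, side, k): side-0 qubits couple vertically to the
    neighbouring cells, side-1 qubits horizontally; k runs from 0 to 3."""
    edges = []
    for i in range(M):
        for j in range(M):
            for k in range(4):
                for kk in range(4):
                    edges.append(((i, j, 0, k), (i, j, 1, kk)))      # intra-cell K_{4,4}
            for k in range(4):
                if i + 1 < M:
                    edges.append(((i, j, 0, k), (i + 1, j, 0, k)))   # inter-cell, vertical
                if j + 1 < M:
                    edges.append(((i, j, 1, k), (i, j + 1, 1, k)))   # inter-cell, horizontal
    return edges

degree = defaultdict(int)
for a, b in chimera_edges(8):    # Vesuvius-sized: 8 x 8 tiling
    degree[a] += 1
    degree[b] += 1

V, p = len(degree), min(degree.values())
print(V, p, V - p)  # 512 5 507
```

Interior qubits have degree 6 (four intra-cell couplers plus two inter-cell ones); only the boundary cells lose an inter-cell coupler, giving the minimum degree of 5.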

Here is a little more color on this very interesting result. Imagine you have to come up with two vectors $\vec{g}$ and $\vec{h}$ that are orthogonal (the dot product $\vec{g} \cdot \vec{h}$ is zero). What’s the minimum dimension these vectors have to live in such that this can be done? Well, imagine that they both live in one dimension — they are just numbers on a line. Then clearly you can’t do it. However if you have two dimensions, you can. Here’s an example: $\vec{g} = \hat{x}$ and $\vec{h} = \hat{y}$. If you have more than two dimensions, you can too, and the choices you make in this case are not unique.

More generally, if you ask the question “how many mutually orthogonal vectors can I draw in a $V$-dimensional space?”, the answer is $V$ — one vector per dimension. So that is a key piece of the above result. If we had a graph with $V$ vertices where NONE of the vertices were connected to any others (minimum vertex connectivity $p = 0$), and we want to assign vectors to the vertices such that all of them are orthogonal to all the others, the minimum dimension that works is $V$ — exactly what the $V - p$ formula gives.

Now imagine we start drawing edges between some of the vertices in the graph, and we don’t require that the vectors living on these vertices be orthogonal. Conceptually you can think of this as relaxing some constraints, and making it ‘easier’ to find the desired set of vectors — so the minimum dimension of the vectors required so that this will work is reduced as the graph gets more connected. The fascinating result here is the very simple way this works. Just find the lowest connectivity node in the graph, call its connectivity $p$, and then ask “given a graph on $V$ vertices, where the minimum connectivity vertex has connectivity $p$, what’s the minimum dimension of a set of vectors such that non-connected vertices in the graph are all assigned orthogonal vectors?”. The answer is $V - p$.

Null Space

Now just knowing we can do it isn’t enough. But thankfully it’s not hard to think of a constructive procedure to do this. Here is one:

1. Generate a matrix $\hat{D}$ where all entries are random numbers between +1 and -1.
2. Renormalize each column such that each column’s norm is one.
3. For each column in $\hat{D}$ from the leftmost to the rightmost in order, compute the null space of the columns to its left, and then replace that column with a random unit vector written in that null space basis.

If you do this you will get an initial random orthonormal basis as required in our new procedure.

By the way, here is some Python code for computing a null space basis for a matrix $\hat{A}$. It’s easy but there isn’t a native function in numpy or scipy that does it.

```python
import numpy
from scipy.linalg import qr

def nullspace_qr(A):
    """Return a basis for the null space of A, via the QR decomposition
    of A.T (this assumes A has full row rank)."""
    A = numpy.atleast_2d(A)
    Q, R = qr(A.T)
    # the trailing columns of Q span the null space of A
    ns = Q[:, R.shape[1]:].conj()
    return ns
```
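Putting the pieces together, here is a self-contained sketch of the whole Step 1 construction. It uses a Gram–Schmidt style projection rather than a QR null-space call, and it makes every pair of atoms orthogonal, which over-satisfies the constraints (only non-connected pairs actually need to be orthogonal):

```python
import numpy as np

def random_orthogonal_dictionary(N, K, seed=0):
    """N x K dictionary whose atoms are mutually orthonormal. Each new
    column is a random vector projected into the null space of the span
    of the previous columns, then renormalized. Requires K <= N."""
    rng = np.random.default_rng(seed)
    D = np.zeros((N, K))
    for k in range(K):
        v = rng.uniform(-1.0, 1.0, size=N)        # random entries between -1 and +1
        v -= D[:, :k] @ (D[:, :k].T @ v)          # project off the earlier atoms
        D[:, k] = v / np.linalg.norm(v)           # renormalize to unit length
    return D

D = random_orthogonal_dictionary(N=6, K=4)
print(np.round(D.T @ D, 10))   # ~ identity matrix: every pair of atoms is orthogonal
```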

OK so step 1 wasn’t too bad! Now we have to deal with step 3. This is a harder problem, which I’ll tackle in the next post.

Some new Rainier science

Here is a short break from the sparse coding mayhem. A recent paper by some interesting folks appeared today on the arXiv. They ran some experiments on the Rainier-based system at USC.

Here is some of what they found:

Our experiments have demonstrated that quantum annealing with more than one hundred qubits takes place in the D-Wave One device… the device has sufficient ground state quantum coherence to realise a quantum annealing of a transverse field Ising model.

Here is a link to the arXiv paper.

Sparse coding on D-Wave hardware: things that don’t work

Ice, ice baby.

For Christmas this year, my dad bought me a book called Endurance: Shackleton’s Incredible Voyage, by Alfred Lansing.

It is a true story about folks who survive incredible hardship for a long time. You should read it.

Shackleton’s family motto was Fortitudine Vincimus — “by endurance we conquer”. I like this a lot.

On April 22nd, we celebrate the 14th anniversary of the incorporation of D-Wave. Over these past 14 years, nearly everything we’ve tried hasn’t worked. While we haven’t had to eat penguin (yet), and to my knowledge no amputations have been necessary, it hasn’t been a walk in the park. The first ten things you think of always turn out to be dead ends or won’t work for some reason or other.

Here I’m going to share an example of this with the sparse coding problem by describing two things we tried that didn’t work, and why.

Where we got to last time

In the last post, we boiled down the hardness of L0-norm sparse coding to the solution of a large number of QUBOs of the form

Find $\vec{w}$ that minimizes

$G(\vec{w}; \lambda) = \sum_{j=1}^{K} w_j [ \lambda + \vec{d}_j \cdot (\vec{d}_j -2 \vec{z}) ] + 2 \sum_{j < m}^K w_j w_m \vec{d}_j \cdot \vec{d}_m$

I then showed that using this form has advantages (at least for getting a maximally sparse encoding of MNIST) over the more typical L1-norm version of sparse coding.

I also mentioned that we used a variant of tabu search to solve these QUBOs. Here I’m going to outline two strategies we tried to use the hardware to beat tabu that ended up not working.
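Our production solver is a proprietary variant, but for readers who haven't seen one, a textbook single-bit-flip tabu search for QUBOs (minimize $\vec{w}^T \hat{Q} \vec{w}$ over binary $\vec{w}$) looks roughly like this; the tenure and iteration counts here are arbitrary illustrative choices, not our settings:

```python
import numpy as np

def tabu_qubo(Q, n_iters=500, tenure=10, seed=0):
    """Minimize w^T Q w over w in {0,1}^K with single-bit-flip tabu search.

    Q is a symmetric K x K matrix. A bit flipped at step t is tabu for the
    next `tenure` steps, unless flipping it would beat the best-known energy
    (the aspiration criterion)."""
    rng = np.random.default_rng(seed)
    K = Q.shape[0]
    w = rng.integers(0, 2, size=K).astype(float)
    energy = w @ Q @ w
    best_w, best_e = w.copy(), energy
    tabu_until = np.zeros(K, dtype=int)
    for it in range(n_iters):
        # energy change of flipping bit j: (1 - 2 w_j) (Q_jj + 2 sum_{m != j} Q_jm w_m)
        deltas = (1 - 2 * w) * (np.diag(Q) + 2 * (Q @ w - np.diag(Q) * w))
        allowed = (tabu_until <= it) | (energy + deltas < best_e)
        if not allowed.any():
            continue
        j = np.argmin(np.where(allowed, deltas, np.inf))  # best allowed move, even if uphill
        w[j] = 1 - w[j]
        energy += deltas[j]
        tabu_until[j] = it + tenure
        if energy < best_e:
            best_w, best_e = w.copy(), energy
    return best_w, best_e

# Tiny sanity check: the minimum of this 2-variable QUBO is -1, at a single-bit state
Q_demo = np.array([[-1.0, 2.0], [2.0, -1.0]])
w_best, e_best = tabu_qubo(Q_demo)
print(w_best, e_best)   # one of the single-bit states, energy -1.0
```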

These QUBOs are fully connected, and the hardware isn’t

The terms in the QUBO that connect variables $j$ and $m$ are proportional to the dot product of the $j^{th}$ and $m^{th}$ dictionary atoms $\vec{d}_j$ and $\vec{d}_m$. Because we haven’t added any restrictions on what these atoms need to look like, these dot products can all be non-zero (the dictionary atoms don’t need to be, and in general won’t be, orthogonal). This means that the problems generated by the procedure are all fully connected — each variable is influenced by every other variable.
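To see the full connectivity concretely, here is a sketch (mine, with made-up sizes) that assembles the QUBO matrix in upper-triangular form from a random unconstrained dictionary; essentially every coupling comes out non-zero:

```python
import numpy as np

def sparse_coding_qubo(D, z, lam):
    """Upper-triangular QUBO matrix for one data object:
    Q_jj = lam + d_j . (d_j - 2 z),  Q_jm = 2 d_j . d_m for j < m."""
    G = D.T @ D                                  # Gram matrix of the atoms
    Q = 2.0 * np.triu(G, k=1)                    # couplings, pairs j < m only
    np.fill_diagonal(Q, lam + np.diag(G) - 2.0 * (D.T @ z))
    return Q

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 6))                 # 6 random atoms in 20 dimensions
z = rng.standard_normal(20)
Q = sparse_coding_qubo(D, z, lam=0.05)
off = Q[np.triu_indices(6, k=1)]
print(np.count_nonzero(off), "of", off.size, "couplings are non-zero")  # all 15, for random atoms
```

With random atoms the dot products essentially never vanish, so the QUBO couples every pair of variables; the structured-dictionary idea forces the off-graph dot products to be exactly zero instead.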

Unfortunately, when you build a physical quantum computing chip, this full connectivity can’t be achieved. The chip you get to work with connects any given variable with only a small number of other variables.

There are two ways we know of to get around the mismatch between the connectivity of a problem we want to solve and the connectivity of the hardware. The first is called embedding, and the second is using the hardware to perform a type of large neighborhood local search as a component of a hybrid algorithm we call BlackBox.

Solving problems by embedding

In a quantum computer, qubits are physically connected to only some of the other qubits. In the most recent spin of our design, each qubit is connected to at most 6 other qubits in a specific pattern which we call a Chimera graph. In our first product chip, Rainier, there were 128 qubits. In the current processor, Vesuvius, there are 512.

Chimera graphs are a way to use a regular repeating pattern to tile out a processor. In Rainier, the processor graph was a four by four tiling of an eight qubit unit cell. For Vesuvius, the same unit cell was used, but with an eight by eight tiling.

For a detailed overview of the rationale behind embedding, and how it works in practice for Chimera graphs, see here and here, which discuss embedding into the 128-qubit Rainier graph (Vesuvius is the same, just more qubits).

The short version is that an embedding is a map from the variables of the problem you wish to solve to the physical qubits in a processor, where the map can be one-to-many (each variable can be mapped to many physical qubits). To preserve the problem structure we strongly ‘lock together’ qubits corresponding to the same variable.

In the case of fully connected QUBOs like the ones we have here, it is known that you can always embed a fully connected graph with $K$ vertices into a Chimera graph with $(K-1)^2/2$ physical qubits — Rainier can embed a fully connected 17 variable graph, while Vesuvius can embed a fully connected 33 variable graph. Shown to the right is an embedding from this paper into Rainier, for solving a problem that computes Ramsey numbers; qubits in the processor graph colored the same represent the same computational variable.
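The arithmetic behind those numbers is a one-liner to check: the largest embeddable $K$ for a given qubit count is the largest $K$ with $(K-1)^2/2$ no greater than the number of qubits:

```python
import math

def max_clique_embeddable(num_qubits):
    """Largest K such that a fully connected K-variable QUBO fits into
    num_qubits physical qubits, using the (K-1)^2 / 2 clique-embedding size."""
    # (K-1)^2 / 2 <= Q  iff  K - 1 <= sqrt(2 Q)
    return math.isqrt(2 * num_qubits) + 1

print(max_clique_embeddable(128), max_clique_embeddable(512))  # 17 33
```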

So one way we could use Vesuvius to solve the sparse coding QUBOs is to restrict $K$ to be 33 or less and embed these problems. However this is unsatisfactory for two (related) reasons. The first is that 33 dictionary atoms isn’t enough for what we ultimately want to do (sparse coding on big data sets). The second is that QUBOs generated by the procedure I’ve described are really easy for tabu search at that scale. For problems this small, tabu gives excellent performance with a per-problem timeout of about 10 milliseconds (about the same as the runtime for a single problem on Vesuvius), and since it can be run in the cloud, we can take advantage of massive parallelism as well. So even though on a problem by problem basis Vesuvius is competitive at this scale, when you gang up say 1,000 cores against it, Vesuvius loses (because there aren’t a thousand of them available… yet).

So this option, while we can do it, is out. At the stage we’re at now this approach can’t compete with cloud-enabled tabu. Maybe when we have a lot more qubits.

Solving sparse coding QUBOs using BlackBox

BlackBox is an algorithm developed at D-Wave. Here is a high level introduction to how it works. It is designed to solve problems where all we’re given is a black box that converts possible answers to binary optimization problems into real numbers denoting how good those possible answers are. For example, the configuration of an airplane wing could be specified as a bit string, and to know how ‘good’ that configuration was, we might need to actually construct that example and put it in a wind tunnel and measure it. Or maybe just doing a large-scale supercomputer simulation is enough. But the relationship between the settings of the binary variables and the quality of the answer in problems like this is not easily specified in a closed form, like we were able to do with the sparse coding QUBOs.

BlackBox is based on tabu search, but uses the hardware to generate a model of the objective function around each search point that expands the possibilities for the next move beyond single bit flips. This modelling and sampling from hardware at each tabu step increases the time per step, but decreases the number of steps required to reach some target value of the objective function. As the cost of evaluating the objective function goes up, the gain from making fewer steps by making better moves at each step goes up. However, if the objective function can be evaluated very quickly, tabu generally beats BlackBox: tabu can make many more guesses per unit time, while BlackBox pays the additional cost of the modeling and hardware sampling step.

BlackBox can be applied to fully connected QUBOs of arbitrary size, which makes it preferable to embedding: we lose the restriction to small numbers of dictionary atoms. With BlackBox we can try any size of problem and see how it does.

We did this, and unfortunately BlackBox on Vesuvius is not competitive with cloud-enabled tabu search for any of the problem sizes we tried (which were, admittedly, still pretty small — up to 50 variables). I suspect that this will continue to hold, no matter how large these problems get, for the following reasons:

1. The inherently parallel nature of the sparse coding problem ($S$ independent QUBOs) means that we will always be up against multiple cores vs. a small number of Vesuvius processors. This factor can be significant — for a large problem with millions of data objects, this factor can easily be in the thousands or tens of thousands.
2. BlackBox is designed for objective functions that are really black boxes, so that there is no obvious way to attack the structure of the problem directly, and where it is very expensive to evaluate the objective function. This is not the case for these problems — they are QUBOs and this means that attacks can be made directly based on this known fact. For these problems, the current version of BlackBox, while it can certainly be used, is not in its sweet spot, and wouldn’t be expected to be competitive with tabu in the cloud.

And this is exactly what we found. Note that there is a small caveat here — it is possible (although I think unlikely) that for very large numbers of atoms (say low thousands) this could change, and BlackBox could start winning. However, for both of the reasons listed above I would bet against this.

What to do, what to do

We tried both obvious tactics for using our gear to solve these problems, and both lost to a superior classical approach. So do we give up and go home? Of course not!

We shall go on to the end… we shall never surrender!!!

We just need to do some mental gymnastics here and be creative.

In both of the approaches above, we tried to shoehorn the problem our application generates into the hardware. Neither solution was effective.

So let’s look at this from a different perspective. Is it possible to restrict the problems generated by sparse coding so that they exactly fit in hardware — so that we require the problems generated to exactly match the hardware graph? If we can achieve this, we may be able to beat the classical competition, as we know that Vesuvius is many orders of magnitude faster than anything that exists on earth for the native problems it’s solving.