Generally I try to avoid commenting on ongoing scientific debates. My view is that good explanations survive scrutiny, and bad explanations do not, and that our role in bringing quantum computers kicking and screaming into the world is to, well, build quantum computers. If people love what we build, and we do everything we can to adjust our approach to make better and better gear for the people who love our computers, we have succeeded and I sleep well.
I am going to make an exception here. Many people have asked me specifically about the recent Shin et al. paper, and I’d like to give you my perspective on it.
Science is about good explanations
In my world view, science is fundamentally about good explanations.
What does this mean? David Deutsch eloquently describes this point of view in this TED talk. There is a transcript here. He proposes and defends the idea that progress comes from discovering good explanations for why things are the way they are. You would not be reading this right now if we had not come up with good explanations for what electrons are.
We can and should directly apply these ideas to the question of whether D-Wave processors are quantum or classical. From my perspective, if the correct explanation were ‘it’s classical’, that would be critical to know as quickly as possible, because we could then identify why this was so, and attempt to fix whatever was going wrong. That’s kind of my job. So I need to really understand this sort of thing.
Here are two competing explanations for experiments performed on D-Wave processors.
Explanation #1. D-Wave processors are inherently quantum mechanical, and described by open quantum systems models where the energy scale of the noise is much less than the energy scale of the central quantum system.
Explanation #2. D-Wave processors are inherently classical, and can be described by a classical model with no need to invoke quantum mechanics.
The Shin et al. paper claims that Explanation #2 is a correct explanation of D-Wave processors. Let’s examine that claim.
Finding good explanations for experimental results
It is common practice that whenever an experiment demonstrating quantum mechanical (or, more generally, non-classical) effects is reported, researchers look for classical models that can reproduce the same results. A successful theory, however, needs to explain all existing experimental results, not just a few select ones. For example, the classical model of light with the assumption of ether could successfully explain many experiments at the beginning of the 20th century. Only a few unexplained experiments were enough to lead to the emergence of special relativity.
In the case of finding good explanations for the experimental results available for D-Wave hardware, there is a treasure trove of experimental data. Here is just a small sample. There are experimental results available on single qubits (Macroscopic Resonant Tunneling & Landau-Zener), two qubits (cotunneling) and multiple qubits (now up to about 500) (the eight qubit Nature paper, entanglement, results at 16 qubits, the Boixo et al. paper).
Let’s see what we get when we apply our two competing explanations of what’s going on inside D-Wave processors to all of this data.
If we assume Explanation #1, we find that a single simple quantum model perfectly describes every single experiment ever done. In the case of the simpler data sets, experimental results agree with quantum mechanics with no free parameters, as it is possible to characterize every single term in the system’s Hamiltonian, including the noise terms.
Explanation #2, however, completely fails on every single experiment listed above, except for the Boixo et al. data (I’ll give you an explanation of why this is shortly). In particular, the eight qubit quantum entanglement measured in Lanting et al. can never be explained by such a model, which rules it out as an explanation of the underlying behavior of the device. Note that this is stronger than the model simply being a bad explanation: the model proposed in Shin et al. makes a prediction, about an experiment you can easily perform on D-Wave processors, that contradicts what is observed.
Why the proposed model works in describing the Boixo et al. data
Because the Shin et al. model makes predictions that contradict the experimental data for most of the experiments that have been performed on D-Wave chips, it is clearly not a correct explanation of what’s going on inside the processors. So what explains the agreement in the case of the Boixo et al. paper? Here’s a possibility, which we can test.
The experiment performed in the Boixo et al. paper considered a specific use of the processors: solving a specifically chosen type of problem. It turns out that for this type of problem, multi-qubit quantum dynamics (and therefore entanglement) are not necessary for the hardware to reach good solutions. In other words, for this experiment, a Bad Explanation (a classical model) can be concocted that matches the results of a fully quantum system.
To be more specific, the Shin et al. model replaces terms like $\sigma_i^z \sigma_j^z$ with $\langle\sigma_i^z\rangle\langle\sigma_j^z\rangle$, where $\sigma_i^z$ is a Pauli matrix and $\langle\sigma_i^z\rangle$ is the quantum average of $\sigma_i^z$. Since all quantum correlations are gone after such averaging, you can model each spin $\langle\vec{\sigma}_i\rangle$ as a classical magnetic moment in a 2D plane. But now it is clear that any experiments relying on multi-qubit quantum correlation and entanglement cannot be explained by this simple model.
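For concreteness, here is a sketch of the construction as I understand it, up to sign and normalization conventions (treat the exact form below as my own paraphrase, not a quote from the Shin et al. preprint):

```latex
% Quantum annealing Hamiltonian (schematic):
H(t) = -A(t)\sum_i \sigma_i^x
       + B(t)\Big(\sum_i h_i \sigma_i^z + \sum_{i<j} J_{ij}\,\sigma_i^z \sigma_j^z\Big)

% Mean-field replacement: each qubit i becomes a classical rotor
% at angle \theta_i in a 2D plane,
\sigma_i^x \;\to\; \sin\theta_i, \qquad \sigma_i^z \;\to\; \cos\theta_i

% so the model evolves under the purely classical energy
E(t) = -A(t)\sum_i \sin\theta_i
       + B(t)\Big(\sum_i h_i \cos\theta_i + \sum_{i<j} J_{ij}\cos\theta_i \cos\theta_j\Big)
```

Every term in $E(t)$ is a product of single-spin averages, which is exactly why no multi-qubit quantum correlation survives.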
I’ve proposed an explanation for the agreement between the Shin et al. model and this particular experiment: the hardware is fundamentally quantum, but for the particular problem type run, this won’t show up, because the problem type is ‘easy’ in the sense that good solutions can be found without multi-qubit quantum dynamics, and an incorrect classical model can be proposed that nevertheless agrees with the experimental data.
How do we test this explanation? We change the problem type to one where a fundamental difference in experimental outcome between the processor hardware and any classical model is expected. If the Shin et al. model continues to describe what is observed in that situation, then we have a meaningful result that disagrees with the ‘hardware is quantum’ explanation. If instead the model disagrees with experiment, that supports both the ‘hardware is quantum’ explanation and the explanation that the problem type originally studied was expected to give the same experimental results for quantum and classical models, and so was simply a bad choice for distinguishing the two.
So a very important test to help determine what is truly going on is to make this change, measure the results and see what’s up. I believe that some of the folks working on our systems are doing this now. Looking forward to seeing the results!
The best explanation we have now is that D-Wave processors are beautifully quantum mechanical
The explanation that D-Wave processors are fundamentally quantum mechanical beautifully explains every single experiment that has ever been performed on them. The degree of agreement is astonishing. The results on the smallest systems, such as the individual qubits, are like nothing I’ve ever seen in terms of agreement of theory and experiment. Some day these will be in textbooks as examples of open quantum systems.
No classical model has ever been proposed that simultaneously explains all of the experiments listed above.
The specific model proposed in Shin et al. focuses on only one experiment, for which there was no expectation of an experimental difference between quantum and classical models, and completely (and from my perspective disingenuously) ignores the mountains of experimental data gathered on the device in every other setting.
For these reasons, the Shin et al. results have no validity and no importance.
As an aside, I was disappointed when I saw what they were proposing. I had heard through the grapevine that Umesh Vazirani was preparing some really cool classical model that described the data referred to above and I was actually pretty excited to see it.
When I saw how trivially wrong it was it was like opening a Christmas present and getting socks.
There was a really interesting paper posted on the arxiv yesterday, coauthored by Peter Shor and Eddie Farhi. It analyzes ways you can adjust adiabatic quantum optimization algorithms to make them run better. There are some very good ideas here — check it out!
Also on the arxiv recently was this cool paper by Andrew Lucas at Harvard, mapping a lot of NP problems into Ising model problems.
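To give a flavor of what such a mapping looks like, here is a toy sketch for one NP-hard problem, number partitioning (this is my own illustrative example, not code from the Lucas paper): the squared imbalance $(\sum_i s_i n_i)^2$ expands into a constant plus an Ising energy with couplings $J_{ij} = n_i n_j$, so minimizing the Ising energy solves the partitioning problem.

```python
from itertools import product

def ising_energy(spins, J):
    """Energy of an Ising configuration: E = sum_{i<j} J[i][j] * s_i * s_j."""
    n = len(spins)
    return sum(J[i][j] * spins[i] * spins[j]
               for i in range(n) for j in range(i + 1, n))

def partition_to_ising(numbers):
    """Number partitioning as an Ising model: J_ij = n_i * n_j.
    Since (sum_i s_i n_i)^2 = sum_i n_i^2 + 2 * sum_{i<j} n_i n_j s_i s_j,
    minimizing the Ising energy minimizes the squared partition imbalance."""
    n = len(numbers)
    return [[numbers[i] * numbers[j] for j in range(n)] for i in range(n)]

numbers = [4, 5, 6, 7, 8]  # toy instance; 4 + 5 + 6 == 7 + 8
J = partition_to_ising(numbers)

# Brute force over all 2^5 spin assignments (an annealer would do this step)
best = min(product([-1, 1], repeat=len(numbers)),
           key=lambda s: ising_energy(s, J))
imbalance = abs(sum(s * x for s, x in zip(best, numbers)))
print(imbalance)  # 0: the two subsets {4,5,6} and {7,8} balance exactly
```

The brute-force search stands in for whatever optimizer (classical or quantum annealer) you would actually use on the Ising form.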
There are a lot of physical neural nets on planet Earth. Just the humans alone account for about 7.139 billion of them. You have one, hidden in close to perfect darkness inside your skull — a complex graph with about 100 billion neurons and 0.15 quadrillion connections between those neurons.
Of course we’d like to be able to build machines that do what that lump of squishy pink-gray goo does. Mostly because it’s really hard and therefore fun. But also because having an army of sentient robots would be super sweet. And it seems sad that all matter can’t be made aware of its own mortality and suffer the resultant existential angst. Stupid 5 billion year old rocks. See how smug you are when you learn about the heat death of the universe.
One thing that is also hard, but not that hard, is trying to build different kinds of physical neural nets that are somewhat inspired by our brains. ‘Somewhat inspired’ is a little vague. We don’t actually understand a lot about how brains actually work. But we know a bit. In some cases, such as our visual perception system, we know quite a bit. This knowledge has really helped the algorithmic side of building better and better learning systems.
So let’s explore engineering our own non-biological but biologically inspired physical neural nets. Does this idea make sense? How would we use such things?
Training a Deep Boltzmann Machine
One kind of neural net that’s quite interesting is a Deep Boltzmann Machine (DBM). Recall that a DBM can be thought of as a graph comprising both visible and hidden units. The visible units act as an interface layer between the external universe that the DBM is learning from, and the hidden units which are used to build an internal representation of the DBM’s universe.
A method for training a DBM was demonstrated in this paper. As we discussed earlier, the core mathematical problem for training a DBM is sampling from two different distributions — one where the visible units are clamped to data (the Creature is ‘looking at the world’), and one where the entire network is allowed to run freely (the Creature is ‘dreaming about the world’). In the general case, this is hard to do because the distributions we need to sample from are Boltzmann distributions over all the unclamped nodes of the network. In practice, the connectivity of the graph is restricted and approximate techniques are used to perform the sampling. These ideas allow very large networks to be trained, but this comes with a potentially serious loss of modeling efficiency.
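The two distributions can be sketched for a toy bipartite (restricted) network; all sizes and weights below are illustrative assumptions of mine, not the setup from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    """Draw binary unit states from their activation probabilities."""
    return (rng.random(p.shape) < p).astype(float)

# Toy bipartite Boltzmann machine: 6 visible units, 4 hidden units
n_v, n_h = 6, 4
W = 0.1 * rng.standard_normal((n_v, n_h))   # visible-hidden weights
b_v, b_h = np.zeros(n_v), np.zeros(n_h)     # unit biases
data = rng.integers(0, 2, size=(10, n_v)).astype(float)

# Distribution 1 -- 'looking at the world': visibles clamped to data,
# hiddens sampled from P(h | v = data)
h_clamped = sample_bernoulli(sigmoid(data @ W + b_h))

# Distribution 2 -- 'dreaming about the world': nothing clamped, so we
# approximate the free-running Boltzmann distribution with a Gibbs chain
v = data.copy()
for _ in range(20):
    h = sample_bernoulli(sigmoid(v @ W + b_h))    # hiddens given visibles
    v = sample_bernoulli(sigmoid(h @ W.T + b_v))  # visibles given hiddens

# Training compares correlations measured under the two distributions
print(h_clamped.shape, v.shape)  # (10, 4) (10, 6)
```

The bipartite structure is what makes each conditional sampling step exact; a general graph would not factorize this way, which is exactly the hard case discussed above.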
Using physical hardware to perform the sampling steps
Because the sampling steps are a key bottleneck for training DBMs, maybe we could think of a better way to do it. What if we built an actual physical neural net? Could we design something that could do this task better than the software approaches typically used?
Here are the necessary ingredients:
1. A two-state device to play the part of the neurons
2. The ability to programmatically bias each neuron locally to preferentially occupy either of its states
3. Communication channels between pairs of neurons, where the relative preference of the pair can be set programmatically
4. The ability of the system to reach thermal equilibrium with its environment at a temperature whose energy scale is comparable to the energy scales of the individual neurons
5. The ability to read out each neuron’s state with high fidelity
If you had these ingredients, you could place the neurons where you want them for your network; connect them as your network requires; program in their local biases and connection weights; allow them to reach thermal equilibrium (i.e. reach a Boltzmann distribution); and then sample by measuring their states.
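As a software caricature of this recipe (a sketch only, with made-up biases and couplings; a real physical net would equilibrate on its own rather than via simulated dynamics), here is a tiny programmable two-state network brought to thermal equilibrium with Metropolis updates and then read out:

```python
import math
import random

random.seed(2)

def metropolis_sample(h, J, beta=2.0, sweeps=200):
    """One equilibrated sample from a programmable two-state network.
    h[i]: local bias on neuron i (ingredient 2); J[(i,j)]: pairwise
    coupling (ingredient 3); beta: inverse temperature (ingredient 4).
    Energy: E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[(i,j)]*s[i]*s[j]."""
    n = len(h)
    s = [random.choice([-1, 1]) for _ in range(n)]  # ingredient 1: two-state devices
    neighbors = {i: [] for i in range(n)}
    for (i, j), w in J.items():
        neighbors[i].append((j, w))
        neighbors[j].append((i, w))
    for _ in range(sweeps):
        for i in range(n):
            # Energy change from flipping neuron i
            dE = -2 * s[i] * (h[i] + sum(w * s[j] for j, w in neighbors[i]))
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                s[i] = -s[i]
    return s  # ingredient 5: read out every neuron's state

# Two neurons with a ferromagnetic coupling (J < 0 favors alignment)
# and a bias pushing neuron 0 toward the -1 state
h = [0.5, 0.0]
J = {(0, 1): -1.0}
samples = [metropolis_sample(h, J) for _ in range(200)]
aligned = sum(s[0] == s[1] for s in samples) / len(samples)
print(aligned)  # most samples have the two neurons in the same state
```

Repeated calls give (approximately) Boltzmann-distributed samples, which is precisely the resource the DBM training procedure needs.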
The key issue here is step 4. The real question, which is difficult to answer without actually building whatever you have in mind, is whether the distribution you get from the hardware is effective for learning. It might not be Boltzmann, because in the general case thermal equilibration takes exponential time. However, the devil is in the details here. The distribution sampled from during alternating Gibbs sampling is also not Boltzmann, but it works pretty well. A physical system might be equilibrated well enough by being smart about helping it equilibrate: using sparsely connected graphs, principles like thermal and/or quantum annealing, or other tricks inspired by condensed matter physics and statistical mechanics.
The D-Wave architecture satisfies all five of these requirements. You can read about it in detail here. So if you like you can think of that particular embodiment in what follows, but this is more general than that. Any system meeting our five requirements might also work. In the D-Wave design, the step 4 equilibration algorithm is quantum annealing in the presence of a fixed physical temperature and a sparsely locally connected hardware graph, which seems to work very well in practice.
One specific idea for doing this
Let’s focus for a moment on the Vesuvius architecture. Here’s what it looks like for one of the chips in the lab. The grey circles are the qubits (think of them as neurons in this context) and the lines connecting them are the programmable pairwise connection strengths (think of them as connection strengths between neurons).
There are about 500 neurons in this graph. That’s not very many, but it might be enough to do some interesting experiments. For example, the MNIST dataset is typically analyzed using 784 visible units and a few thousand hidden units, so we’re not all that far off.
Here’s an idea of how this might work. In a typical DBM approach, there are multiple layers. Each individual layer has no connections within it, but adjacent layers are fully connected. Training proceeds by doing alternating Gibbs sampling between two sets of bipartite neurons: none of the even-layer neurons are connected to each other, none of the odd-layer neurons are connected to each other, but there is dense connectivity between the two groups. The two groups are conditionally independent because of the bipartite structure.
We could try the following. Take all of the neurons in the above graph, and ‘stretch them out’ in a line. The vertices will then retain the connections from the above graph. Here’s what this looks like for a smaller subgraph comprising a single unit cell, so you can get the idea.
If you do this with the entire Vesuvius graph, the resultant building block is a set of about 500 neurons with sparse intra-layer connectivity, matching the connectivity structure of the Vesuvius architecture.
If we assume that we can draw good Boltzmann-esque samples from this building block, we can tile out enough of them to do what we want using the following idea.
To train this network, we do alternating Gibbs sampling as in a standard DBM, but using the probability distributions obtained by actually running the Vesuvius graph in hardware (biased suitably by the clamped variables) instead of the usual procedure.
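A skeleton of that training loop might look like the following. The function name `draw_hardware_samples` is hypothetical: on a real chip it would program the biases and couplings and read out equilibrated states, and here I fake it with a short Gibbs chain so the sketch runs; sizes and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def draw_hardware_samples(W, b_v, b_h, n_samples, gibbs_steps=50):
    """Hypothetical stand-in for the physical sampler. On hardware this
    would be: program weights/biases, equilibrate, read out states."""
    v = (rng.random((n_samples, len(b_v))) < 0.5).astype(float)
    for _ in range(gibbs_steps):
        h = (rng.random((n_samples, len(b_h))) < sigmoid(v @ W + b_h)).astype(float)
        v = (rng.random((n_samples, len(b_v))) < sigmoid(h @ W.T + b_v)).astype(float)
    return v, h

def train_step(W, b_v, b_h, data, lr=0.05):
    # Positive phase: visibles clamped to data ('looking at the world')
    p_h = sigmoid(data @ W + b_h)
    # Negative phase: samples from the (simulated) hardware ('dreaming')
    v_model, h_model = draw_hardware_samples(W, b_v, b_h, len(data))
    # Update: data correlations minus model correlations
    W += lr * (data.T @ p_h - v_model.T @ h_model) / len(data)
    b_v += lr * (data - v_model).mean(axis=0)
    b_h += lr * (p_h - h_model).mean(axis=0)
    return W, b_v, b_h

n_v, n_h = 8, 4
W = 0.01 * rng.standard_normal((n_v, n_h))
b_v, b_h = np.zeros(n_v), np.zeros(n_h)
data = rng.integers(0, 2, size=(16, n_v)).astype(float)
for _ in range(10):
    W, b_v, b_h = train_step(W, b_v, b_h, data)
```

The only change from standard DBM training is which sampler supplies the negative-phase statistics, which is why a physical net can be dropped in without changing the rest of the algorithm.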
What might this buy us?
Alright so let’s imagine we could equilibrate and draw samples from the above graph really quickly. What does this buy us?
Well, the obvious thing is that you can now learn about possible intra-layer correlations. For example, in an image, we know that pixels have local correlations: pixels that are close to each other in an image will tend to be correlated. This type of correlation might be very useful for our model to be able to learn directly, and it is the sort of thing that intra-layer connections within the visible layer might be useful for.
Another interesting possibility is that these intra-layer connections could represent the same input at different times, the intuition being that inputs that are close in time are also likely to be correlated.
OK well why don’t you try it out?
That is a fabulous idea! I’m going to try this on MNIST and see if I can make it work. Stand by!