Two interesting papers from the Ames crew

Hi everyone! Sorry for being silent for a while. Working. :-)

Two interesting papers appeared on the arxiv this week, both from people at Ames working on their D-Wave Two.

First: A Quantum Annealing Approach for Fault Detection and Diagnosis of Graph-Based Systems

Second: Quantum Optimization of Fully-Connected Spin Glasses

Enjoy!

Entanglement in a Quantum Annealing Processor

A new paper published today in Phys Rev X demonstrates eight-qubit entanglement in a D-Wave processor, which I believe is a world record for solid state qubits. This is an exceptional paper with an important result. The figure (Figure 5 of the paper) plots a quantity that, if negative, verifies entanglement. The quantity s is the annealing parameter, a normalized time: the quantum annealing procedure runs from left to right, with entanglement maximized near the region where the energy gap is smallest.

Here is the abstract:

Entanglement lies at the core of quantum algorithms designed to solve problems that are intractable by classical approaches. One such algorithm, quantum annealing (QA), provides a promising path to a practical quantum processor. We have built a series of architecturally scalable QA processors consisting of networks of manufactured interacting spins (qubits). Here, we use qubit tunneling spectroscopy to measure the energy eigenspectrum of two- and eight-qubit systems within one such processor, demonstrating quantum coherence in these systems. We present experimental evidence that, during a critical portion of QA, the qubits become entangled and entanglement persists even as these systems reach equilibrium with a thermal environment. Our results provide an encouraging sign that QA is a viable technology for large scale quantum computing.
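
To get a feel for what an energy eigenspectrum and its minimum gap look like, here's a toy numpy sketch (my own illustration, with made-up fields and couplers, not the paper's model or parameters). It diagonalizes a small transverse-field Ising Hamiltonian across the anneal and prints the gap between the two lowest levels:

```python
# Toy illustration (not the paper's model): eigenspectrum of a small
# transverse-field Ising Hamiltonian
#   H(s) = -(1-s) * sum_i X_i + s * (sum_i h_i Z_i + sum_{i<j} J_ij Z_i Z_j)
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
I = np.eye(2)

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit Hilbert space."""
    mats = [single if k == site else I for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4                                            # small enough to diagonalize exactly
h = {0: 0.2, 1: -0.3, 2: 0.1, 3: 0.25}           # made-up local fields
J = {(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0}   # made-up couplers

H_driver = -sum(op(X, i, n) for i in range(n))
H_problem = sum(h[i] * op(Z, i, n) for i in range(n)) + \
            sum(Jij * op(Z, i, n) @ op(Z, j, n) for (i, j), Jij in J.items())

for s in np.linspace(0, 1, 11):
    evals = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
    print(f"s = {s:.1f}  gap = {evals[1] - evals[0]:.4f}")
```

The region of s where that gap is smallest is exactly where the paper finds entanglement peaking.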

Experimental signature of programmable quantum annealing

Some new science from users of the D-Wave One at USC ISI. Here’s the arxiv link. Here’s the abstract:

Quantum annealing is a general strategy for solving difficult optimization problems with the aid of quantum adiabatic evolution [1, 2]. Both analytical and numerical evidence suggests that under idealized, closed system conditions, quantum annealing can outperform classical thermalization-based algorithms such as simulated annealing [3, 4]. Do engineered quantum annealing devices effectively perform classical thermalization when coupled to a decohering thermal environment? To address this we establish, using superconducting flux qubits with programmable spin-spin couplings, an experimental signature which is consistent with quantum annealing, and at the same time inconsistent with classical thermalization, in spite of a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. This suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.
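
For contrast, the "classical thermalization-based algorithms such as simulated annealing" the abstract mentions look, in miniature, like the following sketch (my own illustration, with a made-up instance): a random walk over spin states that accepts energy increases with Boltzmann probability at a falling temperature.

```python
# Minimal simulated annealing for an Ising model (illustrative baseline,
# not the protocol used in the paper).
import math
import random

def ising_energy(spins, h, J):
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

def simulated_annealing(h, J, sweeps=2000, T0=5.0, T1=0.05, seed=0):
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for t in range(sweeps):
        T = T0 * (T1 / T0) ** (t / (sweeps - 1))   # geometric cooling schedule
        i = rng.randrange(n)
        spins[i] *= -1                              # propose a single spin flip
        dE = ising_energy(spins, h, J)              # energy with the flip...
        spins[i] *= -1
        dE -= ising_energy(spins, h, J)             # ...minus energy without it
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] *= -1                          # accept the flip
    return spins

h = [0.1, -0.2, 0.0, 0.15]                          # made-up instance
J = {(0, 1): -1.0, (1, 2): 1.0, (2, 3): -1.0}
best = simulated_annealing(h, J)
print(best, ising_energy(best, h, J))
```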

 

Catching quantum mechanics in the act…

So today D-Wave's latest paper has been published in Nature. You can take a look at the abstract here (and the paper if you have a subscription!). The paper is entitled 'Quantum annealing with manufactured spins'.

So what is this new publication all about?

Manufactured, coupled quantum spins

Everyone knows that when you observe a quantum computation, you destroy it, right? So how are you supposed to know whether your quantum computer is working correctly? That's what this latest Nature article from the scientists at D-Wave addresses. We've known for some time that the D-Wave quantum computers are performing computations, and we know that the answers they give us are correct (they agree with our predictions).

But wouldn't it be cool to be able to go further, to actually look INSIDE a quantum computer, with large numbers of qubits all interacting and computing, and catch the quantum mechanics in the act?

Not just a string of atoms

The first cool thing about this experiment is that the system under test isn't just the usual suspects found in quantum experiments: a string of atoms, a series of electron spins in a crystal, or a bunch of photons. It's not a curiosity that scientists have found lurking in the natural world allowing them to observe some quantum mechanics. This is a processor! It is programmable, it actually solves problems, it looks similar to the integrated circuits inside your laptop, and you can program it using Python! Anyway… I digress. What I mean to say is that it is very important to realise that these quantum effects are controllable. We're no longer just looking at quantum systems, like atoms, and verifying their quantum nature. We're taking those systems, moulding and warping their energy levels, and controlling the way they interact with each other, so that we can use those quantum effects to help us compute.
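
Since I mentioned Python: here's a rough sketch of what 'programming' such a machine amounts to in spirit (illustrative only, not D-Wave's actual API). The program is a choice of local fields h and couplers J defining an Ising energy function; a brute-force search stands in for the hardware here:

```python
# Illustrative only: the 'program' for a quantum annealer is essentially a
# choice of fields h and couplers J defining an Ising energy function.
# A brute-force search stands in for the quantum hardware here.
import itertools

h = {0: 0.5, 1: -0.5, 2: 0.3}              # made-up local fields
J = {(0, 1): -1.0, (1, 2): -1.0}           # made-up couplings

def energy(spins):
    e = sum(h[i] * spins[i] for i in h)
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

best = min(itertools.product([-1, 1], repeat=len(h)), key=energy)
print("lowest-energy state:", best, "energy:", energy(best))
```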

Respecting the bigger picture

It's fairly easy to isolate a single quantum bit and do some experiments on it to check that it is behaving quantum mechanically. It's much harder to test that it's STILL working quantum mechanically when it's in the middle of an incredibly complex processor, connected to all kinds of lines and electronics. It would be like designing a bridge that can support its own weight, but never considering what happens when the bridge is used as intended, with high volumes of traffic passing over it every day.

That's the second cool thing about this result: during the experiment, the processor is operated in the same way as it is operated during problem solving. We didn't have to do anything particularly esoteric to the qubits in order to watch them. We're simply lifting the lid off the black box so we can take a peek at the quantum mechanics of the computation as it happens during normal problem solving.

In the experiment itself, this 'black box' is a subsection of the processor known as a unit cell. It is a fundamental block which is replicated and tiled together to form the larger processor. The unit cell tested contains 8 qubits, all linked together. There are 16 such unit cells in the current generation of D-Wave's processors, known as the 'Rainier' architecture.
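
For concreteness, here's a tiny sketch of one unit cell's connectivity, assuming the K4,4 bipartite layout of this architecture (four 'vertical' qubits each coupled to all four 'horizontal' ones; the qubit numbering is mine):

```python
# Sketch of one unit cell's couplings, assuming a K4,4 bipartite layout:
# four 'vertical' qubits each coupled to four 'horizontal' ones.
vertical = [0, 1, 2, 3]
horizontal = [4, 5, 6, 7]
couplers = [(v, h) for v in vertical for h in horizontal]
print(len(couplers), "couplers:", couplers)
# With 16 such cells tiled together (plus inter-cell links), you get the
# 128-qubit 'Rainier' processor described in the post.
```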

Quantum birdwatching

So how exactly do the scientists ‘watch’ the quantum mechanics? Well, the unit cell mentioned above is operated in the same way as it would be during a normal computation – running what is known as a quantum annealing algorithm. The difference is that at a certain point during the computation, the usually slow, careful annealing of the qubits is suddenly interrupted by a very fast signal. This signal causes the unit cell to ‘freeze’ in whatever state it was in at the time. If you repeat the computation lots of times, but each time apply your ‘freezing’ signal at a slightly different moment during the quantum computation, you can build up a series of ‘snapshots’, like stills on a movie reel. D-Wave scientists compiled all these snapshots to reveal exactly what is happening during the quantum computation.
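
You can picture the 'freeze' trick as a modified annealing schedule: the control parameter ramps slowly until the chosen snapshot time, then gets slammed to the end. A schematic sketch (illustrative timings only, not the experiment's actual waveforms):

```python
# Schematic of an interrupted anneal (illustrative numbers only):
# ramp s slowly from 0 toward 1, then quench rapidly at t_freeze,
# 'freezing' the state for readout. Repeating with different t_freeze
# values builds up the movie-reel snapshots described above.
def annealing_schedule(t, t_total=100.0, t_freeze=60.0, quench=0.5):
    if t < t_freeze:
        return t / t_total                          # slow, careful ramp
    s_at_freeze = t_freeze / t_total
    # fast quench: finish the remaining ramp in `quench` time units
    return min(1.0, s_at_freeze + (1.0 - s_at_freeze) * (t - t_freeze) / quench)

for t in [0, 30, 59, 60.25, 60.5, 61]:
    print(f"t = {t:6.2f}  s = {annealing_schedule(t):.3f}")
```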

The next step is to check that these results really do agree with what quantum mechanics tells us. So a theoretical model of the unit cell was set up, based on the predictions of quantum physics, and the model fits the data very well indeed. Even more interesting, a second model was set up, which captured how CLASSICAL physics predicts the processor should behave. The results were striking: the classical model wasn't even close! There's no way these results can be explained using classical physics.

This is a pretty awesome result for quantum computation in general. People have been worrying for a while that it may not be possible to ever build large scale quantum computing systems, fearing that once we start putting those fragile qubits into a real processor environment, the quantum mechanics will be destroyed. The results from this latest paper reveal exactly the opposite: the quantum effects persist, and we can control them.

Maybe quantum mechanics isn’t so spooky after all. In fact, I’d say that the future of building large scale processors that operate using quantum mechanics looks more promising than ever.

We love ‘em short and fat

Geometrical dependence of low frequency noise in superconducting flux qubits

A general method for directly measuring the low-frequency flux noise (below 10 Hz) in compound Josephson junction superconducting flux qubits has been used to study a series of 85 devices of varying design. The variation in flux noise across sets of qubits with identical designs was observed to be small. However, the levels of flux noise systematically varied between qubit designs with strong dependence upon qubit wiring length and wiring width. Furthermore, qubits fabricated above a superconducting ground plane yielded lower noise than qubits without such a layer. These results support the hypothesis that localized magnetic impurities in the vicinity of the qubit wiring are a key source of low frequency flux noise in superconducting devices.
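
The title ('short and fat') is about the geometry dependence. Under the surface-spin hypothesis in the abstract, a simple back-of-envelope model has flux-noise power growing with wiring length and shrinking with wiring width, roughly as length/width. That proportionality is my assumption for illustration, not a number from the paper:

```python
# Back-of-envelope sketch: if flux-noise power scales roughly as
# (wiring length / wiring width), as a surface-spin picture suggests,
# then 'short and fat' wiring wins. The l/w scaling is an assumption
# for illustration, not a fitted result from the paper.
designs = {
    "long and thin": {"length_um": 4000, "width_um": 1},
    "short and fat": {"length_um": 1000, "width_um": 4},
}
for name, d in designs.items():
    relative_noise_power = d["length_um"] / d["width_um"]
    print(f"{name}: relative noise power ~ {relative_noise_power:.0f}")
```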

New paper on machine learning with AQC

Training a Binary Classifier with the Quantum Adiabatic Algorithm

Abstract: This paper describes how to make the problem of binary classification amenable to quantum computing. A formulation is employed in which the binary classifier is constructed as a thresholded linear superposition of a set of weak classifiers. The weights in the superposition are optimized in a learning process that strives to minimize the training error as well as the number of weak classifiers used. No efficient solution to this problem is known. To bring it into a format that allows the application of adiabatic quantum computing (AQC), we first show that the bit-precision with which the weights need to be represented only grows logarithmically with the ratio of the number of training examples to the number of weak classifiers. This allows us to effectively formulate the training process as a binary optimization problem. Solving it with heuristic solvers such as tabu search, we find that the resulting classifier outperforms a widely used state-of-the-art method, AdaBoost, on a variety of benchmark problems. Moreover, we discovered the interesting fact that bit-constrained learning machines often exhibit lower generalization error rates. Changing the loss function that measures the training error from 0-1 loss to least squares maps the training to quadratic unconstrained binary optimization. This corresponds to the format required by D-Wave's implementation of AQC. Simulations with heuristic solvers again yield results better than those obtained with boosting approaches. Since the resulting quadratic binary program is NP-hard, additional gains can be expected from applying the actual quantum processor.
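
The least-squares-to-QUBO step in the abstract is concrete enough to sketch. With binary weights w_i over the weak classifiers, minimizing ||Hw - y||^2 + lambda*sum(w) expands (using w_i^2 = w_i) into a quadratic form over bits whose diagonal absorbs the linear terms. A small numpy sketch, with made-up data and a brute-force enumeration standing in for the solver:

```python
# Sketch of the least-squares -> QUBO mapping for the binary classifier
# (illustrative data; brute-force enumeration stands in for the
# quantum/heuristic solver used in the paper).
import itertools
import numpy as np

rng = np.random.default_rng(0)
S, N = 40, 6                               # S training examples, N weak classifiers
H = rng.choice([-1.0, 1.0], size=(S, N))   # made-up weak-classifier outputs
y = rng.choice([-1.0, 1.0], size=S)        # made-up labels
lam = 1.0                                  # sparsity penalty

# E(w) = ||H w - y||^2 + lam * sum(w) with w_i in {0,1}; since w_i^2 = w_i,
# the linear terms fold into Q's diagonal (dropping the constant y.y term).
Q = H.T @ H
Q[np.diag_indices(N)] += -2.0 * (H.T @ y) + lam

best_w = min((np.array(w) for w in itertools.product([0, 1], repeat=N)),
             key=lambda w: w @ Q @ w)
print("selected weak classifiers:", best_w)
```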

Some new science

The Role of Single Qubit Decoherence Time in Adiabatic Quantum Computation

We numerically study the evolution of an adiabatic quantum computer in the presence of a Markovian ohmic environment. We consider Ising spin glass systems with up to 20 coupled qubits that are independently coupled to the environment via two conjugate degrees of freedom. We demonstrate that the required computation time in the presence of the environment is of the same order as that for an isolated system, and is not limited by the single qubit decoherence time T2*, even when the minimum gap is much smaller than temperature. We also show that the behavior of the system can be efficiently described by a two-state model with only longitudinal coupling to the environment.

The main result is summarized in the conclusions:

…we have explicitly demonstrated that the computation time in AQC can be much longer than single qubit decoherence time T2∗.
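
A handy mental model for why the minimum gap, rather than single-qubit coherence, sets the clock is the Landau-Zener formula. In one common convention (hbar = 1), the chance of jumping out of the ground state at an avoided crossing with minimum gap g_min, swept at rate v, is exp(-pi*g_min^2/(2v)). This is a standard textbook estimate, not the paper's two-state model:

```python
# Landau-Zener estimate (common convention, hbar = 1): probability of
# jumping out of the ground state at an avoided crossing with minimum
# gap g_min, swept at rate v (rate of change of the diabatic energy
# separation). Slower sweeps or larger gaps keep the evolution adiabatic.
import math

def excitation_probability(g_min, v):
    return math.exp(-math.pi * g_min**2 / (2.0 * v))

for g_min in [0.01, 0.1, 1.0]:
    for v in [0.001, 0.01]:
        p = excitation_probability(g_min, v)
        print(f"g_min = {g_min:5.2f}  v = {v:.3f}  P_excite = {p:.3e}")
```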

A visit to Harvard

Harvard in the fall is great. The place really has a unique feeling to it. Here it is.

[Image: Harvard Business School]

I was recently visiting so that I could meet with some scientific collaborators there and to attend the teaching of the D-Wave case study at the business school.

Whenever I attend one of the case study analyses (the D-Wave case is part of the curriculum for MBAs at MIT-Sloan, HBS, Rotman, Michigan and a few other schools) I'm blown away by how quickly the students home in on the real issues.

Coincidentally the focus of the discussion this time was how to architect “Big Science” projects properly, which was the subject of the SCE workshop keynote I posted yesterday.

One of the insights brought up during the class was that publicly funded "Big Science" projects have a very different participant selection algorithm than private efforts, and this may be the crux of why private big science works better. I am going to elaborate on this in a future post because I think it is counterintuitive but provides lessons for policy and management of big science.