Here is a paper we’re posting to the arXiv shortly. The abstract:

We have studied the decoherence properties of adiabatic quantum computation in the presence of in general non-Markovian (e.g., low-frequency) noise. We show that the global scheme of adiabatic quantum computation maintains its performance even for strong decoherence. The more efficient local adiabatic computation, however, does not improve scaling of the computation time with the number of qubits n as in the decoherence-free case, although it does provide some “prefactor” improvement. The scaling improvement requires phase coherence throughout the computation, limiting the computation time and the problem size n.

What it claims is that the effect of strong decoherence on AQC is to (generically) square the time it takes an optimal algorithm to operate. This wipes out quadratic speed-ups (like adiabatic Grover search), but implies that exponential gains remain exponential for AQC even with total loss of phase coherence.
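The "squaring" claim can be made concrete with a toy scaling comparison (my illustration of the arithmetic, not a calculation from the paper):

```python
import math

def decohered_runtime(coherent_runtime):
    """Per the post's claim: strong decoherence (generically) squares
    the runtime of an optimal adiabatic algorithm."""
    return coherent_runtime ** 2

# Adiabatic Grover search: coherent runtime ~ sqrt(N); squaring it
# gives ~N, i.e. no better than classical exhaustive search.
N = 10**6
assert abs(decohered_runtime(math.sqrt(N)) - N) < 1e-6

# A hypothetical exponential speedup: coherent runtime n**3 for a
# problem that takes a classical machine ~2**n steps.  Squaring gives
# n**6 -- still polynomial, so the exponential separation survives.
n = 60
assert decohered_runtime(n**3) < 2**n
```

So a quadratic speedup is exactly the break-even case: squaring a sqrt(N) runtime returns it to N, while any polynomial runtime stays polynomial after squaring.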


Have you found any algorithms (other than factoring and everything related to that) with exponential gains?

Sure, the Abrams-Lloyd algorithm (the guts of the quantum chemistry stuff we did with the guys at Berkeley).

Also I should point out that the analysis isn’t directly applicable to the more general quantum annealing algorithms, it’s specific to AQC (where success requires ending up in the ground state).

“implies that exponential gains remain exponential for AQC even with total loss of phase coherence.”

Must say I don’t get that statement at all. Why is the error model considered here the one which is relevant to universal AQC?

To answer Dave’s question, in this work we considered a wide range of classical and quantum noise models, which include a bosonic environment (phonons or electromagnetic noise) and also a spin environment. We also considered the most important type of error in AQC, which is excitation at the energy anticrossing between the lowest two energy levels. Every other type of error in AQC is suppressed by a much larger gap. What we found is that decoherence broadens the resonance region. Since for global AQC only the integral of the transition rate matters, the broadening doesn’t affect the probability of excitation. On the other hand, for local AQC the width of the broadening plays an important role, making it very sensitive to decoherence. Indeed, the computation time in local AQC is limited by the decoherence time the same way as it is in gate model QC. The fact that the Landau-Zener transition (i.e., global AQC) is insensitive to phase coherence has also been discussed by others before us; see, e.g., Saito et al., arXiv:cond-mat/0703596, in which they also consider a completely general noise model.

Let me clarify something very important here, that is the concept of “phase coherence”, which might mean different things to different people. In general, phase coherence means that the relative phases between different components of the wave function are correlated at different times. This therefore depends on the basis you define those components in. We, condensed matter physicists, usually use the energy basis as our default basis; therefore by decoherence we mean losing phase coherence between different energy levels. Quantum information people, as I understand, tend to think in terms of the computational basis. Therefore by decoherence they may mean completely losing superposition in the computational basis. If that is what they mean by phase coherence, then I agree with them that any type of quantum computation, including AQC, is impossible without such coherence. But that is not what we meant by decoherence in our paper.

In AQC, since the wave function has only one component (in the energy basis), i.e., the ground state, the relative phase and phase coherence are automatically irrelevant. Coherence therefore will only matter when the lowest two energy levels get so close to each other that the wave function becomes a (coherent or incoherent) mixture of the two. In that case decoherence does what one expects it to do, i.e., localizing the wave function to one of the two (quasi-classical) branches of energy that meet at the anticrossing. It will therefore become harder and harder for the system to tunnel from one branch to the other. The tunneling rate will be inversely proportional to the decoherence rate.

At the same time, the environment will also broaden the transition region to a width proportional to the decoherence rate. The reason for that is also simple: the system now has an environment to give energy to, or gain energy from, in order to satisfy energy conservation when jumping to the other branch. One can immediately see that these two effects cancel each other once integrated over time. (One is proportional and the other is inversely proportional to the decoherence rate.) As a result, the overall probability of transition remains unaffected by decoherence.
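The cancellation argument can be checked numerically in a toy model. Here I assume (my assumption, not the paper's derivation) a Lorentzian transition rate whose width grows with the decoherence rate while its peak falls as its inverse; the time-integrated rate then comes out independent of the decoherence rate:

```python
import numpy as np

def integrated_transition_probability(gamma, A=1.0):
    """Toy model of the anticrossing: transition rate
    W(t) = A * (gamma/pi) / (t**2 + gamma**2),
    a Lorentzian whose width grows as the decoherence rate gamma
    while its peak falls as 1/gamma."""
    t, dt = np.linspace(-1000.0, 1000.0, 2_000_001, retstep=True)
    W = A * (gamma / np.pi) / (t**2 + gamma**2)
    return W.sum() * dt  # time-integrated transition rate

# Vary the decoherence rate over two orders of magnitude:
probs = [integrated_transition_probability(g) for g in (0.1, 1.0, 10.0)]
# All integrals come out ~A: broadening and peak suppression cancel.
assert max(probs) - min(probs) < 1e-2
```

The width parameter and the Lorentzian line shape are illustrative choices; the point is only that the area under the rate curve, which is what global AQC cares about, does not depend on gamma.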

Thanks for the response Mohammad Amin. The nomenclature issue between fields is one that always causes headaches (and flame wars!)

Anyway, another question: you say “We also considered the most important type of error in AQC, which is excitation at the energy anticrossing between the lowest two energy levels. Every other type of error in AQC is suppressed by a much larger gap.” It’s not clear to me why limiting to a two-level system is correct here. Why can’t there be a large number of nearly degenerate energy levels near the gap energy? Then won’t all of these be relevant?

But to the more interesting subject, which is the universal AQC constructions: you note that “phase coherence is automatically irrelevant” for AQC, which I agree with when AQC is used for adiabatic algorithms (and the final basis is “nice”), but I’m not so sure I agree with this for the universal adiabatic quantum computing constructions.

In particular, consider the situation where you have quenched disorder in your target Hamiltonian, where each individual term in this Hamiltonian is not exactly what you want it to be. If you examine what effect this has on the ground state of the system, you will see that it can correspond to errors in the circuit you are simulating (the ground state is a coherent sum over the history of the circuit; errors like the one I just described are then sometimes translated into errors in the unitary gates you are trying to implement, such that the ground state no longer has a term which is the correct output of the circuit). So while you may be perfectly sure you start and end in the ground state, the problem is that the final state may not correctly implement the quantum circuit you are trying to simulate.

You can also do this without a fixed disordered Hamiltonian, of course, by adding an environment. Then the problem is basically that the final ground state will be energy broadened, and the states corresponding to this energy broadening do not correctly simulate the quantum circuit.

Hi Geordie,

You mention the Abrams-Lloyd algorithm as one that you can get an exponential speedup from; I’ve seen the paper describing the algorithm in the circuit model. This is possibly trivial, but do you know of any papers that explicitly describe the algorithm for an AQC? I.e., what do you define the starting and final Hamiltonians as, and so on?

Thanks,

Joe

Simulated Quantum Computation of Molecular Energies?

Joe: This is an excellent question. As you may know there are proofs of equivalence between AQC and gate model, that give a prescription for constructing a Hamiltonian for AQC given a series of gates using “clock qubits” etc. Unfortunately this construction is not very useful in practice. If you wanted to run a gate model algorithm on an AQC you almost certainly wouldn’t use that construction.

The guts of most gate model algorithms is phase estimation. I don’t recall ever seeing an AQC version of phase estimation that was practical, although it would be very valuable if someone could figure this out.

Your questions are very interesting and relevant, Dave. To answer your first question: if the number of almost degenerate levels close to the ground state at the anticrossing is polynomial in n, then thermalization between those levels will only suppress the ground state probability by a polynomial amount, which in turn only increases the computation time by a polynomial amount. If the number of those levels is exponential, then that will cause a serious problem. In general, if the energy levels have a Gaussian distribution, one would expect only a polynomial number of levels to be packed near the ground state.
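The polynomial-versus-exponential distinction here can be made concrete with a toy Boltzmann estimate (my own sketch, with my own assumed level spacing): if k(n) levels thermalize within one k_BT of the ground state, the ground-state occupation is roughly 1/k(n), so ~k(n) repetitions recover the answer.

```python
import math

def ground_state_occupation(k):
    """Boltzmann occupation of the ground state when k levels are
    spread uniformly over one k_B T above it (energies in units of
    k_B T, so level j has weight exp(-j/k))."""
    Z = sum(math.exp(-j / k) for j in range(k))
    return 1.0 / Z

# Expected number of runs needed to observe the ground state once:
for n in (4, 8, 12, 16):
    reps_poly = 1 / ground_state_occupation(n**2)  # ~0.63 * n**2 runs
    reps_exp = 1 / ground_state_occupation(2**n)   # ~0.63 * 2**n runs
    print(f"n={n}: poly levels ~{reps_poly:.0f} runs, "
          f"exp levels ~{reps_exp:.0f} runs")
```

With polynomially many levels the repetition overhead stays polynomial; with exponentially many (the random-3SAT worry raised below in the thread's sense of many near-degenerate minima) it grows exponentially.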

Your second remark is completely right. Any disorder in the final Hamiltonian can be translated into a gate error and therefore an error in the final answer. But this is equivalent to having gate inaccuracies in the gate model, so the two models are equally susceptible to this error. This is not a type of error that we studied in our work; rather, we showed that global AQC has a weaker coherence requirement than the gate model, while local AQC’s requirement is the same as the gate model’s.

Broadening of the ground state can happen as a result of finite transition probability to other energy levels due to coupling to the environment. If the gap between the ground state and first excited state of the final Hamiltonian is large, especially larger than temperature, then such transitions will be suppressed. Therefore as long as your final ground state correctly represents the problem, you should be able to recover your answer with high fidelity.

Hi Geordie,

Thanks for getting back to me. Given what you say, that the AQC↔circuit QC equivalence paper constructions are “inefficient” for practically implementing phase estimation-based algorithms on AQCs, I assume you guys are working out how to implement Abrams-Lloyd for AQCs (since, based on your marketing material, A-L would be the core of your computational chemistry offerings). Is this true? If so, would you care to mention how far you’ve come along? If not, do you plan on doing this, and if so, when?

I’ve assumed that when you say the constructions are “not very useful in practice”, it’s because they’re cumbersome/inefficient. Is this the case, or is there some other reason why the constructions are not practical?

Thanks,

Joe

Here is a thought I have, though. I read on the internet that to simulate a quantum system (modeling molecules) with good enough precision, you need 100 discrete values per direction for the position of an electron orbiting a nucleus. In all, that is 100^3 = 1,000,000 possible values one electron can have. If there are two hydrogen atoms, the electrons can be in 10^12 configurations; if there are 3 H atoms, 10^18 configurations, and so on. My thought is that no matter how powerful a QC is, it can never simulate even a hydrogen atom with 100% precision.
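For what it's worth, the grid arithmetic in the comment above checks out, but it cuts the other way: 10^6 grid points per electron is only about 20 qubits per electron, so the classical configuration count grows exponentially with the number of electrons while the qubit count grows linearly (a rough sketch; the 100-points-per-axis figure is the commenter's, the rest is my arithmetic):

```python
import math

GRID_PER_AXIS = 100                       # 100 discrete values per direction
points_per_electron = GRID_PER_AXIS ** 3  # 100^3 = 1,000,000 grid points
assert points_per_electron == 10**6

# A quantum register needs only log2 of that many basis states:
qubits_per_electron = math.ceil(math.log2(points_per_electron))  # 20

for electrons in (1, 2, 3):
    classical = points_per_electron ** electrons  # 10^6, 10^12, 10^18
    qubits = qubits_per_electron * electrons      # 20, 40, 60
    print(f"{electrons} electron(s): {classical:.0e} classical "
          f"configurations vs {qubits} qubits")
```

Finite grid resolution limits any simulation, classical or quantum, so "100% precision" is indeed out of reach for both; the quantum advantage is in the scaling of the resources, not in exactness.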

BTW, are our bodies quantum computers?

“the ground state at the anticrossing is polynomial in n”

Definitely if the degeneracy is polynomial, then you should be fine.

I think the problem is that, if I recall correctly, for problems like random instances of 3SAT near the hard/easy phase transition, there exists an exponential number of local minima whose energy is within a polynomial of the energy gap (i.e., energy O(n)). Would this pose a problem?

“But this is equivalent to having gate inaccuracies in the gate model, so the two models are equally susceptible to this error” Indeed (and in, for example, ion traps, these are THE errors which are the worst at the moment), so you can take care of these errors in the standard way (albeit with some issues regarding reusability of ancillas). I suspect the issue here is just nomenclature: even though you retain “global phase coherence” there are still errors to deal with for the universal AQC (there are also problems, I think, with the clock in these constructions).

Dave: In general, if there is an exponentially large number of levels within energy distance k_BT from the ground state at any point of the evolution, that can cause a serious problem. Such problems, of which random 3SAT may be one, are therefore not suitable for AQC.

Regarding your second point, I agree that there can be several sources of error for universal AQC that deserve careful consideration. But not all of those can be categorized as decoherence which was the focus of our paper.

What does “adiabatic” mean here?

please please punch me in the head repeatedly.

Hi Geordie

See http://www.walrusmagazine.com/articles/2007.08.10-quantum-conundrum/

For a discussion in The Walrus about quantum computing (related to an article which features you in the Sept 2007 edition of The Walrus), D-Wave and the nature of science research. The discussion just went live 10 minutes ago.

~Pat, digital editor