This article contains some very interesting observations about what generically makes certain optimization problem instances hard for AQC, and what can be done about this. Here is the abstract:

First order quantum phase transition is believed to create a gap in adiabatic quantum computation that is exponentially dependent on system parameters. Here, we investigate first order quantum phase transition by relating it to the properties of the local minima in the problem Hamiltonian. We use perturbation theory to predict the position and the size of the minimum spectral gap and show agreement with numerical calculations for an example of weighted maximum independent set problem. By tuning the parameters of the studied example, we demonstrate a controllable variation of the minimum gap size by several orders of magnitude.

[arxiv: 0904.1387]


Neat stuff. I’m always curious, though, what effect adding an arbitrary number of global-minimum degeneracies or near-degeneracies would have. (Forgive me if it’s something obvious to a quantum physicist; as you know, I’m not one.) It’s quite easy to add either, but every paper I recall assumes no degeneracy, and this one talks mostly about the case where the first excited state is a local minimum far from the global minimum.

Would it help to add many “fake” states that are single bit-flips from the global minimum and that are lower than any local minima? How about adding “fake” states that are local minima two bit-flips away from the global minimum and have lower energy than any other local minima? Either of these can be made such that the global minimum is easy to find if the system ends up in one of the fake states. They both take at most twice as many logical qubits as before, and I think that it could be reduced to a constant number of additional logical qubits (though with fewer fake states).

Neil,

“Would it help to add many “fake” states that are single bit-flips from the global minimum and that are lower than any local minima?”

So you want to compensate for… what would you call it, entropic clustering?… of states around local minima by adding “fake states” around the global minimum? How does one do something like this ab initio?

Perhaps it would be possible to see whether you’re stuck in such a local minimum by analyzing the outputs of repeated experimental runs on an actual AQC? I wonder if you could then recast the formulation of the problem you’re trying to solve a few times, so you have a chance of finding a unique global minimum you can be confident about?

I’m probably speaking nonsense. 🙂

I think there are many ways you can use these results to create better algorithms. One of the results that was really striking to me was Fig. 3b. This shows that changing the w_L parameter (the weights on the 9 outer vertices) from 1.9 to 1.5 changes the minimum gap g_m by about 6 orders of magnitude, and therefore the run time of the algorithm t_f ~ 1/g_m^2 by 12 orders of magnitude! This point is something that Vicky has been thinking about for a while: even though two different problem Hamiltonians can have the same ground states, that definitely does not mean that their minimum gaps (and therefore run times in an adiabatic algorithm) are the same. Even parameter strength variations (like the w_L example) matter a lot.
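The arithmetic behind that claim is just the adiabatic run-time scaling; here is a quick sketch (the specific gap values below are made up for illustration, only their ratio matters):

```python
# Adiabatic theorem: run time scales as t_f ~ 1/g_m^2, so any change
# in the minimum gap g_m enters the run time squared.
def runtime_ratio(g_old, g_new):
    """Ratio t_f(g_new) / t_f(g_old) for minimum gaps g_old, g_new."""
    return (g_old / g_new) ** 2

# Hypothetical gaps six orders of magnitude apart, as in the
# w_L = 1.9 vs. w_L = 1.5 comparison described above.
g_old, g_new = 1e-2, 1e-8
print(runtime_ratio(g_old, g_new))  # ~1e12: twelve orders of magnitude
```

Six orders of magnitude in the gap becoming twelve in the run time is just the square in the scaling relation.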

@Robert: The easiest way to add a degeneracy on the global minimum is to add a degeneracy on every state, by adding one more bit that doesn’t really do anything, i.e. it can be 0 or 1 without changing the energy. You can make it a near-degeneracy by very slightly favouring it being 0. Intuitively, though, it seems like that really shouldn’t affect the outcome.
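A minimal sketch of that dummy-bit trick on a toy landscape (the 3-bit energy function below is entirely made up; only the state counting matters):

```python
from itertools import product

def energy(bits):
    """Toy landscape (hypothetical): unique ground state at (1, 0, 1)."""
    return 0.0 if tuple(bits) == (1, 0, 1) else 1.0 + 0.1 * sum(bits)

def with_dummy(bits, eps=0.0):
    """Same landscape plus one decoupled bit d; eps > 0 slightly
    favours d = 0, turning the degeneracy into a near-degeneracy."""
    *core, d = bits
    return energy(core) + eps * d

def ground_states(f, n):
    """Enumerate all n-bit states and return those at minimum energy."""
    states = list(product((0, 1), repeat=n))
    e_min = min(f(s) for s in states)
    return [s for s in states if f(s) == e_min]

print(len(ground_states(energy, 3)))      # 1: unique ground state
print(len(ground_states(with_dummy, 4)))  # 2: dummy bit doubles it
# A small bias splits the degeneracy again:
print(len(ground_states(lambda s: with_dummy(s, eps=0.01), 4)))  # 1
```

Because the extra bit couples to nothing, it doubles the degeneracy of *every* level, which is why intuitively it shouldn’t change the relative physics.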

Alternatively, you can add a bit B that acts exactly like a bit A and enforce that they can’t both be 1, but that only works if A is 1 in the original global minimum. So, also add a bit C that acts exactly like bit A but enforce that A and C can’t both be 0. This leaves the possibilities ABC = 001, 011, 100, or 101. If A was 0 in the global minimum, 001 and 100 both have the same energy in the new system, so the new global minimum is degenerate. If A was 1 in the global minimum, 011 and 101 both have the same energy in the new system, so the new global minimum is degenerate.

To make 2^n degeneracies, add 2n bits that do the same trick for every original bit. Just skew it a bit to make them near-degeneracies, in which case the ones that don’t end up being the global minimum would be local minima just a few bit flips away from the global minimum, with energy just slightly higher. Plus, if you end up in any of them, you can still find the original global minimum in O(n) time. This seems a bit more likely to have an effect on the system, because the new bits are strongly related to the old bits. It may just do nothing, of course, but I haven’t seen anything looking at the possibility.
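The constraint part of that construction can be checked by brute force; a sketch, assuming the two constraints are enforced by a penalty term with a weight P larger than any other energy scale in the problem (P and the penalty form below are my own illustration, not from the comment):

```python
from itertools import product

P = 10.0  # hypothetical penalty weight; must dominate all other energies

def penalty(a, b, c):
    """Penalty enforcing: A and B not both 1; A and C not both 0."""
    return P * (a * b + (1 - a) * (1 - c))

# Enumerate the zero-penalty combinations of (A, B, C).
allowed = [abc for abc in product((0, 1), repeat=3) if penalty(*abc) == 0]
print(allowed)  # [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 0, 1)]
```

The four surviving combinations are exactly the ABC = 001, 011, 100, 101 listed above: two for A = 0 and two for A = 1, which is where the factor-of-two degeneracy per original bit comes from.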

@Geordie: It’s certainly a very interesting result, and quantitatively surprising, though I don’t find it too qualitatively surprising. I’d imagine that a very similar thing happens with simulated annealing, though you’d need a much larger system to test it out. It’s definitely good to have something formally shown about the quantum case, though, especially for a graph problem that you could experimentally test with the 128-qubit chip. 🙂

Neil,

Wouldn’t your procedure have the same effect on local minima as on the global minimum?

Also, here’s a naive question: for a very large problem instance (solved on a >>128-qubit chip, for example), how does one have confidence that they’ve found a global minimum?

You’re correct; it would have exactly the same effect on local minima as global minima, and even on non-minima. However, if we’re only looking at the bottom few states, this makes it such that we have 2^n states at the bottom that give us the same answer. Although, it might just end up like the result shown in http://arxiv.org/abs/cond-mat/0703085 , in which case the extra states only decrease the probability of success, hehe. (Figure 1 there shows something like what I’m suggesting.)

For the second question, for some problems, you can easily check whether the answer you got is the right one, but for optimization-type problems, you can usually only get an idea of how good the answer is, not whether it’s the optimal one. However, because you can find a score for the answer, you can solve the same problem several times and pick the answer with the best score.

Supposing that you only have a 5% chance of getting a “good enough” answer (you could define “good enough”=”optimal”) and different runs of the problem are independent, if you run it 100 times, you’ve got a 99.4% chance that at least one of them was “good enough”, and you can find the best out of the 100 easily. (1 – (1-0.05)^100 = 0.994)
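The repeated-runs calculation above in a couple of lines (the function name and the 5%/100-run numbers are just the example being discussed):

```python
def success_prob(p_single, runs):
    """P(at least one 'good enough' answer over independent runs),
    i.e. the complement of every run failing."""
    return 1.0 - (1.0 - p_single) ** runs

print(round(success_prob(0.05, 100), 3))  # 0.994
```

Note this hinges entirely on the independence assumption; if successive runs are correlated (e.g. the machine keeps falling into the same local minimum), the boost from repetition is much weaker.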

Neil,

“Supposing that you only have a 5% chance of getting a “good enough” answer (you could define “good enough”=”optimal”) and different runs of the problem are independent…”

The ‘independence’ part is what I’m stuck on… it’s unclear to me how to reformulate a given optimization problem so that you can, say, avoid falling into some broad local minimum. For Monte Carlo MD simulations of protein folding, one would just use a large set of different initial backbone/AA configurations (plus coarse-graining/thermal-cycling/etc. tricks). I’m unsure how this sort of strategy would map to runs on an AQC.

“You’re correct; it would have exactly the same effect on local minima as global minima, and even on non-minima. However, if we’re only looking at the bottom few states, this makes it such that we have 2^n states at the bottom that give us the same answer.”

Using the protein folding analogy again, you can get into trouble if:

(1) You have minima that sit in deep wells on the energy landscape (potentially close to the global MFE).

(2) The wells for these minima are very broad and/or there are a large number of minima. This leads to “entropic clusters” that can frustrate folding both in silico and in vivo.

It seems like a crapshoot whether adding extra states or conformational freedom/etc. would make the probability of finding the global MFE any better. On the upside, I wonder if there’s a way to do this so that you can achieve statistical independence for different AQC runs?