A post about QC

A previous commenter has shamed me into posting more about QC. So here is a post about QC.

In the TAQC computational model, a quantum computation provides the user with an answer that is exact if the computer is in its lowest energy state at the end of the calculation. If the computer is not in the lowest energy state, the result we get is an approximate answer; the higher the final energy, the worse the approximation.

In this model of quantum computation, errors tend to push the answer we get away from the exact answer. These errors, which drive the computer into higher energy states, are naturally counteracted by the system’s tendency to reduce its energy.

This tendency to reduce energy includes dissipative processes–ones that release energy into the environment. These processes, helpful for TAQC in that they improve the approximation to the exact answer that the computer provides, are anathema to the gate model of quantum computation.
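To make the idea of energy-ranked answers concrete, here is a minimal sketch in Python. The fields and couplings below are arbitrary numbers invented purely for illustration (they do not correspond to any real hardware or problem); the point is only that the lowest-energy spin configuration plays the role of the exact answer, and higher-energy configurations are progressively worse approximations.

```python
# Toy illustration only: a 4-spin Ising problem with made-up couplings.
from itertools import product

h = [1.0, -0.5, 0.3, -0.2]                      # hypothetical local fields
J = {(0, 1): -1.0, (1, 2): 0.8, (2, 3): -0.6}   # hypothetical pairwise couplings

def ising_energy(s):
    """Energy of a spin configuration s, with each s[i] in {-1, +1}."""
    energy = sum(h[i] * s[i] for i in range(len(s)))
    energy += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return energy

# Rank every configuration by energy: the first entry is the exact answer,
# the rest are approximate answers that get worse as the energy rises.
ranked = sorted(product([-1, 1], repeat=4), key=ising_energy)
for s in ranked[:4]:
    print(s, round(ising_energy(s), 2))
```

In TAQC terms, dissipation nudges the physical system down this ranking, while errors push it up.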

In the gate model there is no analogous notion of approximate answers ranked by the energies of the states the processor encodes. The exact solution to a problem could be any state, even the highest-energy one. Because there is no concept of an approximate solution, errors in this model have to be removed outright: any deviation from the desired sequence of gates can leave the computer in a state very far from the desired one.

This unfortunate situation, that the gate model requires error correction in order to work at all, has led to a commonly repeated and very misleading QC dogma. Here it is in a recent preprint from Laforest et al., quant-ph/0610038:

QEC [quantum error correction] is a requirement for scalable quantum information processing (QIP)

While this is technically true, what the authors are implying is that active quantum error correction, a complex and ridiculous process that will probably never be used outside a research lab, is required for quantum computation. It isn’t, which is a good thing. In the TAQC model, quantum error correction occurs passively; nature itself shepherds the system towards better and better approximate answers.

I would even go so far as to bet that no QC that requires active error correction will EVER perform a computation beyond the scope of conventional systems; having architectures and systems that are passively protected against errors is Rule #1 for anyone serious about building large-scale quantum computers.

6 thoughts on “A post about QC”

  1. This post sums it up nicely, Geordie. I look forward to a sea of qubits doing useful work; my kudos.

    … now if I can just finish this isomorphic translation of discrete logarithm to maximum clique. (When do you expect > 100k qubits? 🙂)

  2. Fluctuations (=> errors) cause a drift of TAQC away from the lowest energy level, but according to you, dissipation causes TAQC to converge to the lowest energy level. If the fluctuation-dissipation theorem applies, then fluctuation and dissipation are proportional to each other. If, according to you, one causes convergence and the other causes divergence of TAQC, what happens?

  3. All of the important dynamics occur at energy level anticrossings in TAQC (and AQC). At these anticrossings there are two effects which dominate.

    One is Landau-Zener (LZ) transitions in an isolated quantum system, which are the ones AQC concerns itself with. If the system’s externally adjusted parameters are changed slowly enough, the adiabatic theorem suggests that the system always remains in its ground state (no LZ transitions). As the sweep speed is increased, the chance of making LZ transitions into “incorrect” states rises sharply; it is only suppressed exponentially for slow sweeps.

    The other effect requires the explicit introduction of an environment. Including an environment can qualitatively change the physics of the system. In the paper I referenced in the post, we treat the case of a thermal ohmic oscillator bath. In the presence of this type of environment, an effect competing with LZ transitions arises, which can be thought of as thermalization of the system near the anticrossing. This thermalization leads to a mixture of the two energy eigenstates involved in the anticrossing.

    The timescale for thermalization is much shorter than the adiabatic timescale. This means that we can sweep much faster than AQC allows, as long as it’s OK to be in the ground state only 1/2 of the time (for one anticrossing). Formally, in the adiabatic Grover search example treated in the paper, the presence of the environment reduces the sweep time from 1/g^2 to 1/g, where g is the minimum gap at the anticrossing. (A rough numerical illustration of the Landau-Zener side of this trade-off appears after the comments.)

  4. @VERNONA: I can see that you get it…we managed to map factoring onto clique! In terms of numbers of qubits, I think we can get to somewhere around a million basically with the approach we’re taking now. It will take some time but there don’t seem to be any fundamental “in principle” type hurdles.

    Also remember that we don’t have to put the whole problem in hardware; we have a bunch of tricks to decompose big problems into bite-size chunks for whatever hardware we’ve got at the time (basically trading time for memory is one way to think of it).

    So if you’ve got a billion variable discrete log–>max clique we might be able to handle it at some point.

  5. Suppose E0, E1 are the two lowest energy levels. It seems to me that once the system thermalizes at a crossing, it goes from a pure state to a mixture. After the crossing, it’s in a mixture like 1/2( |E0><E0| + |E1><E1| ). When you measure the energy of this mixture, you get (E0 + E1)/2 all the time. So you’ve lost precision there. Am I right? I’m no expert. I’m just trying to understand.

    It also seems to me that with the million qubits that you envision, the energy level structure will be much more complicated (with numerous degeneracies and near degeneracies) than your simple picture of two isolated levels.

  6. You are right that after the anticrossing you get the mixed state. Remember that, because the Hamiltonian is explicitly time-dependent, the states are also time-dependent.

    When you measure the state after the full evolution is complete in your example, you measure the state |E1> 1/2 the time and |E0> 1/2 the time (with the caveat that the states are time-dependent–think of the two states as branches, one branch right and the other wrong). These states are by construction diagonal when we measure them–just think of them as bit strings. For example, |E0> might be |0000> and |E1> might be |1111>. Each of these is a possible solution to a problem. We get each 1/2 of the time. One is right, the other wrong. We can check a possible solution by calculating the energy of a given bit string and accepting or rejecting it based on some user threshold (a small worked sketch of this check appears after the comments).

    Hope this helps.
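For readers who want numbers to go with the discussion in comment 3, the snippet below evaluates the textbook two-level Landau-Zener formula, P = exp(-π g^2 / (2 ħ v)), for the probability of leaving the ground state at a single anticrossing with minimum gap g swept at rate v. The parameter values (and ħ = 1) are arbitrary placeholders, and this closed-system formula says nothing about the open-system thermalization effect described in that comment; it only shows how strongly the transition probability depends on gap and sweep rate.

```python
# Textbook two-level Landau-Zener estimate; parameters are illustrative only.
import math

HBAR = 1.0  # work in units where hbar = 1

def p_landau_zener(gap, sweep_rate):
    """Probability of a transition out of the ground state at one anticrossing."""
    return math.exp(-math.pi * gap**2 / (2.0 * HBAR * sweep_rate))

gap = 0.01  # minimum gap g at the anticrossing (arbitrary units)
for v in (1e-6, 1e-5, 1e-4, 1e-3):
    # Slow sweeps (small v) suppress transitions exponentially; fast sweeps do not.
    print(f"sweep rate {v:.0e}: P(leave ground state) = {p_landau_zener(gap, v):.3f}")
```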
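And to make the accept/reject step in the last reply concrete: the sketch below computes the energy of each measured bit string under a classically known problem Hamiltonian and keeps only the strings whose energy clears a user-chosen threshold. The QUBO weights, candidate strings, and threshold are made-up placeholders, not a real problem instance or any particular hardware interface.

```python
# Hedged sketch of the "check the bit string's energy against a threshold" step.

# Hypothetical QUBO: Q[(i, j)] multiplies bits[i] * bits[j]; diagonal entries act as linear terms.
Q = {(0, 0): 1.0, (1, 1): 1.0, (2, 2): 1.0, (3, 3): 1.0, (0, 1): -0.5}

def qubo_energy(bits):
    """Energy of a 0/1 bit string under the QUBO defined by Q."""
    return sum(w * bits[i] * bits[j] for (i, j), w in Q.items())

def accept(bits, threshold):
    """Keep a measurement only if its energy is at or below the threshold."""
    return qubo_energy(bits) <= threshold

# Stand-ins for the two branches in the reply above: |0000> (right) and |1111> (wrong),
# each observed roughly half the time over repeated runs.
samples = [(0, 0, 0, 0), (1, 1, 1, 1)] * 5
kept = [s for s in samples if accept(s, threshold=0.5)]
print(kept)  # only the low-energy branch survives the check
```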
