Decoherence: myths and realities

Anyone who has heard of quantum computation has probably also heard of decoherence as its worst enemy. While this is true, many statements commonly made about decoherence are not. For example, we often hear: decoherence makes a quantum system classical, or superposition and entanglement are destroyed after the decoherence time, or any quantum computation should be performed within the decoherence time.

These statements are not generally true. They may hold under some circumstances, but they are certainly not universal.

What is decoherence?

To begin with, let’s be clear about what we really mean by decoherence. What makes quantum mechanics so distinct from classical physics is a phenomenon called superposition. Let us take two quantum states |0\rangle and |1\rangle to represent the logical states 0 and 1 of a quantum bit (qubit), respectively. Quantum physics allows a state like a|0\rangle+b|1\rangle (a, b being complex numbers with |a|^2+|b|^2=1), called a superposition state. If this state is measured, the outcome will be either 0 or 1, with probabilities |a|^2 and |b|^2, respectively. But this is not the same as having a probabilistic mixture of the two states. Before the measurement, the qubit behaves as if it were in both states at the same time, which is not possible classically. Indeed, the power of quantum computation comes from exactly this classically impossible effect, i.e., superposition.
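
For concreteness, here is a minimal numerical sketch (in Python with numpy; the amplitudes a and b are arbitrary example values):

    # A superposition a|0> + b|1> and its measurement statistics.
    import numpy as np

    a, b = 0.6, 0.8j                 # example amplitudes with |a|^2 + |b|^2 = 1
    psi = np.array([a, b])           # state vector in the computation basis {|0>, |1>}

    p0, p1 = abs(psi[0])**2, abs(psi[1])**2
    print(p0, p1)                    # 0.36 and 0.64: probabilities of measuring 0 or 1

    # repeating the measurement many times reproduces these probabilities
    outcomes = np.random.choice([0, 1], size=100_000, p=[p0, p1])
    print(outcomes.mean())           # close to 0.64, the probability of outcome 1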

Complex numbers a and b have magnitudes and phases. The magnitudes are related to the probabilities, as mentioned above. The phases, on the other hand, become important when there is interference. As a simple example, consider a spin-1/2 particle with |0\rangle and |1\rangle representing its spin-up and spin-down states in the z-direction. Suppose the spin is prepared in the state |\psi\rangle = (|0\rangle+e^{i\theta}|1\rangle)/\sqrt{2}. This means a measurement along z would find the spin “up” or “down”, each with probability 1/2. Classically, if we mix “up” and “down” states in the z-direction with equal probability and then measure the spin in the x-direction, we should get S_x=0 on average. Quantum mechanically, however,

S_x = \langle \psi|\sigma_x|\psi \rangle = \cos \theta

where \sigma_x is a Pauli matrix. (Here we have dropped the coefficient \hbar/2 for simplicity.) Suppose the particle is prepared in the state |\psi\rangle, and after a delay time t, its spin is measured in the x-direction. The outcome of this measurement is either +1 or -1. If such a process is repeated many times, the above equation predicts a nonzero average, S_x \ne 0, as long as \theta \ne (2n{+}1)\pi/2. Such an outcome, which is not possible classically, is a result of the superposition effect.
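
As a quick numerical check of this formula (a sketch in Python with numpy; the values of \theta are arbitrary examples, and \hbar/2 is dropped as in the text):

    # Verify S_x = <psi|sigma_x|psi> = cos(theta) for |psi> = (|0> + e^{i theta}|1>)/sqrt(2).
    import numpy as np

    sigma_x = np.array([[0, 1], [1, 0]])

    def S_x(theta):
        psi = np.array([1, np.exp(1j * theta)]) / np.sqrt(2)
        return np.real(np.conj(psi) @ sigma_x @ psi)

    for theta in [0.0, np.pi / 3, np.pi / 2, np.pi]:
        print(S_x(theta), np.cos(theta))   # the two numbers agree for every theta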

Now suppose the environment adds a random amount between 0 and 2\pi to \theta, each time this process is repeated. After averaging over \theta, we find S_x=\overline{\cos \theta}=0, which is the same as the classical outcome. Randomization of the phase, therefore, washes out the effect of superposition. Such an effect is called decoherence. Of course, if one immediately measures the state after preparation, before the environment has the chance to act on it, the outcome would be S_x \ne 0. The amount of randomness of \theta, therefore, depends on the time between the preparation and measurement. The time after which \theta is completely random is called decoherence time. After the decoherence time, the superposition is destroyed and the system will be in a mixed state. (In a more rigorous treatment, the environment is treated quantum mechanically and the decay time of the off-diagonal elements of the reduced density matrix gives the decoherence time.)
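
The same calculation with the phase randomized by the environment (again a sketch; the sample size and seed are arbitrary):

    # Dephasing: average S_x = cos(theta) over a completely randomized phase.
    import numpy as np

    rng = np.random.default_rng(0)
    thetas = rng.uniform(0.0, 2.0 * np.pi, size=200_000)

    # Each run still involves a superposition, but the ensemble average of S_x
    # is washed out to ~0, i.e., the classical result.
    print(np.cos(thetas).mean())   # close to 0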

So far our argument seems to support what we were trying to disprove. But there is more to it. Quantum states are mathematically considered to be vectors in a Hilbert space. To represent a vector, one needs to choose a set of orthogonal basis vectors. But such a choice is arbitrary, as long as it is a complete set. This is like choosing any three arbitrary orthogonal directions in a Euclidean space to be the x, y, and z axes. A superposition state in one basis may not be a superposition in another basis. For example, a|0\rangle+b|1\rangle is a superposition in the computation basis (made of |0\rangle and |1\rangle), but it is not a superposition in a basis made of the orthogonal states |x\rangle = a|0\rangle+b|1\rangle and |y\rangle = b^*|0\rangle-a^*|1\rangle. This makes the above definition of decoherence ambiguous, because it is clearly basis dependent. In other words, if decoherence destroys superposition in one basis, it may not destroy it in another basis. For example, suppose the environment destroys superposition only in the above xy basis. Any superposition state like \alpha |x\rangle + \beta |y\rangle will therefore be destroyed after the decoherence time, and the system will end up in a classical mixture of these states. State |x\rangle itself is not a superposition in this basis and therefore will not be affected by such decoherence. It is, however, a superposition in the computation basis by definition. That superposition in the computation basis, therefore, is unaffected by this decoherence.
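
Here is a small sketch of this basis dependence (Python with numpy; a and b are example amplitudes, and the environment is assumed to erase off-diagonal elements only in the {|x\rangle, |y\rangle} basis):

    # Dephasing in the {|x>, |y>} basis leaves coherence in the computation basis intact.
    import numpy as np

    a, b = 0.6, 0.8
    x = np.array([a, b])                      # |x> = a|0> + b|1>
    y = np.array([np.conj(b), -np.conj(a)])   # |y> = b*|0> - a*|1>
    U = np.column_stack([x, y])               # columns: the {|x>, |y>} basis vectors

    rho = np.outer(x, np.conj(x))             # start in the pure state |x><x|

    # rotate to the {|x>, |y>} basis, drop the off-diagonal elements there, rotate back
    rho_xy = U.conj().T @ rho @ U
    rho_after = U @ np.diag(np.diag(rho_xy)) @ U.conj().T

    print(rho_after[0, 1])   # a*conj(b) = 0.48: the computation-basis coherence survives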

It is therefore not correct to assume all superpositions are destroyed after the decoherence time. One needs to clearly specify in what basis the environment acts and what its influence is in other bases. We usually care about superposition in the computation basis, where useful interferences happen. Below, a few simple examples are provided in more detail. Some understanding of density matrix theory is required to follow them.

Single qubit example

Consider a qubit with Hamiltonian

H = - {1\over 2}\Delta \sigma_x + H_{env}

where \sigma_{x,z} are Pauli matrices and H_{env} is the coupling to the environment. States |0\rangle and |1\rangle are taken to be eigenstates of \sigma_z, and \Delta is the tunneling amplitude between them. The Hamiltonian has two eigenstates |\pm\rangle = (|0\rangle{\pm}|1\rangle)/\sqrt{2}, with eigenvalues E_\pm = \mp \Delta/2, respectively.
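
A quick check of this diagonalization (Python with numpy; the environment term H_{env} is dropped and \Delta = 1 is an arbitrary example value):

    # H = -(Delta/2) sigma_x has eigenstates (|0> ± |1>)/sqrt(2) with energies ∓ Delta/2.
    import numpy as np

    Delta = 1.0
    sigma_x = np.array([[0, 1], [1, 0]], dtype=float)
    H = -0.5 * Delta * sigma_x

    energies, states = np.linalg.eigh(H)   # eigenvalues in ascending order
    print(energies)        # [-0.5  0.5], i.e. E_+ = -Delta/2 and E_- = +Delta/2
    print(states.T)        # rows ≈ (|0> + |1>)/sqrt(2) and (|0> - |1>)/sqrt(2), up to sign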

When the coupling to the environment is weak, two things happen. First, the environment randomizes the relative phase between the |+\rangle and |-\rangle states in the energy basis. This is the decoherence effect that we discussed earlier. In addition, the environment causes thermal transitions between these energy levels. This process, called relaxation, moves the system towards thermal equilibrium with the environment. After some time, the system will be in the classical mixture:

\rho = P_+|+\rangle\langle+| + P_-|-\rangle\langle-|,

where \rho is the reduced density matrix of the system and P_\pm are the equilibrium probabilities given by the Boltzmann distribution. Clearly \rho has no off-diagonal element in the energy basis, so there is no superposition there, as expected. However, it has an off-diagonal element in the computation basis:

\langle 0|\rho|1\rangle = (P_+-P_-)/2

Therefore, there is a residual superposition in the computation basis, even in equilibrium, i.e., a long time past the decoherence time. In other words, decoherence does not destroy the superposition in the computation basis. The only way this superposition is destroyed is to have P_+=P_-=1/2, which is achieved at very high temperatures, T\gg \Delta. This agrees with the general expectation that quantum systems behave classically at high T.
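
The residual coherence can be seen directly in a few lines (a sketch in Python with numpy; \Delta and T are example values and k_B = 1):

    # Equilibrium density matrix rho = P_+|+><+| + P_-|-><-| and its computation-basis coherence.
    import numpy as np

    Delta, T = 1.0, 0.5
    plus  = np.array([1.0, 1.0]) / np.sqrt(2)
    minus = np.array([1.0, -1.0]) / np.sqrt(2)

    # Boltzmann probabilities for E_+ = -Delta/2 and E_- = +Delta/2
    w = np.array([np.exp(Delta / (2 * T)), np.exp(-Delta / (2 * T))])
    P_plus, P_minus = w / w.sum()

    rho = P_plus * np.outer(plus, plus) + P_minus * np.outer(minus, minus)

    print(rho[0, 1], (P_plus - P_minus) / 2)   # both ≈ 0.38: nonzero coherence in equilibrium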

The above argument, however, is valid only in the weak coupling limit. When the coupling to the environment is so strong that decoherence rate \gamma becomes larger than the tunneling amplitude \Delta, the qubit will lose its phase coherence before it can tunnel. This means coherent tunneling and therefore superposition is impossible even at T=0. The qubit, however, will still tunnel between the two logical states, but incoherently. In the limit of \gamma \gg \Delta, the incoherent tunneling rate is \Gamma \approx \Delta^2/\gamma. A large decoherence rate, therefore, results in a small incoherent tunneling rate, which means the system gets stuck in the classical (logical) states, as expected.

Two qubit example

One can easily generalize the above argument to more than one qubit. For example, consider two qubits strongly coupled to each other ferromagnetically. In that case, the two lowest energy levels will approximately be the entangled states |\pm\rangle = (|00\rangle\pm|11 \rangle)/\sqrt{2}. One may ask: is it possible for the qubits to stay entangled after the decoherence time? In equilibrium, the density matrix will approximately be

\rho = P_+|+\rangle\langle+| + P_-|-\rangle\langle-|.

This assumes that the two upper excited states are far away in energy and therefore not thermally occupied. One can now calculate the Wootters concurrence (W. K. Wootters, Phys. Rev. Lett. 80, 2245 (1998)) as a measure of entanglement between the qubits in equilibrium:

C(\rho) = P_+-P_-

Clearly, C(\rho) is zero only when P_+=P_-. Therefore entanglement can exist in equilibrium, unless P_+=P_-, which can happen at large T. Again, this is a weak coupling argument. In the strong coupling limit, the qubits will not be entangled even at T=0, by the same argument as above.
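
A sketch of this calculation (Python with numpy; P_+ is an example value, and the concurrence formula follows Wootters, Phys. Rev. Lett. 80, 2245 (1998)):

    # Concurrence of rho = P_+|+><+| + P_-|-><-| with |±> = (|00> ± |11>)/sqrt(2).
    import numpy as np

    def concurrence(rho):
        sy = np.array([[0, -1j], [1j, 0]])
        YY = np.kron(sy, sy)
        rho_tilde = YY @ rho.conj() @ YY                      # "spin-flipped" density matrix
        lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    P_plus, P_minus = 0.8, 0.2
    bell_plus  = np.array([1, 0, 0, 1]) / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
    bell_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)         # (|00> - |11>)/sqrt(2)
    rho = P_plus * np.outer(bell_plus, bell_plus) + P_minus * np.outer(bell_minus, bell_minus)

    print(concurrence(rho), P_plus - P_minus)                 # both ≈ 0.6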

Summary

Decoherence is a process through which quantum superposition in a system is washed out due to coupling to an environment. It is a basis-dependent phenomenon; decoherence in one basis therefore does not necessarily destroy superposition in another basis. In the weak coupling limit, when the Hamiltonian of the system is dominant and the environment is a perturbation, decoherence happens in the energy basis. In the strong coupling limit, decoherence may destroy superposition in other bases.

Decoherence can remove superposition in the computation basis under either of two conditions:

1) If the temperature is much larger than the energy gap.
2) If the decoherence rate is much larger than the tunneling amplitude.

In the absence of these two conditions, the system can be in a superposition or entangled state even in equilibrium. Clearly, such a quantum system does not become classical after the decoherence time.