The short answer is no.

The long answer is that there are many different entanglement monotones for many-qubit systems, but that doesn’t really help much, because they all have different operational interpretations. There’s a lot of great work out there on characterizations of multi-qubit states; that is, there are a lot of papers that seek to put a number to the structural properties of a quantum state. But as far as anyone can see at the moment there is no magic function that perfectly describes the “entanglement” in a given quantum state. It isn’t even clear whether one can find such a function.

Mick: Thanks for the reply!

Is there a sense in which the statement “quantum computers obtain their scaling advantages from entanglement” can be quantified using known entanglement measures, even given the different operational interpretations?

For example could the scaling of a quantum algorithm be ascertained given the starting point that 2-qubit entanglement is allowed, but the restriction that for some/all of the k>2 entanglement monotones the entanglement is zero?

I agree with Mick.

In my opinion it’s not possible to characterise the entanglement of a many-qubit system with a single number. The reason is that with 3 or more qubits there are inequivalent entangled states (i.e. it’s not possible to convert one state into another by stochastic local operations and classical communication, SLOCC, with any reasonable probability). This is the case for the 3-qubit GHZ and W states (arXiv:quant-ph/0005115).

The difference between those states is that for the GHZ state the entanglement is maximal between all three qubits, while in W the entanglement is concentrated between the pairs of qubits. For this reason W’s entanglement is more robust against particle loss. So we can’t say W is more (or less) entangled than GHZ; they are entangled in different ways.
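The particle-loss difference can be illustrated numerically (a sketch using numpy; the `concurrence` and `reduced_pair` helper names are my own, computing Wootters’ concurrence of the two-qubit state left after tracing out one qubit):

```python
import numpy as np

def concurrence(rho):
    # Wootters' concurrence for a two-qubit density matrix
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced_pair(psi):
    # trace out the last qubit of a 3-qubit pure state
    m = psi.reshape(4, 2)
    return m @ m.conj().T

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)   # (|000> + |111>)/sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)  # (|001> + |010> + |100>)/sqrt(3)

print(concurrence(reduced_pair(ghz)))  # ~0: lose one qubit and GHZ's entanglement vanishes
print(concurrence(reduced_pair(w)))    # ~0.667: W's remaining pair stays entangled
```

This is exactly the sense in which the two states are “differently entangled”: the same measure, applied after a particle loss, gives 0 for GHZ and 2/3 for W.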

There also exist several criteria for trying to determine whether a state is entangled or not (separability criteria). The best known is the positive partial transpose (PPT) criterion and its generalisations. For an effective way to compute this criterion, see: Distinguishing separable and entangled states, arXiv:quant-ph/0112007.
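For two qubits the PPT criterion is easy to compute directly (a numpy sketch; `ppt_min_eig` is an illustrative name, and the partial transpose is taken over the second qubit):

```python
import numpy as np

def ppt_min_eig(rho, dA=2, dB=2):
    # Partial transpose on subsystem B, then return the smallest eigenvalue.
    # A negative eigenvalue certifies entanglement; for 2x2 (and 2x3)
    # systems PPT is also sufficient for separability.
    rt = rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    return np.linalg.eigvalsh(rt).min()

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
prod = np.zeros(4); prod[0] = 1.0                       # |00>

print(ppt_min_eig(np.outer(bell, bell)))  # ~ -0.5: entangled
print(ppt_min_eig(np.outer(prod, prod)))  # 0.0: separable
```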

Thanks renan! I’ll take a look at quant-ph/0112007.

Hi Geordie,

To attempt to answer your question (“could the scaling of a quantum algorithm be ascertained given the starting point that 2-qubit entanglement is allowed, but the restriction that for some/all of the k>2 entanglement monotones the entanglement is zero?”), consider what this would mean.

First off, we could have at most entangled pairs and single qubits, with no entangling operations between pairs allowed. This would restrict the algorithm to a tensor product of small Hilbert spaces, which would in turn make it efficiently simulable.

This means that if all other (possible) entanglement monotones for k>2 are zero, then the system is little better than a classical computer.

It’s also probably worth pointing out that you can use concurrence to measure entanglement between any two qubits in the system, since it is defined for mixed states. Obviously, this doesn’t completely characterize the entanglement, but it should be a useful tool for detecting some of it, if any really exists in the system.

You can pick states that circumvent this, but it seems unlikely that at all points in the course of the algorithm you would remain in this special subspace.

Hi Joe,

I’ve always found this subject very confusing. I have two questions:

Call E(A,B) some measure of entanglement between the states of qubits A and B. If E(A,B) > 0 and E(B,C) > 0, is it possible that for some 3-qubit entanglement measure E'(A,B,C) = 0? If this is the case, does it still mean you can restrict to tensor products of small Hilbert spaces, i.e. is the fact that E'(A,B,C)=0 a definition that means the Hilbert space can be chopped up, or are they potentially separate issues?

Just because something can be efficiently simulated doesn’t imply that there isn’t a polynomial speed-up lurking somewhere. Where all the fun lies for me is the “then the system is little better than a classical computer” part. Is it? Is there some result that shows (maybe for some toy problem) that you could get even a small scaling advantage over a classical computer with very restricted access to entangled states?

Also, on the very much related issue of mixed states and concurrence: I think that concurrence isn’t a good way to understand AQC. The reason is that near anticrossings, if the system can thermalize (which for hard problems is likely), you get a 50-50 mixture of two eigenstates, both of which are entangled. So if you happen to be in the ground state you are definitely using some sort of entanglement to aid computation (because the state trajectory travels through entangled states), but the concurrence tells you that the entanglement of this type of mixed state is zero.

I guess the point is that (at least for AQC) having your eigenstates be entangled is the important point–not the value of a function defined over a mixed state.
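A toy version of this anticrossing scenario can be checked directly (a numpy sketch; the `concurrence` helper implements Wootters’ formula and is my own naming): a 50-50 mixture of the Bell states (|00⟩ ± |11⟩)/√2 equals ½(|00⟩⟨00| + |11⟩⟨11|), so each eigenstate is maximally entangled while the mixture has zero concurrence.

```python
import numpy as np

def concurrence(rho):
    # Wootters' concurrence for a two-qubit density matrix
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

phi_plus  = np.array([1, 0, 0,  1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)   # (|00> - |11>)/sqrt(2)
rho_mix = 0.5 * np.outer(phi_plus, phi_plus) + 0.5 * np.outer(phi_minus, phi_minus)

print(concurrence(np.outer(phi_plus, phi_plus)))  # ~1.0: each eigenstate maximally entangled
print(concurrence(rho_mix))                       # ~0.0: the 50-50 mixture is separable
```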

Hi Geordie,

Let me try to answer your second paragraph first. Having a tensor product structure to the state vector in this case does mean that there is no polynomial speed-up lurking anywhere. Basically, if you have only bipartite entanglement, you essentially have an array of 4-dimensional qudits. Each qudit can be simulated in constant time, and the time grows linearly with the number of qudits in the system. This means an N-qudit (2N-qubit) system can be simulated in O(N) time if it is restricted to bipartite entanglement. Thus you may have some constant speed-up, but nothing that scales with the system size. A Grover-esque speed-up over classical computers is absolutely impossible under these circumstances. For a polynomial speed-up you need to move away from the tensor product structure.
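A sketch of why this is cheap to simulate (Python with numpy assumed; the function names are illustrative): each pair is tracked as its own 4-component state vector, so one step of evolution costs constant work per qudit and the total cost is linear in the number of qudits.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # random unitary from the QR decomposition of a Gaussian matrix,
    # with the phases of R's diagonal folded back into Q
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def simulate(n_pairs, n_steps):
    # Each entangled pair is just a 4-dimensional qudit with its own state
    # vector: no joint 2^(2*n_pairs)-dimensional state is ever formed,
    # so each step costs O(n_pairs) work.
    states = [np.array([1, 0, 0, 0], dtype=complex) for _ in range(n_pairs)]
    for _ in range(n_steps):
        for i in range(n_pairs):
            states[i] = random_unitary(4) @ states[i]
    return states

states = simulate(n_pairs=50, n_steps=10)  # a 100-qubit "computation"
print(all(abs(np.linalg.norm(s) - 1) < 1e-9 for s in states))  # True: norms preserved
```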

As for the first paragraph, you could certainly think up a measure which would fail to detect some specific type of entanglement, but that seems very artificial (and why would you use such a measure in this instance anyway?). The answer to the question about E'(A,B,C)=0 will largely depend on the size of the rest of the Hilbert space and the other types of entanglement present, since there could be, for example, 4-party entanglement, etc.

I only suggested concurrence as a tool to try, not the be-all and end-all of entanglement measures. Having a 50-50 mixture of eigenstates does not necessarily mean that the concurrence would be zero, as you can certainly construct states for which this is not the case. I thought it might be useful, as a positive result could be used to put a lower bound on the entanglement present in the system. A failure would not rule out tripartite entanglement, etc.

Joe: Yes I understand your point re. simulating systems that have tensor product structure. I guess where my confusion still remains is how multi-qubit entanglement measures are related to what this structure looks like.

My lack of understanding is related to this question: If E(A,B)>0 and E(B,C)>0 and E(A,C)>0 does this automatically imply that the 3-qubit system lives in the full 2^3 D Hilbert space or not? Note that I haven’t explicitly said anything about 3-qubit entanglement, just a set of statements about 2-qubit entanglement.

?

Hi Geordie,

If E(A,B)>0 and E(B,C)>0 and E(A,C)>0, this does not automatically imply that the full Hilbert space is used. If only bipartite entanglement exists, then certain states are forbidden.

One way to think of this is to count the number of free parameters. In order to fully describe an N-qubit state with no entanglement I need 2N parameters. To describe a state with only bipartite entanglement I need N(N-1) additional parameters. In general, for entanglement between m qubits there are 2(N!/(m!(N-m)!)) free parameters.

This means that there is an exponential number of parameters only if all types of entanglement are allowed. If we cap the entanglement at m-partite entanglement, then there are only O(N^m) free parameters. This means that you are essentially working in a polynomial-sized parameter space, and the system can then be efficiently simulated (in N, that is, since it is exponentially hard in m).

Restricting to m=2 means that you should have O(N^2) parameters. If, however, we impose some kind of locality constraint, such as that only neighbours in some network can be entangled, then this reduces to O(N) and gives you no possibility of a polynomial speed-up over classical computing.
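The counting above can be tried out directly (a Python sketch; `n_params` is an illustrative name, and the counting itself is the heuristic used here, not an exact parameter count): summing 2·C(N,m) over all levels up to a cap m_max shows the polynomial-versus-exponential gap.

```python
from math import comb

def n_params(N, m_max):
    # Total free parameters when entanglement is capped at m_max-partite,
    # using the counting above: 2*C(N, m) parameters per level m
    return sum(2 * comb(N, m) for m in range(1, m_max + 1))

N = 20
print(n_params(N, 2))   # 420 = 2*20 + 2*190: polynomial, O(N^2)
print(n_params(N, N))   # 2097150 = 2*(2^20 - 1): all entanglement allowed, exponential
```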

Hope this answers your question,

Joe