Why do some people say a quantum computer has 4 bits, and others say they have hundreds? Why is it more complicated than a regular computer? As a scientist in this field myself, I can’t help but feel bad about the confusion surrounding this topic. People are really just curious about the potential of this cool sounding technology, and although news reports continue to promise the appearance of these machines very soon, the exact definition of a quantum computer still seems to be elusive.

I’m going to try and explain one of the main problems in the field of quantum computing today through the use of a visual analogy. The following explanation gets to the heart of a real disagreement, even between experts in the field. There are heated debates in the scientific community over this problem (yes, scientists get into dogfights too). So be aware that you’re getting some juicy inside knowledge here about what really stumps even expert scientists! Of course we don’t often want to shout about the fact that we get stumped by things… but it is true :) I should mention that this explanation is a very conceptual and abstract way to look at quantum computing so there won’t be any implementations or specific algorithms discussed. It compares the ‘conventional’ approach of quantum computing with a new approach, called ‘Natural Quantum Computing’ (NQC). The quantum computers built by D-Wave are of the NQC type.


**Building blocks – in theory and in practice**

Let’s begin with some background. When engineers and scientists build a new technology, there are two factors that come into play. There’s the theoretical model of the technology or system, and then there’s the practical implementation of that technology or system. These two companions are like yin and yang: both are necessary for the whole to function, yet they are forever competing with one another, two sides of the same coin.

In my analogy, my yin and yang are going to be represented by circles:


Which circle is better? You might argue that the perfect yin circle is mathematically more beautiful, but if it is never found in nature, then is it really of any use to us? The more natural yang circle has its own bumpy beauty in that it really represents the true nature of what we see around us. You just don’t see perfect circles in nature.

Now, what does this have to do with computing? Well, when we want to build a system, we build it out of bumpy circles (we have to, as they are the only things we can find around us). But we understand it – and predict how it will behave mathematically – by approximating it with perfect circles, because those are easier for us to calculate with. So, let’s pretend that our circles are now building blocks:


In most cases, this works pretty well. Our mathematical models now predict the behaviour of the circles pretty much the same way as they behave in nature (if you’d like to think about this system ‘behaving’ dynamically, imagine removing one of the circles on the lower layer and trying to predict where they would all end up). Our model of the circles would probably get it roughly right.

When we try to understand classical digital circuits, the same thing happens. We build a ‘model’ of a classical digital computer, like the picture above, and that model behaves in a similar way to its real-world counterpart.


**Building a ‘conventional’ Quantum Computer using circles**

However (and here begins the juicy bit), quantum computers don’t work like that. At least, not the ones that people have been trying to build up to now. I’m going to use the phrase ‘conventional QC’ to describe those, because there IS a different way of doing things, which I’ll describe later. It turns out that in a conventional QC that slight difference between the yin and yang descriptions really does make ALL the difference.

What would a conventional quantum computer look like if we used the circle analogy to describe it? Well our mathematical yin model of it would probably look something like this:

What happens when you try and build this model using real, bumpy, yang circles? Well, you can try…

Oops! What happened? It didn’t work like we thought it would. The imperfections in the circles just made our delicate system come crashing down. Try again? The same thing happens. You can never get those circles to behave like their yin counterparts. The bumpiness just makes it impossible.

This is exactly what happens when we try to build a quantum computer. Our theories tell us that if we can just get 50, or 100, or maybe 500 of those darned circles to balance on top of one another then our technology will be able to fulfil our every dream! But we just can’t do it in real life – the thing keeps falling over! That is exactly what is meant by a system undergoing ‘decoherence’ if you are familiar with quantum computing parlance.

Well, something has to be done about this. We still want to be able to build a quantum computing system, right?

Yes – but there is an important problem to be addressed here. Who is at fault? Something didn’t add up, and blame must be assigned! Most people blame the yang engineers of the system for using bumpy circles: “Your circles are not perfect enough. Your system falls over. You’re not trying hard enough.”

So the diligent engineers try to find ways to make their circles more and more smooth. Little by little, they are able to balance one or two more on top of each other, for a little longer. But the circles can never be exactly perfect – so although that mathematically stable equilibrium of hundreds of balanced circles can easily be modelled in theory, it is questionable whether it can ever be built.

The interesting thing is that the question is almost never asked the other way round:

Are the yin circles perhaps too perfect?

In other words: Why do we create models of things that we then can’t ever practically make for real? All approximate models break down at some point, but the difference is usually small enough to still be able to inform our practical building of things. But in this case the model seems useless for anything we try to build that is more complex than a few building blocks!


**Natural Quantum Computation**

There is another way to build quantum computers that DO behave like their models. There is a type of quantum computing known as Natural Quantum Computing (NQC). This is a way of using quantum systems that we CAN build, in a way that is practical, and doesn’t go against what nature intended to happen to those circles.

A natural quantum computer system would be represented more like this:

Now the bumpiness doesn’t matter as much, because once again our model behaves a lot more like the real thing we are building. We have attacked the problem from both sides here – we have taken a reality check on our expectations of building with bumpy circles, and we have designed a mathematical model that respects that. We can still build up our quantum computer to be bigger and bigger using this method, and to do more and more, but it now captures the essence of how nature’s elements really do behave.

This should also answer the question of why different people have built quantum computers with different numbers of ‘qubits’ (quantum bits). Try to balance them using the conventional approach, and it depends how well you smooth out your bumpy circles. You might be able to get four, five, or even seven wobbling precariously on top of one another. People are working all over the world on trying to improve what nature has given us, to get those circles smoother and smoother.

But using Natural QC, you can easily build systems with hundreds of qubits. The system works differently, and it doesn’t fall over so much!


**So why isn’t everyone taking this Natural QC approach?**

Some people feel that it is better to just keep trying to make the circles ever smoother, because people are very familiar with the theoretical ‘yin model’ and have worked with this system mathematically for many years, developing algorithms to allow such a system (once built) to solve problems very quickly. But natural quantum computing is also very good at solving specific types of problems, like those in artificial intelligence and optimization. I myself feel that these NQC-suited problems are more interesting. Given that we also know that it is much easier to build these natural quantum computers, for me it’s really a no-brainer! But some people still like the idea of building the precariously-balancing conventional type of quantum computer. And it’s a matter of opinion whether you believe it is possible to make those circles smooth enough to get them to balance. But that’s half the fun of science – there are some questions that just haven’t been answered yet!

Even so, I think that we are sometimes a little too biased towards the yin theoretical description of a perfect system, hypnotized by the beauty and simplicity of our mathematical models. And we become disheartened and frustrated when real systems don’t behave like this. But this is not necessarily a problem with nature; it can also be thought of as a problem with our models! Sure, we can make nature behave more like our models by polishing those bumpy circles. But we can also make our models behave more like nature. We can meet halfway, and have the best of both worlds. And personally I’d rather build problem-solving things (and models of them!) with natural bumpiness than spend my entire life trying to polish circles to an unachievable smoothness.

I find this blog most interesting and this particular post helped me understand the science behind the machine. Thank you for taking the time to compose such an extensive post.

Is there something I can read to help make this distinction more concrete? I understand the gap between scientific models and reality, I’d just like to have some more detail about how the D-Wave folks have engineered around that gap.

Hi Jesse,

Thanks for your comments! We will get around to explaining some of the details soon, this was just an introductory post for people interested in a very high level conceptual explanation of the NQC approach. Is there something you’d like to know in particular? I am very interested in finding out what people would like to see written about!

Suz G

I’d like to know, specifically, what limitations/hardships were presented in implementing the scientific models of quantum computing, and how you were able to solve or circumvent those challenges. What hardware did you use, compared to alternative attempts to implement quantum computing, and what are the specific gains and drawbacks of your approach?

I’m fascinated by the D-Wave and would love to learn more about how it works. I’ve been interested in quantum computing since I took the QC course at Cal, in its inaugural semester.

Hi Jesse, here are five unresolved technical problems with the traditional approach to quantum computation. There are more but this is a good starting point. Note that the natural quantum computation approach we use overcomes all of these.

1. all energy states of a complex quantum system are treated as being equivalent. For a quantum circuit containing gates, the states those gates are acting on are all considered to be the same. To make this a bit more concrete, imagine a hydrogen atom, with all its allowed energy levels. In the traditional approach to QC, all those energy levels are treated as being the same, in the sense that when you act on the system you have to be able to create arbitrary superpositions of them — for example you have to be able to create a superposition of a highly excited state, where the electron is really far from the proton, and the ground state, where it’s not. These types of states, that require superposition of what are sometimes called pointer states (in the case of the hydrogen atom you can just think of these as energy eigenstates), are extremely fragile and decohere very quickly. Nature doesn’t like superpositions of pointer states. In Suz’ post, you can identify this “creation of unnatural superposition states” with trying to stack the perfect circles. This is often identified as the “decoherence problem” AKA “the reason QCs are hard to build”. It is certainly true that this is a huge problem for the traditional approach to QC, although it’s not the only one. Personally I don’t think the traditional approach will ever be implemented in hardware at scale, partly because of this issue, and partly because of the next few items on this list.

2. overhead of doing fault tolerant error correction. There is a theorem that states that if you can reduce the tendency of your physical hardware to want to revert to its pointer states (call these errors for short) past some point, you can actively correct those errors by using some protocol. In practice this requires significant hardware overhead, increasing the complexity of the hardware by a truly terrifying amount. For example, one of the error correction codes that makes sense for doing this requires 7^3 = 343 extremely good physical qubits to encode a single logical qubit. That’s bad enough, but the story is actually way worse than that. You also have to include a large amount of circuitry to connect those qubits, get information in and out of all of those parts, and perform conditional actions based on the outcomes of measurements *in the circuitry on the chip on timescales less than around 1 nanosecond*. That’s for ONE logical qubit. Let’s say you wanted to do something useful in the traditional model of QC with this approach — say 100 logical qubits. Then you’re talking on the order of 35,000 physical qubits with all the active on-chip control circuitry referred to above. Which leads me to the next point.

3. control circuitry. A traditional QC requires an enormous number of signals to run it. How are these signals passed to and from the processor? There are *zero* proposals in any hardware system that address this issue at the scale of the system in #2 (100 logical qubits). You can’t dedicate one input channel per signal (say, a wirebond for solid-state proposals) because there are way too many. How many? For most proposals the number scales like the number of physical qubits times some constant (like 3), and for some it scales like the square of the number of physical qubits. Even in the best case, our 100 logical qubit computer above requires on the order of 100,000 signals to do anything. It is not practical to wirebond 100,000 wires to a chip.

4. need for high frequency control signals. A traditional QC requires *high frequency* control signals, because these signals need to act on timescales shorter than the decoherence time, which is typically on the order of nanoseconds for solid state approaches. This is an extremely important and underappreciated point by many folks in QC. High frequency signals are much more difficult to implement than DC signals for a variety of reasons.

5. lack of industrial quality infrastructure to support projects. In order to build something of the scale described here in the traditional model you need advanced modern fabrication at the level of what is done in the semiconductor industry. This means that even if certain ideas seem appealing as a builder you don’t really have freedom to choose any system you like, if you really want to build something. In fact this is probably the starting point for any serious discussion of whether an approach makes practical sense — if you can’t build your design in a place that is used to dealing with VLSI stuff, as soon as you want to build something real you’ll be out of luck.
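The back-of-envelope arithmetic in points 2 and 3 can be sketched in a few lines of Python. This is purely illustrative, using only the numbers quoted above (343 physical qubits per logical qubit from the concatenated code, and an assumed constant of roughly 3 control signals per physical qubit):

```python
# Rough overhead estimate for a gate-model QC, using the figures
# from the comment above. The factor of 3 signals per physical
# qubit is the assumed best-case scaling constant from point 3.
PHYSICAL_PER_LOGICAL = 7 ** 3  # three levels of a distance-7 code: 343
SIGNALS_PER_PHYSICAL = 3       # assumed linear scaling constant

def overhead(logical_qubits: int) -> tuple[int, int]:
    """Return (physical qubits, control signals) for a target machine."""
    physical = logical_qubits * PHYSICAL_PER_LOGICAL
    signals = physical * SIGNALS_PER_PHYSICAL
    return physical, signals

physical, signals = overhead(100)
print(physical, signals)  # -> 34300 102900
```

So 100 logical qubits already implies tens of thousands of physical qubits and on the order of 100,000 control signals, matching the figures in points 2 and 3.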

Isn’t the quantum threshold theorem, and fault tolerance in general, a major area of theoretical research in computing? Is NQC supposed to be outside of this context? And isn’t it easier to stack imperfect circles than perfect circles?

Hi R.B.,

Yes, this is correct: there is no fault-tolerance threshold theorem for NQC. But I would also point out that no-one has yet demonstrated experimentally that it is possible to implement fault tolerance in any QC system, let alone a scalable one, so I’m personally not yet convinced that it is possible to do error correction for more than a handful of qubits.

The reason that a fault tolerance threshold is needed in gate model QC is that the computer must be designed to continually fight against the natural physics of the system. In NQC, the computer is designed to work with the natural physics of the system, so error correction isn’t such a big deal.

I would however absolutely love to see experimental evidence of fault tolerance being successfully implemented in gate model systems, as it would be a big step forward in QC!

I know that this is totally a can of worms, but I personally don’t believe we can say that we have a fault tolerance threshold for QC until it has been experimentally verified. That’s my experimentalist point of view, and I know many would argue with me over that :)

Ok, so D-Wave purports to have made a QC by taking a more “practical” approach. Fair enough. But at the end of the day, results are important and, on a “practical level”, really define the level of success.

How does the NQC compare with a conventional computer on a computing task? Some comparisons would greatly shed light on the current state of QC. Granted, it may merely be an early product which, in theory, should improve over time. But even in an infant state, one would expect some basic computing ability that blows out the conventional computer by an order of magnitude.

Results please?

The expectation that a real quantum processor should, “even in an infant state”, “blow out” 60 years of progress in conventional hardware and algorithms is misplaced. The advantages arising from using quantum effects for computation are subtle, and it will take some time to figure out how best to use them.

You can check out https://dwave.files.wordpress.com/2010/12/weightedmaxsat_v2.pdf, in particular the summarized results in Table 2, for performance of structured classification tasks run in hardware. Generally what we see are improvements in classification accuracy of 1-10% over a series of independent SVMs for structured classification.

Remember that at first the Wright Flyer only flew 120ft and lasted for only 12 seconds. Compare that with traveling by horse upon which a rider could cover miles and last for hours. Was motorized flight only impressive when it could outdistance and outlast a horse by an order of magnitude?

I think that’s a very good analogy to where we’re at right now! Thanks for that.

Yes, I completely understand your analogy. Everything new and cool, at the point of proof of concept, is exciting, albeit it may still be in a crude form.

Granted, the first flight was historical and impressive. But no one would have paid the equivalent of $10 million (per 1903 dollars) for a flight that lasted even one mile back then.

Lockheed paid $10 million for the DWave computer. Granted it is a first generation/iteration product, expected to improve over time. But for $10 million, what does/can it do today?

To keep this post brief:

1) Dwave claims to have developed a QC in a “practical and feasible manner”.

2) Dwave is trying to convince the scientific community (SC) of the merits of their QC. And with as much enthusiasm, the SC is examining Dwave’s evidence.

3) But no one seems to be asking, at this point, for $10 million, what can Dwave’s QC do? Is it more of a science project to let Lockheed be more prepared (than its competitors) when the next generation QC comes out? Granted the work by Lockheed may well be classified and Dwave cannot speak on the details. But some high level illumination on the practical abilities of the QC would be interesting.

4) Curious, what happened to the image recognition project DWave was conducting with Google?

Incidentally, for anyone who thinks it’s interesting: The first paying airplane passenger seems to have been Abram Pheil, in 1914, who paid $400 — approximately $10,000 in today’s dollars — for the privilege. Apparently the $400 figure was the result of brisk bidding among the prospective passengers. Pheil’s 20-mile flight lasted 23 minutes.

In 1909, the federal government bought its first airplane for $30,000 — approximately $770,000 in today’s dollars.

The mathematician also tends to idealize any situation with which he is confronted. His gases are “ideal”, his conductors “perfect”, his surfaces “smooth”. He calls this “getting down to the essentials”. The engineer is likely to dub it “ignoring the facts”.

– Thornton C. Fry, Founding Director of the Mathematics Department at Bell Labs

seems like your bump-resistant analogy has an exponential cost factor…

Hey hackers,

I love your blog, thanks for spending so much time explaining this stuff in a way a simple programmer can understand!

I saw that HackTheMultiverse.com wasn’t hijacked yet, so I bought it and pointed it at your dwave.wordpress.com blog. It’s just a redirect for now, but wordpress.com lets you point the domain name directly to your site (they charge like a dollar a month for it; if you migrate to google blogger, it’s free). So if you want it, I’ll assign the domain ownership to you. I live too far away to take you guys out for lunch, so I’ll give you a domain!

— Robert

That is totally awesome, thanks!!!!

Do areas of “perfect circles” and “non-perfect ones” match? :)

Thanks for this post!


Pingback: What’s in a Name | Observations on Quantum Computing & Physics