How do you measure the speed of an adiabatic quantum computer?

We’re going to have something new today at rose.blog — our first-ever guest post, by Dr. Suzanne Gildert, who is the author of the wonderful Physics and Cake blog. This post is a high-level overview of the paper Investigating the Performance of an Adiabatic Quantum Optimization Processor. Suz is now an official contributor on rose.blog, so expect more insightful commentary here than usual from now on!

How do you measure the speed of an adiabatic quantum computer?

This is a tricky question to answer, but this paper gives it a shot. The paper focuses on a special kind of quantum algorithm, known as Adiabatic Quantum Optimization (AQO), which runs on a microprocessor built by D-Wave, and I’m going to try to give a high-level overview of the main messages in this paper. The microprocessor itself has been built and tested using a superconducting electronics technology, but this paper concentrates on SIMULATING what it is expected to do. I’ll explain why, and what the results tell us about the promise of these processors.

Introduction

AQO processors behave more like analog computers than digital ones, so it is particularly tricky to measure their ‘clock speed’ in order to compare it to other processors. Of course, nowadays it is difficult to measure the speed of even the processors in our home PCs! That’s because they have more and more ‘cores’, and are also designed with a range of different tricks to make them speedier, which may have nothing to do with the number that you read in ‘GHz’. For example, comparing an NVIDIA Tesla processor, with 240 cores each operating at about 1.5 GHz, to a fast Intel processor (say the six-core Core i7-980X EE) is tricky, as they are good at different tasks. Bearing this in mind will help us understand what it is that AQO processors actually do faster than regular processors, and why it is difficult to put numbers on this. Processors with different numbers of cores and different architectures are better at some tasks than others. Graphics processors are designed to be exceedingly good at 3D rendering calculations. AQO processors have a very unusual architecture, and are exceedingly good at their specialty task, which is solving NP-Hard optimization problems. What, you say?! OK, I’ll take a minor diversion to explain.

What is an NP-Hard optimization problem?

Even though computers can solve many problems much faster than humans, there are certain problems that even the fastest supercomputers have trouble with. A well known example is the traveling salesman problem (TSP). This goes as follows: a salesman is trying to find the best route between the cities where he wants to sell Quantum Computers. He only needs to visit each city once, as it is unlikely that people will change their minds about whether or not to buy one within his time-frame. He wants to visit as many cities as possible, as quickly as possible, and therefore in the shortest distance possible. (In addition to maximising his chances of selling quantum computers, he is also of course striving to minimise his carbon footprint.) So he is trying to calculate the best journey route to take. In the first instance, he considers just his nearest 4 cities. There are 24 different routes he could take to visit them all. It is very easy to work out the corresponding distance for each potential route, and then choose the best one. But when he decides to take the business worldwide, he suddenly hits a problem! The number of cities is now in the hundreds, and even though he uses the fastest supercomputer he can find, he still can’t be assured of achieving the minimum distance traveled:
[Image: the salesman’s route problem once the number of cities grows into the hundreds]
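
To make the combinatorial explosion concrete, here is a minimal brute-force sketch of the 4-city case (the city names and coordinates are made up purely for illustration, and this is obviously not code from the paper):

```python
# A tiny brute-force solver for the salesman's 4-city case.
# City names and coordinates are made up purely for illustration.
from itertools import permutations
from math import dist

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6)}

def route_length(route):
    """Total distance travelled when visiting the cities in this order."""
    return sum(dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

# 4 cities -> 4! = 24 possible orderings, so checking every one is instant.
best = min(permutations(cities), key=route_length)
print(best, round(route_length(best), 2))

# The trouble is the factorial growth: 100 cities would give
# 100! (roughly 9e157) orderings, which no supercomputer can enumerate.
```
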
There are many, many problems like this in the real world. They are ubiquitous. When you try to optimize some aspect of a small number of items, your method works perfectly. But add just a few more items and it suddenly becomes intractable, because of the exponentially growing number of possible combinations. A nice example of this is the Eternity II puzzle, where the number of possible ways of arranging the pieces is greater than the number of atoms in the universe. Many people purchase this puzzle in the hope of ‘getting lucky’ and winning the $2M prize, unaware of the NP-Hard Behemoth that faces them.

Luckily for our salesman, he has an AQO processor, which he can use to help solve the problem.

Physics inspired processing

To explain how an AQO processor solves problems such as this, I’m going to take a second minor diversion into Physics. I’ll introduce an analogy, which I will call The Water-Machine. Imagine having a volume of water and a series of different pipes that you can fit together in different ways, like this:
[Image: the Water-Machine – a volume of water and a network of pipes, fitted together with adjustable restrictions]
Many of the pipes have restrictions, which all open and close in different ways as time advances forwards. An example is shown in the steps 1 -> 2 -> 3 -> 4.

Now imagine that you had to work out the flow rate from the end of the final pipe as you ran the machine. You could calculate the whole thing using the laws of fluid dynamics, using your desktop computer, but it would be quite a difficult task. You’d have to program in all the changing restrictions and how these would affect the flow rate at each time step… Aha! you think – why don’t I just build the Water-Machine instead? It would be quite straightforward! So you build the Water-Machine, with a cunning series of cogs and levers to open and close the restrictions in whatever way you want as you turn a single crank. You then very carefully measure the flow rate as you turn the crank. The answer pops out in a few seconds, as opposed to your simulation, which was taking a very long time to run on your desktop machine! You’ve just built what is known as an analog computer. This specialized machine is exceedingly good at calculating flow rates for a particular series of time-dependent restrictions. However, it wouldn’t be very good at multiplying numbers together!

D-Wave’s AQO processors are effectively Water-Machines for these hard traveling-salesman-type problems. You dial in the restrictions (also known as variables), which correspond to the city distances, turn the crank, and the solution pops out. AQO processors aren’t quite as specialized as the Water-Machine, because, as I mentioned earlier, there are MANY MANY problems which turn out to be NP-Hard optimization types, so there are lots of things which can be adapted for AQO processors to solve, even if such processors aren’t very good at, say, multiplying numbers.

The magic arises because, like the Water-Machine, the AQO processor naturally tries to solve a physics problem at its very core (no pun intended). The optimization problem is how to arrange a series of spins, known as an Ising problem. You can read a bit about Ising problems here and here. It turns out that a very natural thing for physical systems to do is to try to minimise their energy (think of objects falling, water finding the lowest level, etc.). If we can somehow capture this ability of nature to find the lowest energy, and formulate our problem so that the lowest energy corresponds to the minimum distance between cities, then the system will lower its energy towards the answer we want!
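
To make the idea of an Ising problem a little more concrete, here is a minimal sketch with entirely made-up numbers; the point is simply that the ‘answer’ is whichever arrangement of spins gives the lowest energy (a real application would encode something like the city distances into the h and J values):

```python
# A minimal, made-up Ising problem: three spins, each either +1 or -1.
# The "best answer" is whichever spin arrangement has the lowest energy
#   E(s) = sum_i h_i * s_i  +  sum_{i<j} J_ij * s_i * s_j
from itertools import product

h = {0: 0.5, 1: -1.0, 2: 0.2}                  # local fields (hypothetical)
J = {(0, 1): 1.0, (1, 2): -0.8, (0, 2): 0.3}   # couplings (hypothetical)

def energy(spins):
    field_term = sum(h[i] * s for i, s in enumerate(spins))
    coupling_term = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return field_term + coupling_term

# Brute force over all 2^3 = 8 arrangements; real problems have far too many
# spins for this, which is exactly the job the AQO processor takes over.
ground_state = min(product([-1, 1], repeat=3), key=energy)
print(ground_state, energy(ground_state))
```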

This is exactly what D-Wave’s AQO processors do. And where does the ‘quantum’ come in? Well, it is widely believed that if the little spins behave quantum mechanically, they are better at finding the low energy state; that’s basically all there is to the quantum part! But I’ll just spend a little more time explaining some of the details of how the ‘crank’ is turned (also known as the ‘algorithm’) on these processors.

A more detailed look at the AQO algorithm

The paper is mainly concerned with something known as the ‘minimum gap’. What is this? Well, it is the thing that limits the speed of your computation. It is the ‘bottleneck’ in the computation, and it is what makes these problems very difficult to solve. As we turn the crank on our computation, there are points where the system can become confused and sometimes gets ‘stuck’ in the wrong answer. But quantum systems (like AQO processors) avoid this problem, as they naturally keep a ‘gap’ between the right answer and the wrong answer (the lowest energy state and a slightly higher energy state) at all times during the computation. The gap is extremely important – if you built the same system of little spins without quantum mechanics, then as you turn the crank the gap goes to zero at some points, and it becomes highly likely that you’ll end up in the wrong answer. The computation has a chance of ‘following’ the wrong path!

This is shown in the diagram below. The first case is a system with quantum mechanics, and the second is one without quantum mechanics playing a role (classical):
[Diagram: energy levels during the computation – the quantum system keeps a gap open between the lowest and next-lowest states, while the classical system’s gap closes]
So it is important to protect this precious quantum gap and understand where it comes from, and how it behaves as we run different computations on the processor.

However, the behavior of the gap as you increase the number of variables involved in the computation (like the number of cities in the TSP or the number of restrictions in the Water-Machine) is not easy to understand. It depends on the type of restrictions, how many there are, etc. What we DO know is that it is this ‘gap’ that determines the speed at which your computer can run (the smaller the gap, the more slowly you have to turn the crank), so the main focus of the paper is on calculating how this all-important gap depends on the number of variables in the problem. The behaviour of the gap, and therefore the AQO processor, is SIMULATED using another computing resource, which I’ll return to in a moment.
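
To give a flavour of what simulating the gap involves, here is a toy sketch for just a few spins with made-up couplings (this is only an illustration of the idea, nothing like the method or the scale used by the paper): write down the Hamiltonian that interpolates between a simple starting system and the Ising problem, diagonalize it at each point of the crank-turn, and record the smallest spacing between the two lowest energy levels.

```python
# Toy calculation of the "minimum gap" for a 3-spin problem with random,
# made-up fields and couplings. The crank-turn is the parameter s going
# from 0 to 1 in H(s) = (1 - s) * H_driver + s * H_problem, and the gap is
# the difference between the two lowest energy levels of H(s).
import numpy as np

n = 3
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
identity = np.eye(2)

def single_spin_op(op, i):
    """Tensor-product operator acting with `op` on spin i, identity elsewhere."""
    result = np.array([[1.0]])
    for j in range(n):
        result = np.kron(result, op if j == i else identity)
    return result

# Driver (starting) Hamiltonian: a transverse field on every spin.
H_driver = -sum(single_spin_op(sigma_x, i) for i in range(n))

# Problem Hamiltonian: diagonal matrix of Ising energies for all 2^n
# spin configurations, built from random fields h and couplings J.
rng = np.random.default_rng(0)
h = rng.normal(size=n)
J = np.triu(rng.normal(size=(n, n)), 1)
spins = np.array([[1 - 2 * ((k >> i) & 1) for i in range(n)] for k in range(2 ** n)])
energies = spins @ h + np.einsum("ki,ij,kj->k", spins, J, spins)
H_problem = np.diag(energies)

# Sweep s from 0 to 1 and record the gap between the two lowest levels.
gaps = []
for s in np.linspace(0.0, 1.0, 101):
    levels = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
    gaps.append(levels[1] - levels[0])

print("minimum gap along the sweep:", min(gaps))
# The smaller this number, the more slowly you must turn the crank.
```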

Why is this ‘gap’ so important if a regular system without a gap would still get a pretty good answer – it still ends up with a low energy, right?

True, but the gap really can make all the difference. Imagine that, as the number of variables increased, you could get a 10% better answer by running the problem on a quantum processor. It doesn’t sound like much, but for large companies who are interested in optimization, this 10% can translate into millions of dollars’ worth of savings. Think of transportation companies trying to schedule parcel deliveries, or hospitals trying to schedule and distribute resources after a natural disaster. That’s not just millions of dollars, it could be thousands of lives too. And as the problem size gets bigger, this saving might become even larger.

Why can’t we just measure the gap by running the processors? You just said there’s no point in simulating the Water-Machine if you can just build one!

We want to know what we expect to happen to the ‘gap’ before we try to build big processors to run the AQO algorithm. There are 2 reasons for this. First, building real processors (unlike Water-Machines) is an expensive task, and you want to make sure that the results are going to be interesting before you go ahead and do it! Second, there is a slight chicken-and-egg situation. We need to know the size of the gap to know at what speed to run the AQO processors, so we cannot measure the gap by just running them, as we would have no idea how fast to turn the crank to get the algorithm to work properly!

Using large classical computers to simulate a quantum algorithm

One interesting result of this paper is that even though we have established that it is a good idea to simulate what the gap will do before running the processors, it turns out that this is not as easy as it sounds. Just calculating how this gap is expected to change with the number of variables is a HUGE computational task! It is like trying to simulate the Water-Machine on your computer. The AQO processor, just like the Water-Machine, handles this gap as part of its natural operation. But the techniques that engineers, physicists and computer scientists use for simulating such systems on their regular general-purpose desktop machines are, at present, remarkably inefficient. The amount of computing power and time needed to simulate the AQO is incredible, and yet it is still well worth running these enormous simulations – ‘a stitch in time saves nine’ – to help our understanding of what happens when the real processors run the algorithm (which takes just a few seconds!).

The project to compute ‘the gap’ undertaken in this paper uses a distributed computing platform known as AQUA@Home, which is built using the BOINC framework. This is a framework where people can dedicate spare CPU cycles from their local machines by running a piece of software in the background (it also has a funky screensaver as a bonus). The idea is very similar to that of SETI@home (the Search for ExtraTerrestrial Intelligence project, also run on BOINC), where users donate spare clock cycles to help analyse signals from deep space. A surprising amount of computing power can be harnessed via this method. AQUA gathered approximately 8000 cores for around 2 years thanks to a worldwide network of volunteers, resulting in a staggering 10^20 floating point operations being performed. The results for the expected behavior of the gap are displayed in Figure 3 of the paper. And indeed, the gap seems to still be going strong even when the number of variables is increased to 128! So if we now turn the crank on our real processors at the rate decided by that gap, we should get the right answer….
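
Just as a sanity check on those numbers (my own back-of-envelope arithmetic, not a figure from the paper), the quoted total implies a very believable sustained rate per volunteer core:

```python
# Back-of-envelope check: does ~8000 cores running for ~2 years plausibly
# add up to ~10^20 floating point operations? (My own arithmetic, not from
# the paper.)
cores = 8_000
seconds = 2 * 365 * 24 * 3600      # roughly two years
total_flops = 1e20                 # total quoted above

per_core_rate = total_flops / (cores * seconds)
print(f"implied sustained rate: {per_core_rate / 1e9:.2f} GFLOP/s per core")
# Comes out at roughly 0.2 GFLOP/s per core, a believable sustained rate
# for volunteer desktop CPUs, so the quoted total is self-consistent.
```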

Benchmarking

So, back to our original question. How do you measure the speed of a quantum computer? In fact, how do you measure the speed of any processor where a direct comparison of clock speed no longer applies? Well, as we have seen, it all comes down to how people measure the performance of these processors on different specialty tasks. The common approach is to try out the processors on different tasks, effectively ‘racing’ them against one another in a sort of ‘Microchip Olympic Games’. The tasks that form these games are known as standard benchmark problems. Some processors are better at certain tasks than others, and it is then up to the buyer to decide what characteristics they want from their processor by looking at the results of these benchmarks.

Since AQO is a specialty optimization technique, we obviously decided to enter our results into the ‘optimization’ event, to see how they compared to other approaches. Because we know that the speed of our computer is determined by this ‘gap’, we can compare how fast it would be against other algorithms. This is the second main result of the paper. Figure 4 in the paper (reproduced below) shows how the AQO approach compares to similar problem-solving techniques already used to solve these types of hard problems in the real world. The AQO approach seems to be approximately 10,000 times faster for these optimization problems than the best approaches currently known. So now we have calculated how fast we are allowed to turn the crank on our real processors, and it turns out that it is significantly faster than what is currently being used. As a result, we know that it is definitely worth building such processors, so let’s go ahead and do it!

Having invested many months’ worth of clock cycles from computer clusters around the world (not to mention several years’ worth of scientists’ research time!), we can now turn that all-important crank for just a few seconds and, through the results of the simulations performed, be confident in the answer that this fundamentally new type of computer gives us. I personally think that’s pretty neat.

16 thoughts on “How do you measure the speed of an adiabatic quantum computer?”

  1. Very nice explanation. This is so interesting, and I’m so curious, that I have three questions.

    1) Are you not saying NP-hard when you mean NP-complete? Definition of NP-complete problem/language: i) it is in NP (nondeterministic polynomial time) and ii) any NP problem can be reduced to it in polynomial time. Definition of NP-hard: it fulfills ii), but we cannot prove i). TSP is NP-complete. Of course, it can also be considered NP-hard, but I think this is not simply terminology, because I read somewhere here that 3SAT – which is NP-complete – can be mapped to QUBO and solved using AQC, and that’s cool because it means that AQC can be used to solve any NP problem. But what about problems that we don’t know are in NP? Problems in PS (polynomial space) like QBF? Can they be mapped to a form solvable by AQC? If not, better to say NP-complete instead of NP-hard, I think. And what about undecidable problems like the halting problem? Of course, they cannot be solved by AQC, but note that the halting problem is NP-hard (if you think that this is not an optimization problem, think instead of the following undecidable optimization problem: given a size, calculate which program – with no input – of at most that length gives the biggest result; I don’t know if it is NP-hard, but I think problems of this kind could exist that are both NP-hard and undecidable). So, isn’t it better to say NP-complete to avoid confusion when dealing with what AQC can be good for?

    2) What is next for AQUA? A) Continue simulating the current 128-qubit chip and refining data: is that not enough? B) Simulate the next projected chip: maybe that’s too many qubits. C) Given technology limitations, simulate different architectures for a scalable chip: seems worthwhile, but maybe you at D-Wave know it makes no sense, maybe you are pretty sure you already have the best you can get. D) Given an architecture, simulate different chip sizes to figure out how scaling affects efficiency: shouldn’t this be our primary goal at AQUA?

    3) Could quantum computers of a given size help simulate a larger one?

  2. Hi Xuru,

    1) NP only includes decision problems, so the output is just “Yes” or “No”, and a further quirk is that the “nondeterministic polynomial time” only applies if the answer is “Yes” (Co-NP only includes if the answer is “No”). TSP is not usually defined as a decision problem, so it’s not in NP, and thus not NP-complete. I think it is also not FNP-complete (again depending on how you define TSP), since it needs the shortest route, not just one below length L, but it should be (FP^NP)-complete, which is included in NP-hard. QUBO is probably also (FP^NP)-complete. That said, I hate silly semantic details like that, which I think is why the vast majority of people seem to just say “NP-complete” when they really mean “NP-hard” or “(FP^NP)-complete”, haha. :)

    2) We’re currently simulating a new quantum algorithm I’ll be presenting at QAMF in 3 weeks. It would be really easy to run on a chip with the right knobs to turn (which may be feasible in a couple years), but it’s a royal pain in the butt to simulate, so the process of learning what parameters work decently is quite the challenge. I’m hoping that someone can come up with something better, too, since it’s pretty much the simplest non-brute-force algorithm. :) Also, I’ve stepped back from the 128-qubit simulations for now, since that sort of scale is just an extra layer of pain standing in the way of getting decent data; I don’t trust much to be reliable above 128 qubits.

    3) Hmm, I hadn’t thought about it in terms of these types of simulations until you mentioned it just now. It’s worth investigating.

  3. Pingback: How do you measure the speed of an Adiabatic Quantum Computer? « Physics and cake

  4. Hi Dickson,

    1) I don’t agree with much of what you say on this point. I will refer to the book “Introduction to Automata Theory, Languages, and Computation” by Hopcroft, Motwani and Ullman (chapters 10 and 11 of the second edition). a) See the explanation on page 450: “for all common problems that are NP-complete, their yes/no versions and optimization versions are equivalent in complexity, at least to within a polynomial”. b) The TSP “decision version” is in NP (you can make a guess and verify it in polynomial time, so for a nondeterministic automaton it’s a piece of cake); it’s also NP-complete (see proof on page 461), and by means of a), the TSP “optimization version” is also NP-complete, meaning that a nondeterministic automaton can perform TSP optimization – and through it any NP decision problem, and again because of a) its counterpart optimization – in polynomial time. In today’s optimization pragmatics the distinction between NP-complete and NP-hard can be of little use, for in either case you mean something like exponential time. But when stating that AQC could boost efficiency for NP-hard problems (regardless of whether that means polynomial time or just a smaller exponential), the question of what you mean by NP-hard gains significance. Determining which classes of problems will be affected by this boost is of great importance. What can be reduced to something that fits and runs on an AQC chip? In which class does QUBO lie? 3SAT is NP-complete…

    physicsandcake,

    I think you mean “Adiabatic quantum computing is the greatest idea in the history of humanity”. I agree there is a lot of room between polynomial time and exponential time with a given exponent. Here I find the same distinction, “NP-hard combinatorial optimization and NP-complete constraint satisfaction problems”; due to the arguments I give about 1), I think “NP-hard combinatorial optimization” is misleading, and something like “NP-complete optimization” should be used. Also interesting is the point about thermal problems and Boltzmann-like output. I think that there is also a lot of room between the optimal solution and the best solution that classical computers can find in a given time.

  5. All I can suggest is please read the definitions of the relevant complexity classes:

    http://qwiki.stanford.edu/wiki/Complexity_Zoo:N#npc
    http://qwiki.stanford.edu/wiki/Complexity_Zoo:N#np
    http://qwiki.stanford.edu/wiki/Complexity_Zoo:N#npo

    … and that “equivalent in complexity” likely does not intend to imply “in the same complete complexity class”. If they did mean “in the same complete complexity class”, it’d be false by the above definitions, because the optimization version of TSP is, by the strict definition of NP, not in NP. Therefore, by the definition of NP-complete, it would be incorrect to say that the optimization version of TSP is in NP-complete. If you disagree with the above definitions, I welcome you to contact Scott Aaronson (the maintainer of Complexity Zoo) and debate with him, since this is the last I’ve got to say on the subject.

    Cheers.

  6. Xuru,

    That previous comment was a pingback by the way, so I didn’t mean anything as I didn’t write it :)

    The purpose of the post was really as an introduction to the subject. I suspect the majority of the intended audience would benefit very little from an in-depth discussion of NP-Complete versus NP-Hard problems.

    Not that things should ever be technically incorrect for the sake of simplicity, of course; I’m just saying that it was not something that was meant to be in the scope of this article, and for all practical purposes it makes very little difference to the ‘take away message’ at this level.

    Suz

  7. Thanks. Disagreeing with a definition is absurd. I see why the optimization version is not in NP: although a nondeterministic Turing machine can be built that calculates the optimal route (TSP) in polynomial time, strictly speaking it is not in NP because the answer it gives is not a yes or no, even though the complexity is the same. Thinking about all this has given me a better idea of what AQC can be applied to.

  8. Won’t the computation time depend strongly on temperature? Also, as you add more variables, don’t you expect the minimum gap to decrease?

    • Hi Mike,

      The speed of the computation doesn’t really depend that much on temperature. If temperature is lower than the minimum gap, then it matters very little. If the temperature scale is larger than the minimum gap, then as the computation passes through the gap, the higher energy levels get populated according to a Boltzmann distribution.

      The time taken to thermalise at this point is much shorter than the ‘adiabatic’ transit time (i.e. the timescale of the computation), so you can’t really stop it from happening by going faster (at least not without invoking a whole load of other problems).

      This means that if T > minimum gap, whatever speed you run at, your final answer has a Boltzmann-like probability of being found in the next best answer, the next-next best, etc. If you run the computation enough times you will be able to get statistics about which one of these is likely to be the ground state. In practice, the computation is repeated lots of times anyway, to check that the ground state is indeed the most probable one. So temperature affects the error, or confidence level, of the answer rather than the time taken to return that answer. You may have to run the computation more times at a higher temperature to get better statistics and lower the error.

      The minimum gap does decrease with the number of variables (on average), but how it does so depends very much on the problem structure, and it is hard to know in advance which problems are going to be difficult to solve. This is one of the reasons that the AQUA simulation was performed: to get an overall flavour of how the gap behaves for many different problems.

  9. Dr. Suzanne Gildert, once again your explanation is one of the clearest and most understandable I have seen. It shows you really understand how this type of computer works, and are able to explain it in a fashion that lets someone with little technical background in quantum computing get an intuitive feel for how D-Wave’s AQC works. Thanks :)

  10. Dr Gildert, in this article you start out by describing NP-hard optimization problems, and then you go on to discuss quantum adiabatic computation. A naive reader might think these topics were related in some way, for example that you were claiming that quantum adiabatic computers can solve NP-hard optimization problems faster than classical computers. It might be worth putting a note in there to stop a less than careful reader from getting this mistaken impression.

  11. Hi Gareth,

    Adiabatic quantum optimization is a class of algorithms for solving NP-hard optimization problems, so they are definitely related :).

    Your statement that AQO-based machines can’t solve optimization problems faster (by which I take it to mean with a scaling advantage, as the processors we’re building are even now, while still in their infancy, competitive in raw speed with CMOS-based chips) than classical computers is incorrect. AQO is a large class of algorithms with a bunch of parameters you need to define before you can make any statements about performance. AQO is like simulated annealing, genetic algorithms, ant colony optimization, etc. – a class of related approaches that are almost always run as heuristics.

    An example where there is a provable scaling advantage for AQO over any classical algorithm is in unstructured search, where you can explicitly demonstrate a square root scaling advantage.

    To learn more about these fascinating physics-based approaches, a good introduction to the basic idea is Ed Farhi and co’s early Science article here http://www.sciencemag.org/cgi/content/short/292/5516/472, and data from running it on one of our chips (plus a detailed explanation of what a real chip is and does) is here http://prb.aps.org/abstract/PRB/v82/i2/e024511.
