Six interesting findings from recent benchmarking results

Around May 15th of 2013, Google acquired a system built around a 509-qubit Vesuvius 6 (V6) chip. Since it went online, they have been running it 24/7 at 100% usage. Most of this time has been committed to benchmarking.

Some of these results have been published, and there has been some discussion of what it all means. Here I’d like to provide my own view of where I think we are, and what these results show.

Interesting finding #1: V6 is the first superconducting processor competitive with state of the art semiconducting processors.

Processors made out of superconductors have very interesting properties. The two that have historically driven interest are that they can be extremely fast, and that they can operate without requiring much power. Interestingly, they can even be run close to thermodynamic reversibility, with essentially zero heat generation. There was a serious attempt to make superconducting processors work at IBM from 1969 to 1983; you can read a great first-hand account of it here. Unfortunately the technology was not mature enough, semiconducting approaches were immensely profitable at the time, and the effort failed. There has since been much talk about doing something similar with our new knowledge, but no one has followed through.

It is difficult to pin down the total investment that has gone into superconducting processor R&D. As best I can count, it is about $4B. We account for about 3% of that number; IBM about 15%; and government sponsorship of basic research, primarily in Japan, the US and Europe, the remainder. Depending on your perspective this might sound like a lot, or like a very small number. For example, a single state-of-the-art TSMC semiconductor fabrication facility costs about six times this (~$25B) to build. The total investment in semiconductor fabrication facilities and equipment since the early days of Fairchild Semiconductor is now approaching $1T (yes, T as in Trillion). That doesn’t include any of the investment in the actual processors, just the cost of building the fabrication facilities.

The results recently published in the Ronnow et al. paper show that V6 is competitive with what is arguably the most highly optimized semiconductor-based solution possible today, even on a problem type that in hindsight was a bad choice. A fact that has not gotten as much coverage as it probably should is that V6 beats this competitor in both wallclock time and scaling for certain problem types. That is a truly astonishing achievement. Matthias Troyer and his team achieved an incredible level of optimization with their simulated annealing code, reaching about 200 spin updates per nanosecond using a GPU-based approach. The ‘out of the box’, unoptimized V6 system beats this approach for some problem types, and even for problem types where it doesn’t do so well (like the ones described in the Ronnow paper) it holds its own, and even wins in some cases.
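
For readers who want a feel for what the classical competitor is actually computing, here is a minimal, deliberately unoptimized Python sketch of single-spin-flip simulated annealing on an Ising problem. The optimized codes referenced above implement essentially this inner loop, but vectorized and tuned to the hardware; the function name, the linear annealing schedule, and the tiny example instance are illustrative choices of mine, not anyone’s production code.

    import math
    import random

    def simulated_annealing(h, J, sweeps=1000, beta_start=0.1, beta_end=3.0):
        # h: {spin: local field} (every spin must appear as a key; use 0.0 for no field)
        # J: {(i, j): coupling}; energy is sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j
        spins = {i: random.choice([-1, 1]) for i in h}
        neighbours = {i: [] for i in h}
        for (i, j), Jij in J.items():
            neighbours[i].append((j, Jij))
            neighbours[j].append((i, Jij))

        def energy(s):
            return (sum(h[i] * s[i] for i in s)
                    + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

        best, best_e = dict(spins), energy(spins)
        for sweep in range(sweeps):
            # Linear ramp of the inverse temperature (an arbitrary illustrative schedule).
            beta = beta_start + (beta_end - beta_start) * sweep / max(sweeps - 1, 1)
            for i in spins:
                # Energy change if spin i flips; only its local neighbourhood matters.
                delta = -2 * spins[i] * (h[i] + sum(Jij * spins[j] for j, Jij in neighbours[i]))
                if delta <= 0 or random.random() < math.exp(-beta * delta):
                    spins[i] = -spins[i]
            e = energy(spins)
            if e < best_e:
                best, best_e = dict(spins), e
        return best, best_e

    # A tiny frustrated triangle, just to show the calling convention.
    h = {0: 0.0, 1: 0.0, 2: 0.0}
    J = {(0, 1): -1.0, (1, 2): 1.0, (0, 2): 1.0}
    print(simulated_annealing(h, J, sweeps=200))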

This is a remarkable historic achievement. It’s the first delivery on the promise of superconducting processors.

Interesting finding #2: V6 is the first computing system using ideas from quantum information science competitive with the best classical computing systems.

Much like in the case of superconducting processors, the field of quantum computing has promised to provide new ways of doing things that are superior to the ways things are now. And much like superconducting processors, the actual delivery on that promise has been virtually non-existent.

The results of the recent studies show that V6 is the first computing system built on ideas from quantum information science that is competitive with the best known classical algorithms running on the fastest modern processors available.

This is also a remarkable and historic achievement. It’s the first delivery on the promise of quantum computation.

Interesting finding #3: The problem type chosen for the benchmarking was wrong.

The type of problem the Ronnow paper looked at, random spin glasses, made a lot of sense when the project began. Unfortunately, about midway through the project it was discovered that this problem type is theoretically expected to show no difference in scaling between simulated annealing (SA) and quantum annealing (QA). That analysis showed that structure must be added to the problem instances in order to see a scaling difference between the two. So if the objective of an analysis of the D-Wave approach is to observe a scaling difference between SA and QA, random spin glass problems are the wrong choice.
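
For concreteness, ‘random spin glass on the chip graph’ means something like the following: take every available coupler and assign it an unbiased random coupling, with no local fields. The unit-cell construction below is a simplified Chimera-style layout used purely for illustration, not the exact hardware graph.

    import random

    def chimera_edges(m, n, k=4):
        # Couplers of an m-by-n grid of bipartite K_{k,k} unit cells (simplified Chimera-style graph).
        # Qubits are labelled (row, col, side, index), with side 0/1 for the two halves of a cell.
        edges = []
        for r in range(m):
            for c in range(n):
                for i in range(k):
                    for j in range(k):
                        edges.append(((r, c, 0, i), (r, c, 1, j)))        # intra-cell couplers
                if r + 1 < m:
                    for i in range(k):
                        edges.append(((r, c, 0, i), (r + 1, c, 0, i)))    # vertical inter-cell
                if c + 1 < n:
                    for j in range(k):
                        edges.append(((r, c, 1, j), (r, c + 1, 1, j)))    # horizontal inter-cell
        return edges

    def random_spin_glass(edges):
        # Unbiased +/-1 coupling on every coupler, no local fields.
        return {e: random.choice([-1, 1]) for e in edges}

    instance = random_spin_glass(chimera_edges(8, 8))   # an 8x8 grid of 8-qubit cells: 512 qubits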

Interesting finding #4: Google seems to love their machine.

Last week Google released a blog post about their benchmarking efforts that provides an overview of how they feel about what they’ve been seeing. Here are some key points they raise in that post.

  • In an early test we dialed up random instances and pitted the machine against popular off-the-shelf solvers — Tabu Search, Akmaxsat and CPLEX. At 509 qubits, the machine is about 35,500 times (!) faster than the best of these solvers.

This is an important result. Beating a trillion dollars’ worth of investment with only the second generation of an entirely new computing paradigm, by a factor of 35,500, is a pretty damn awesome achievement. NOTE FOR EXPERTS: CPLEX was NOT run to global optimality in these tests. It was timed to when it first found a target solution, not to when it proved global optimality. In addition, Tabu Search is nearly always the best tool when you don’t know the structure of the QUBO problem you are solving. Beating it by this much is a Big Deal.
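
To spell out what “timed to when it first found a target solution” means in practice, here is a minimal sketch of that benchmarking convention. The solver interface is a hypothetical placeholder of mine; CPLEX, Tabu Search, Akmaxsat and the hardware would each need their own wrapper, and this is not the actual harness Google used.

    import time

    def time_to_target(solve_step, target_energy, time_limit=60.0):
        # solve_step: callable that does one unit of work and returns the best energy seen so far
        # (a hypothetical interface; wrap each real solver however its API allows).
        # target_energy: energy of a reference solution, e.g. the best value any solver found.
        start = time.perf_counter()
        while time.perf_counter() - start < time_limit:
            if solve_step() <= target_energy:
                return time.perf_counter() - start, True   # stop at the first hit of the target
        return time.perf_counter() - start, False          # target not reached within the limit

The clock stops the first time the target value is reached; proving global optimality is a separate and usually far longer computation for an exact solver like CPLEX, which is why the distinction matters.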

  • For each classical solver, there are problems for which the hardware does much better.

This is extremely cool also. Even though we are now talking about the best solvers we know how to create, our Vesuvius chip, with about 0.001% of the investment of its competitor, is holding its own.

  • A principal reason the portfolio solver is still competitive right now is actually rather mundane — the qubits in the current chip are still only sparsely connected.

This is really important to understand: making the D-Wave technology better is largely about making the problems being solved richer by adding more couplers to the chip, which is an engineering issue nearly completely decoupled from other questions, such as the role quantum mechanics plays in all of this. It is a really straightforward change to make.
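
To get a feel for how sparse the current connectivity is, compare the number of couplers on a 512-qubit Chimera-style graph with the number of pairwise interactions a fully connected problem of the same size would have. The cell layout here is a simplified illustration, but the counts are in the right ballpark for the current chip.

    def chimera_coupler_count(m, n, k=4):
        # Couplers in an m-by-n grid of K_{k,k} unit cells: intra-cell plus inter-cell links.
        intra = m * n * k * k
        vertical = (m - 1) * n * k
        horizontal = m * (n - 1) * k
        return intra + vertical + horizontal

    qubits = 8 * 8 * 8                        # 512 qubits in an 8x8 grid of 8-qubit cells
    sparse = chimera_coupler_count(8, 8)      # 1472 couplers on the chip-style graph
    dense = qubits * (qubits - 1) // 2        # 130,816 pairs a fully connected problem would need
    print(sparse, dense)

The gap between those two numbers is what “sparsely connected” means in the quote above.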

  • Eyeballing this treasure trove of data, we’re now trying to identify a class of problems for which the current quantum hardware might outperform all known classical solvers.

Now this is really cool. Even for Vesuvius there might be problems for which no known classical computer can compete!

Interesting finding #5: The system has been running 24/7 with not even a second of downtime for about six months.

This is also worth pointing out, as it’s quite a complex machine with the business end at or around 10 millikelvin. This aspect of the machine isn’t as sexy as some of the other issues typically discussed, but it’s evidence that the underlying engineering of the system is really pretty awesome.

Interesting finding #6: The technology has come a long way in a short period of time.

None of the above points were true last year. The discussion is now about whether we can beat any possible computer — even though it’s really only the second generation of an entirely new computing paradigm, built on a shoestring budget.

The next few generations of chip should push us way past this threshold — this is by far the most interesting time in the 15 year history of this project.

20 thoughts on “Six interesting findings from recent benchmarking results”

      • Roger that Geordie. The multi-disciplinary aspects of such an undertaking make me think of this as a modern-day Manhattan Project. Undoubtedly books will be written on this subject and how on earth you managed to get the job done. From Burnaby . . . just makes me break out in a big grin.

  1. Pretty remarkable stuff, especially compared to what gate-model systems are trying to compete against! Hopefully more research goes into adiabatic QC as D-Wave continues to make progress!

    • Research dollars (even true basic research dollars) follow ideas that are useful and productive. I’ve bet the last 15 years of my life on the (until quite recently completely speculative) notion that quantum information science can help us make better computers, and I think that we’ve demonstrated that this is true in real life (as opposed to theoretical arguments and powerpoints). It’s going to take a while for the successes that we’re having to “take”, but once this happens there will be a significant increase in funding for quantum information across the board.

  2. It frustrates me that the media as well as the general public keep trying to put a hard number on the speedup for general computation while completely ignoring the problems that this machine excels at solving. It’s like witnessing the launch of Apollo 11 and saying “A race car can still go faster around a circuit”.

    • Well I think that what’s been achieved really is a major accomplishment. It might take a while for the noise to die down, but I’m hopeful that when it does what’s being done here will be appreciated. The next few generations of systems should be very interesting.

  3. There’s an interesting take at the Thinking Machine Blog, which echoes what I was thinking: “What is perplexing about many of these articles is the emotional tone. For some reason, it appears that some people have an ax to grind regarding D-Wave resulting in articles that are biased.” Any comments on this?

    • I have sympathy for the journalists trying to cover this. It’s complicated. The underlying technology is extremely novel. The bottom line is that the negativity in these articles reflects a failure on our part to communicate effectively what’s been achieved. We’re working on doing this.

  4. Pingback: Scott Aaronson (again) resigns as chief D-Wave critic and endorses their experiments | Wavewatching

  5. Geordie, congrats on your success! Keeping in mind I’m a lay person when it comes to this stuff: any broad estimates on the speed-up you expect for future D-Wave machines? I believe I read that going from 128 to 509 qubits resulted in up to a 300,000x increase, depending on the problem asked. Is the same rate of increase expected to continue?

    • Hi Daniel! Thanks!!! Re. the speed-up: really what we need to know is the relative performance of the best known classical solvers vs. our approach on real world problems, such as those that arise in the types of deep learning scenarios I’ve been describing in earlier posts. The problem type that’s created the latest tempest in a teapot isn’t interesting. Likely what we’ll do is compare the new 1000 qubit chips vs. the best classical solvers on problems with structure and see what we see. Based on initial results I think we’ll handily win on both wallclock time and scaling… but until we see the results it’s impossible to know!

  6. Glad to see the response. I have posted it. I am very interested to see the improvements that will be made over the next 18 months or so, as we see a maturing 1024-qubit design and then the 2048-qubit design. The new Cypress Semiconductor deal, as I understand it, means that D-Wave will be able to fabricate new chip designs every month, so the densification of connections should advance greatly over the next few monthly designs.

    Will there be error correction included? Some academics seem to think that would help speed up the D-Wave chips.

    There are substantially different superconducting quantum designs being produced by other groups. Would D-Wave be able to incorporate anything from that work?

    There have been proposals to scale trapped ions to a million qubits (still a long way away, as they are at 20 or so qubits now), but it was a modular proposal. Any comment?

    If the work in Alberta on dangling-bond quantum dots makes big progress, would D-Wave shift to or test a completely new approach? It seems that the original plan of having all of the best quantum computing people and the patents would allow D-Wave to pivot to new approaches to quantum systems if those allowed for possibly superior qubits and more scaling, assuming those technologies could catch up and progress more rapidly.

    Does the 1024-qubit chip mean that D-Wave now has the staff and process to scale by doubling qubit counts every year instead of quadrupling every two years?

    2014 1024 qubits
    2015 2048 qubits
    2016 4096 qubits etc…

  7. Comment regarding #3: As written, this claim is incorrect. Spin glasses are likely the best benchmark to run. Just not the standard vanilla random couplings on the Chimera topology. The couplings need to be chosen carefully in an unbiased way such that the problem becomes harder. Please look carefully at the provided link to the preprint.

  8. Pingback: D-Wave Systems Comments Regarding Google’s Test of the D-Wave 2 Adiabatic Quantum Computer | Thinking Machine Blog

  9. On the google blog they wrote the following regarding the structured problems:
    “In this example of a structured problem, all variables within one unit cell of the Chimera graph are negatively (ferromagnetically) coupled, while unit cells are coupled randomly to each other.”

    Maybe one should not use SA to solve such a problem on a classical computer. You could devise an algorithm which flips the whole state of each unit cell and not just single spins. Or you could try to see how far simulated quantum annealing gets you.

    • Hi Michael! Yes you could use SA effectively on that specific problem. However the underlying issue is that SA using single spin flips fails because there are barriers between minima in that case — so the specific problem isn’t what’s interesting, it’s the underlying cause of the failure, which is knowledge that can be used to understand what types of problem QA will be most effective on and why.
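
To make the cluster-move idea in this exchange concrete, here is a minimal sketch of a Metropolis step that proposes flipping an entire unit cell at once, which is one way to hop over the intra-cell barriers that trap single-spin-flip SA on these structured instances. The data layout and the energy callback are illustrative assumptions, not anyone’s actual code.

    import math
    import random

    def cell_flip_step(spins, cells, energy, beta):
        # spins: {spin: +/-1}; cells: list of lists of spin labels, one list per unit cell;
        # energy: callable returning the energy of a configuration (illustrative interface).
        cell = random.choice(cells)
        old_e = energy(spins)
        for s in cell:
            spins[s] = -spins[s]              # propose flipping the whole ferromagnetic cluster
        delta = energy(spins) - old_e
        if delta > 0 and random.random() >= math.exp(-beta * delta):
            for s in cell:
                spins[s] = -spins[s]          # reject: undo the flip
        return spins

A full solver would mix moves like this with single-spin flips under an annealing schedule; the point is only that the proposal respects the cluster structure of the instance.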
