The recent “How Quantum is the D-Wave Machine?” Shin et al. paper

Generally I try to avoid commenting on ongoing scientific debates. My view is that good explanations survive scrutiny, and bad explanations do not, and that our role in bringing quantum computers kicking and screaming into the world is to, well, build quantum computers. If people love what we build, and we do everything we can to adjust our approach to make better and better gear for the people who love our computers, we have succeeded and I sleep well.

I am going to make an exception here. Many people have asked me specifically about the recent Shin et al. paper, and I’d like to give you my perspective on it.

Science is about good explanations

In my world view, science is fundamentally about good explanations.

What does this mean? David Deutsch eloquently describes this point of view in this TED talk.  There is a transcript here. He proposes and defends the idea that progress comes from discovering good explanations for why things are the way they are. You would not be reading this right now if we had not come up with good explanations for what electrons are.

We can and should directly apply these ideas to the question of whether D-Wave processors are quantum or classical. From my perspective, if the correct explanation were ‘it’s classical’, that would be critical to know as quickly as possible, because we could then identify why this was so, and attempt to fix whatever was going wrong. That’s kind of my job. So I need to really understand this sort of thing.

Here are two competing explanations for experiments performed on D-Wave processors.

Explanation #1. D-Wave processors are inherently quantum mechanical, and described by open quantum systems models where the energy scale of the noise is much less than the energy scale of the central quantum system.

Explanation #2. D-Wave processors are inherently classical, and can be described by a classical model with no need to invoke quantum mechanics.

The Shin et al. paper claims that Explanation #2 is a correct explanation of D-Wave processors. Let’s examine that claim.

Finding good explanations for experimental results

It is common practice that whenever an experiment is reported demonstrating quantum mechanical (or in general non-classical) effects, researchers look for classical models that can provide the same results. A successful theory, however, needs to explain all existing experimental results and not just a few select ones. For example, the classical model of light with the assumption of ether could successfully explain many experiments at the beginning of the 20th century. Only a few unexplained experiments were enough to lead to the emergence of special relativity.

In the case of finding good explanations for the experimental results available for D-Wave hardware, there is a treasure trove of experimental data available. Here is just a small sample. There are experimental results available on single qubits (macroscopic resonant tunneling and Landau-Zener measurements), two qubits (cotunneling), and multiple qubits (now up to about 500): the eight-qubit Nature paper, the entanglement results at 16 qubits, and the Boixo et al. paper.

Let’s see what we get when we apply our two competing explanations of what’s going on inside D-Wave processors to all of this data.

If we assume Explanation #1, we find that a single simple quantum model perfectly describes every experiment ever performed on the hardware. In the case of the simpler data sets, the experimental results agree with quantum mechanics with no free parameters, as it is possible to characterize every term in the system’s Hamiltonian, including the noise terms.

Explanation #2, however, completely fails on every experiment listed above except the Boixo et al. data (I’ll explain why shortly). In particular, the eight-qubit quantum entanglement measured in Lanting et al. can never be explained by such a model, which rules it out as an explanation of the underlying behavior of the device. Note that this is stronger than saying it is simply a bad explanation: the model proposed in Shin et al. makes a prediction about an experiment you can easily perform on D-Wave processors, and that prediction contradicts what is observed.

Why the model proposed works in describing the Boixo et al. data

Because the Shin et al. model makes predictions that contradict the experimental data for most of the experiments that have been performed on D-Wave chips, it is clearly not a correct explanation of what’s going on inside the processors. So what’s the explanation for the agreement in the case of the Boixo paper? Here’s a possibility, which we can test.

The experiment performed in the Boixo et al. paper considered a specific use of the processors. This use involved solving a specifically chosen type of problem. It turns out that for this type of problem, multi-qubit quantum dynamics, and therefore entanglement, are not necessary for the hardware to reach good solutions. In other words, for this experiment, a Bad Explanation (a classical model) can be concocted that matches the results of a fully quantum system.

To be more specific, the Shin et al. model replaces terms like J_{ij} \sigma^z_i \sigma^z_j with J_{ij} \langle \sigma^z_i \rangle \langle \sigma^z_j \rangle, where \sigma^z_i is a Pauli matrix and \langle \sigma^z_i \rangle is the quantum average of \sigma^z_i. Since all quantum correlations are gone after such averaging, you can model \langle \sigma^z_i \rangle as a classical magnetic moment in a 2D plane. But now it is clear that any experiments relying on multi-qubit quantum correlation and entanglement cannot be explained by this simple model.
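To make that last point concrete, here is a tiny numerical check (my own illustration, not code from either paper): for the two-qubit Bell state (|00> + |11>)/\sqrt{2}, the full quantum correlator \langle \sigma^z_1 \sigma^z_2 \rangle equals 1, while the factorized replacement \langle \sigma^z_1 \rangle \langle \sigma^z_2 \rangle equals 0, so a model built from products of single-qubit averages misses the correlation entirely.

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2) and the Pauli Z matrix.
Z = np.diag([1.0, -1.0])
I = np.eye(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

zz = bell @ np.kron(Z, Z) @ bell   # full correlator <Z1 Z2> = 1.0
z1 = bell @ np.kron(Z, I) @ bell   # single-qubit average <Z1> = 0.0
z2 = bell @ np.kron(I, Z) @ bell   # single-qubit average <Z2> = 0.0

print(zz, z1 * z2)  # 1.0 vs 0.0: the factorized model loses the correlation
```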

I’ve proposed an explanation for the agreement between the Shin et al. model and this particular experiment: the hardware is fundamentally quantum, but for the particular problem type run, this won’t show up, because the problem type is ‘easy’ in the sense that good solutions can be found without requiring multi-qubit dynamics, and an incorrect classical model can be proposed that nevertheless agrees with the experimental data.
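For readers who want to see the shape of such a model, here is a minimal sketch of an SSSV-style classical rotor annealer in the spirit of Shin et al.: each qubit becomes a classical angle \theta_i in [0, \pi], with \sigma^x_i \to \sin\theta_i and \sigma^z_i \to \cos\theta_i, updated by Metropolis moves as the anneal proceeds. This is my own paraphrase for illustration; the linear A(s), B(s) schedule, the temperature, and the sweep counts are placeholder assumptions, not values from the paper.

```python
import numpy as np

def classical_rotor_anneal(h, J, steps=500, temp=0.2, seed=0):
    """Metropolis anneal of a classical rotor model in the SSSV spirit.

    Qubit i is a classical angle theta_i in [0, pi]; couplings enter only
    through products cos(theta_i)*cos(theta_j), i.e. the factorization
    described above, so no quantum correlations ever appear.
    """
    rng = np.random.default_rng(seed)
    n = len(h)
    theta = np.full(n, np.pi / 2)           # start aligned with the 'x axis'

    def local_energy(i, th, s):
        a, b = 1.0 - s, s                   # toy linear annealing schedule
        e = -a * np.sin(th) + b * h[i] * np.cos(th)
        for (p, q), jpq in J.items():       # only the terms touching spin i
            if p == i:
                e += b * jpq * np.cos(th) * np.cos(theta[q])
            elif q == i:
                e += b * jpq * np.cos(theta[p]) * np.cos(th)
        return e

    for step in range(steps):
        s = (step + 1) / steps
        for i in rng.permutation(n):        # one Metropolis sweep per step
            trial = rng.uniform(0.0, np.pi)
            d_e = local_energy(i, trial, s) - local_energy(i, theta[i], s)
            if d_e <= 0 or rng.random() < np.exp(-d_e / temp):
                theta[i] = trial
    return np.sign(np.cos(theta)).astype(int)  # project rotors onto Ising spins

# Example: a 4-spin ferromagnetic chain; the rotor model happily finds the
# all-up or all-down ground state without any multi-qubit quantum effects.
print(classical_rotor_anneal(h=[0.0] * 4,
                             J={(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0}))
```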

How do we test this explanation? We change the problem type to one where a fundamental difference in experimental outcome between the processor hardware and any classical model is expected. If the Shin et al. model continues to describe what is observed in that situation, then we have a meaningful result that disagrees with the ‘hardware is quantum’ explanation. If it instead disagrees with experiment, that supports both the ‘hardware is quantum’ explanation and the explanation that the problem type originally studied is expected to give the same experimental results for quantum and classical models, and so was simply a bad choice for telling them apart.

So a very important test to help determine what is truly going on is to make this change, measure the results and see what’s up. I believe that some of the folks working on our systems are doing this now. Looking forward to seeing the results!

The best explanation we have now is that D-Wave processors are beautifully quantum mechanical

The explanation that D-Wave processors are fundamentally quantum mechanical beautifully explains every single experiment that has ever been performed on them. The degree of agreement is astonishing. The results on the smallest systems, such as the individual qubits, are like nothing I’ve ever seen in terms of agreement of theory and experiment. Some day these will be in textbooks as examples of open quantum systems.

No classical model has ever been proposed that simultaneously explains all of the experiments listed above.

The specific model proposed in Shin et al. focuses on only one experiment, for which there was no expectation of an experimental difference between quantum and classical models, and completely (and from my perspective disingenuously) ignores the entire remainder of the mountains of experimental data on the device.

For these reasons, the Shin et al. results have no validity and no importance.

As an aside, I was disappointed when I saw what they were proposing. I had heard through the grapevine that Umesh Vazirani was preparing some really cool classical model that described the data referred to above, and I was actually pretty excited to see it.

When I saw how trivially wrong it was, it was like opening a Christmas present and getting socks.

77 thoughts on “The recent “How Quantum is the D-Wave Machine?” Shin et al. paper”

  1. The authors of the paper need to familiarize themselves with Occam’s razor. Basically what they’re trying to say is that D-Wave somehow created a classical chip, capable of matching or surpassing the best Intel and Nvidia have to offer, using the same engineering approach with a budget that pales in comparison to theirs.

    • I don’t think you know as much about computing hardware as you think you do: special-purpose hardwired chips *routinely* outperform more general chips. (Wouldn’t it be weird if it were otherwise?)

      This is why one has, in one’s smartphones and laptops, a variety of non-CPU chips ranging from graphics cards, floating-point units (since mostly integrated with CPUs, but historically separate), cryptography accelerators, media codecs implemented in hardware, and FPGAs – and there’s still a whole wild and woolly field of ASICs one could look at. Perhaps you’ve heard of how the Bitcoin community invested a few million dollars to move over to custom ASICs for hashing and completely obsoleted the Nvidia and ATI GPUs they were all using before?

      It’s really not hard to ‘create a classical chip, capable of matching or surpassing the best Intel and Nvidia have to offer’, especially when you *aren’t* using the ‘same engineering approach’.

      • It is EXTREMELY hard to do this if you need to develop your own fabrication process from the ground up (as you do if you’re going to move away from using TSMC and similar). No superconducting processor has ever been developed prior to the V6 chip that is competitive with state of the art silicon, and not for lack of trying.

      • Gwern, I wasn’t referring to specialized devices as hardware implementations designed for a specific task. The examples you gave are for generally non-programmable devices. What D-Wave has developed is a bit more flexible than that. It is competitive with classical computers not simply on one task but on a whole set of problems (optimization problems).

      • > No superconducting processor has ever been developed prior to the V6 chip that is competitive with state of the art silicon, and not for lack of trying.

        But now you’re shifting the goalposts away from George’s original claim.

        Moving back to George:

        > The examples you gave are for generally non-programmable devices.

        Programmability is a spectrum. The more generality you give up, the faster you can go. CPUs are fairly slow but can run anything reasonably well, while GPUs are pretty powerful (and increasingly standard in a lot of non-graphics applications because they can be usefully programmed), but they can still be beaten by FPGAs for problems that don’t fit their programming model well, and FPGAs can themselves be beaten by ASICs for other problems, and you’re not even done there since one can still move to *analogue* hardware for potentially greater performance gains in the right domains…

        > It is competitive with classical computers not simply on one task but on a whole set of problems (optimization problems).

        Which is a narrow specialization, just as I said… Or am I mistaken and adiabatic quantum computation is actually Turing-complete? Can I run Firefox on a D-Wave chip? Will it let me run Quake as I do bootstrapped power simulations and chat on IRC? A Turing-complete CPU can run all that. Can an adiabatic optimizer do that?

        Sol:

        > gwern: Why don’t you come up with your own peer-reviewed, fully-fledged, scientific theory (and published in a prestigious scientific journal), as to how the D-Wave chip WORKS classically?

        With defenders like these, does Dwave really need critics?

      • Gwern, I don’t know if the D-Wave Two is a universal Turing machine or not, but let’s assume that it’s not (I might be wrong):
        1. It’s only the second-generation commercial quantum device; it’s already quite capable of solving optimization problems, and it’s many times faster than the first one.
        2. If the 3rd and 4th generations scale the same way, and I don’t see why they wouldn’t, the implications will be huge. Creating faster and more complex neural nets, protein-folding simulations, etc. will become much easier.
        3. If that doesn’t get you excited and you’re interested in running Firefox, playing games, or anything commercially available classical computers can already do very well, then you’re looking in the wrong place.

        Geordie can correct me if I’m wrong about its capabilities. What I know about the machine is based on information released by D-Wave and the Google AI lab.

      • > 2. If the 3rd and 4th generations scale the same way, and I don’t see why they wouldn’t

        The speedup from specialization is generally one-time; that’s why they wouldn’t. To reuse the Bitcoin example, the transition from GPUs to ASICs meant they could go from megahashes to gigahashes/terahashes in one fell swoop, but past that, it’s back to painful incremental optimization aimed at energy efficiency, which follows, at best, the energy version of Moore’s law. There isn’t going to be another jump to petahashes.

        > 3. If that doesn’t get you excited and you’re interested in running Firefox

        Missing the point.

      • > The speedup from specialization is generally one-time; that’s why they wouldn’t.

        Well, it looks like it wasn’t a one-time speedup. Straight from Google’s AI lab:

        “At 509 qubits, the machine is about 35,500 times (!) faster than the best of these solvers. (You may have heard about a 3,600-fold speedup earlier, but that was on an older chip with only 439 qubits.[1] We got both numbers using the same protocol.[2])”

      • @Geordie

        What’s the point behind developing a superconducting chip that only competes against state-of-the-art silicon? It’s like spending ten million dollars to do as well as something that costs only $500.

        This is only good if you show the potential of scaling performance up and price down rapidly enough that at some reasonable future point in time you end up really out-competing conventional silicon on a dollar-for-dollar basis. Or at least do something useful that’s impossible to do with conventional chips.

        Now, to demonstrate such potential it’s crucial to show that your widget does something that’s non-classical enough, or else conventional silicon can always keep up. It’s this crucial non-classicality that I don’t see D-Wave demonstrating yet.

        To just say “Hey, we have fabricated a superconducting chip from scratch that can do something useful” may be OK, but it really doesn’t have the exciting promise of a real quantum computer.

        PS. If you really think people are choosing the wrong problems to benchmark with, why don’t you declare a set of appropriate problems, and let’s see whether optimized special-purpose classical chips can beat the D-Wave chip on raw times or scaling.

      • @ Rahul: Well the original motivation for building superconducting processors had nothing to do with quantum computation. There were two factors.

        The first was that superconducting device timescales are shorter than those of semiconductor devices. Clocked superconducting digital circuits have been run at around 700 GHz, and it’s entirely feasible that you could build a complex ASIC-type processor clocked at, say, 100 GHz. For certain types of problems that type of advantage is significant.

        The second factor is that you can operate superconducting chips using tiny fractions of the power that semiconducting chips consume. Studies of reasonable expectations of total all-in energy cost have found that the same computational load could have its energy needs cut by factors of about 100, even with all of the cryogenics included.

        The architecture we’ve built has both of these advantages, even if you put aside scaling advantages.

        As to the choice of instance types for benchmarking — this work is underway! I expect that the groups doing this research will publish some results shortly (within a few months).

      • @Geordie:

        When you write that work on selecting the instance types for benchmarking is underway & needs a few months, I’m confused. Are you (or they) actually trying out problems till they find ones that show off D-Wave the best and then they’ll release those problems as the suitable test set?

        That sounds a bit unfair as a comparison set. Is it like we have an intuitive conviction that the D-Wave computer ought to be good at *something*, just that we are not exactly sure what that something is yet?

      • @ Rahul: I’d characterize the situation differently. What is being sought is understanding of the characteristics of problems where quantum annealing is the best algorithm to use. I think there actually is a pretty good understanding of this now, but it’s fairly recent, and the result of a ton of work by a lot of people.

        The step we’re on now is to verify that (a) we do know the underlying features of instances that make the quantum / classical separation appear, and (b) that this separation is seen empirically in hardware.

        Assuming these are both successful, then the project transitions into applying the technology to problems of that type that occur in industrial applications with new generations of processors. The area I’ve been focusing on myself is in training neural nets.

      • @Geordie:

        Since you mentioned the power consumption advantages of superconducting chips as an original goal, what is the power consumption of your current chip?

        Say, when you use it on these neural nets, have you done a comparison of D-Wave versus a regular commodity silicon chip?

        We seem to focus on quantumness, but I’ve never seen much discussion of power. Have I missed a relevant blog post?

    • “capable of matching or surpassing the best Intel and Nvidia have to offer”

      George: Troyer and his group have clearly shown that they can outperform the D-Wave Two using a GPU. And since there is no proven scaling advantage, this will persist for larger numbers of qubits as well.

      • Hi Michael! I don’t believe this is true. Here are some reasons.

        (a) There is a LOT of room to improve the ‘prefactor’ in this type of design. Because superconducting technology is so immature, it is probably possible to drop this prefactor by another factor of 10 or so each year for the foreseeable future (3-5 years at least), even as the device count doubles each year.

        (b) The scaling that’s being observed in the current generations of processors IS NOT THE INTRINSIC SCALING OF THE UNDERLYING QUANTUM ALGORITHM. It’s the empirically measured scaling, which is different. There is strong evidence that the scaling behavior is currently dominated by what we call Intrinsic Control Errors (ICE), which are mis-specifications of the problem parameters. As the ICE is reduced, the scaling curves observed will flatten out: we saw this on Rainier, where the scaling curves changed with each generation in going from R4 to R7.

        (c) As I discussed above, the problem type studied in the work you’re thinking of is not the best one to use; other instance classes will see different relative performance.

        (d) It is straightforward to modify the processors to increase the number of couplers per qubit (the connectivity), reduce the ICE, add new types of device (such as XZ couplers that would make the Hamiltonian universal for quantum computation), etc.

        There is a misconception that there is “A D-Wave Machine”. This isn’t correct: we built D-Wave to be a process, not a specific system. Our fundamental technology strategy is to evolve processors using high-throughput experimentation. Every three months or so we release a new design incorporating what we’ve learned.
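        To illustrate the difference between (a) and (b) numerically, here is a toy sketch (all of the numbers are invented for illustration): cutting the prefactor tenfold buys a constant factor of 10 at every problem size, while only a change in the exponent alters how the runtime grows.

        ```python
        def toy_runtime(n, prefactor, scaling):
            """Toy model: runtime = prefactor * 2**(scaling * n) seconds."""
            return prefactor * 2.0 ** (scaling * n)

        for n in (50, 100, 200):
            slow = toy_runtime(n, prefactor=1e-3, scaling=0.10)  # baseline
            fast = toy_runtime(n, prefactor=1e-4, scaling=0.10)  # 10x prefactor cut
            flat = toy_runtime(n, prefactor=1e-3, scaling=0.05)  # better scaling
            # The prefactor cut is a fixed 10x win at every size; the smaller
            # exponent wins by ever-larger factors as n grows.
            print(f"n={n}: {slow:.3g}s vs {fast:.3g}s vs {flat:.3g}s")
        ```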

  2. gwern: Why don’t you come up with your own peer-reviewed, fully-fledged, scientific theory (and published in a prestigious scientific journal), as to how the D-Wave chip WORKS classically? If and when you do that, then we will take you seriously!

  3. Pingback: D-Wave Systems Responds to “How “Quantum” is the D-Wave Machine?” by Shin, Smith, Smolin, Vazirani | Thinking Machine Blog

  4. Pingback: Shtetl-Optimized » Blog Archive » More “tweets”

  5. gwern: I think you should read this paper first about the architecture of the D-Wave chip, then give us your scientific explanation as to its inner workings, then lecture us about your “prodigious” knowledge of various chips, etc. After you read the paper, if you can understand it, give us your critique of it! Here is the paper, and good luck: http://arxiv.org/pdf/1401.5504v1.pdf

    • Sol, hiding behind a paper is not a defense. You don’t need to be Orson Welles to find a movie bad, and you don’t need to be a farmer to know cow poop smells bad. Custom hard-wired chips are always faster than a general-purpose CPU. The onus is not on critics to provide a fully fleshed-out theory of what is going on, but on D-Wave to demonstrate quantum speedups.

  6. gwern: Your non-answer above is a cop-out! Remember the title of this blog post, namely, “How Quantum is the D-Wave Machine?”. If you skeptics and doubters assert that it’s NOT using quantum mechanics to do its computations, then it must be doing something “classical”. And if you believe that, then the onus is on you guys to come up with a classical explanation of the workings of this chip. D-Wave and its supporters firmly believe that it’s utilizing quantum mechanical effects, based on numerous scientific papers put out by D-Wave, USC, and outside scientists. It’s the science we go by and that guides us. Now it’s your turn to come up with an alternate scientific explanation as to how this chip is working at all! The current speed of this generation of chip is a secondary question. I don’t need more arguments; I need a SCIENTIFIC EXPLANATION.

    • While gwern really is doing a great job here, I thought I might butt in, considering I do research in the field of quantum computation.

      Just to straighten out terminology: of course it’s using quantum mechanics to do its computations. The outstanding question is whether or not it’s making use of quantum mechanics *productively*–i.e., are there entanglements and coherences that allow the system to explore more of the solution space than a corresponding classical annealer might be able to?

      The most recent Boixo paper (http://arxiv.org/abs/1304.4595) seems to suggest that the D-Wave doesn’t actually scale any better than a classical device for random-bond Ising instances. Geordie’s retreat (above) seems to be that researchers aren’t testing the right problem instances. But as far as I know, and this is really the most important point here: ***there aren’t any classes of problems that have been shown to scale better on the D-Wave than on classical hardware***.

      That’s pretty damning. And you can shout all day about how you’ve made a great, specialized quantum super quantum we swear it’s quantum piece of hardware, but when its scaling behavior is no different from that of classical computers, a quantum computer you do not have.

      • The D-Wave processor may not exhibit any obvious scaling advantage yet due to its current architecture of sparsely connected qubits, rather than due to the quantum mechanics. With more connections or couplers, that could eventually show up. At least that’s what I gather from the Google blog.

      • @Ramsey44
        Higher connectivity would certainly be interesting, but it might be difficult to change the layout of the D-Wave chip in such a way. Does D-Wave have plans to increase connectivity?

      • Michael: See my reply above to your earlier comment about connectivity. Yes, it can be changed (it has changed nearly every generation since we started).

      • calef: The problem is (and you guys don’t seem to either know it or acknowledge it) that even top-notch scientists working in this field DON’T really know which quantum mechanical effects play a significant role in computational “productivity”! Is entanglement very important? It’s NOT known. Is superposition important? It’s NOT known. Is tunnelling important? It IS known. Is discord important? It’s NOT known. Only through empirical evidence and experimentation on these prototype machines can we learn what’s important and useful, because not much is known about open quantum systems. I, therefore, think that it’s only fair that the process of experimentation should be allowed to go forward for at least a year or two, if not longer, to find out what’s important and what is not. So, guys, please be patient! It will ALL be decided in the next few years, hopefully.

      • Hi Calef! The ‘of course it’s using quantum mechanics’ part is a recent thing. For quite a while we had the same type of reaction from some QUIT scientists about that question. I’m glad that that has been settled.

        Learning that the original problem type we picked to look at isn’t the best to show quantum / classical scaling differences isn’t a retreat. It’s science. We didn’t know that when we started. We learned something and we evolve. We don’t stop when there is a setback. We know that not everything is going to work. In fact, we assume that 99/100 things we (and everyone else in the field) try aren’t going to work.

        As to your assertion that there aren’t any classes of problems that have been shown to scale better on the current generation of chip: as you know there are many people working on this and trying new ideas. If you check out my response to Michael above, you’ll see some of the things we’re trying.

        As to setting expectations with a totally new architecture like this: it’s going to take a long time to figure all this out. If you really want to understand where we’re at, take some time to look through the architecture paper, and keep in mind that virtually anything in the design can be changed to overcome any fundamental obstacles that may arise as the design matures.

  7. I think IBM is feeling the heat! ;-) Can you blame them, as D-Wave’s machine is squarely directed at their core business? I think D-Wave got their attention, and that says a lot about what D-Wave has accomplished so far.

    As much as I wanted this to be about science, we are in a business environment, and the competition in the quantum computing field is fierce. In this environment perception also plays a part, and frankly I think that’s just normal.

    I would like to see D-Wave do that Ramsey number thingy ;-) and come up with a new unknown number. I believe they ran out of qubits the last time, but maybe the D-Wave Two can do it, or do we need to wait for the 1,024-qubit chip? I think that would be really exciting, and I want to see the classical processors beat that! ;-)

  8. Scott Aaronson:

    “I think Geordie’s response obfuscates some basic points.

    Most importantly, there’s currently no known class of problems—as in not one, zero—on which the D-Wave machine gets any convincing speedup over what a classical computer can do. Random Ising spin problems were trumpeted as such a class by people misinterpreting the McGeoch-Wang paper (which D-Wave gave considerable publicity to). But then Troyer et al. found that, when you examine things more carefully, the speedup claims evaporate completely. So for Geordie to now say, “oh, that was never a good class of instances to look at anyway” takes some impressive chutzpah. Pray tell, then what is a good class of instances, where you get a genuine speedup? And if you can’t identify such a class, then do you retract all the claims you’ve made in the last year about currently being able to outperform classical computers?

    Moving on to the Shin et al. paper, I actually agree with Geordie that the 8-qubit experiments done a couple years ago made a pretty good prima-facie case for small-scale entanglement in the D-Wave devices: that is, entanglement within each “cluster” in the Chimera graph. (And in any case, we already knew from the Schoelkopf group’s experiments that current technology can indeed entangle small numbers of superconducting qubits.)

    But what about large-scale entanglement, involving all 100 or 500 qubits? As far as I know, the only evidence we had that such entanglement was present really did come from the Boixo et al. correlation analysis. And those correlation patterns are precisely what Shin et al. now say they can reproduce with a classical model. Of course, that doesn’t prove that there’s no large-scale entanglement in the D-Wave machine, but it does undermine the main piece of evidence we had for such entanglement.

    Most tellingly, Matthias Troyer—one of the lead authors of the Boixo et al. paper, and someone I trust as possibly the most evenhanded authority in this entire business—tells me that the Shin et al. paper caused him to change his views: he didn’t know if the correlation patterns could be reproduced in any reasonable model without entanglement, and now he knows that it’s possible.

    It’s worth stressing, again, that even if large-scale entanglement is present in the D-Wave machine, one can still use the Quantum Monte Carlo method from the Boixo et al. paper, so there’s still no reason to expect any speedup over classical computing with the current technology. In other words: in the “debate” between Boixo et al. and Shin et al., both sides completely agree about the lack of speedup. The only point of disagreement is whether the lack of speedup is because D-Wave is doing a form of quantum annealing that can be efficiently simulated classically, or because they’re basically doing classical annealing. (More to the point: is there large-scale entanglement in the device or isn’t there?)

    So, here’s a summary of what we currently know about the D-Wave devices:

    1. Small-scale entanglement. Pretty good evidence that it’s present (though still not a smoking gun).

    2. Large-scale entanglement. Who the hell knows?

    3. Speedup over classical computers. No evidence whatsoever, and indeed careful comparisons have repeatedly found no speedup (though D-Wave’s supporters keep moving the goalposts).”

  9. Pingback: One Video to Rule Them All | Wavewatching

  10. Rahul: The D-Wave Two uses about 15 kW (15,000 watts). Compare this to the fastest supercomputer in the US, the so-called “Titan” at Oak Ridge National Lab, which uses about 8.2 MW (8,200,000 watts).

    • Why did you think of Titan as a suitable benchmark? I’d have thought one of the machines that McGeoch/Wang or Matthias Troyer or someone like that used to run the competing classical runs for CPLEX, or Geordie’s neural networks, or something like that would be a more reasonable comparison.

      • Rahul: Those figures and comparisons are from NASA’s “QuAIL” lab. They compared DW2 power consumption to the average consumption of the 10 top supercomputers in the US. You can look it up if you wish. I just gave you the top supercomputer’s power consumption.

      • OK, thanks! The way I see it, Troyer and others were using off-the-shelf Intel servers to run their classical CPLEX runs. At worst that’d draw a kilowatt.

        In which case, I’m still confused about Geordie’s assertion earlier that one crucial advantage of the D-Wave architecture is that “you can operate superconducting chips using tiny fractions of the power that semiconducting chips consume.”

        Tiny fraction? D-Wave seems to be drawing 15x as much power as the closest conventional silicon competitor.

      • The cooling system uses most of that power, since the chip has to be cooled down to about 20 mK. It’s really difficult to get down to such low temperatures. The chip itself uses next to nothing. Because of that, the next generations will draw practically the same power.
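        If that’s right, the arithmetic is worth spelling out. A toy sketch (the flat 15 kW draw and the doubling qubit counts are assumptions for illustration, not D-Wave specifications):

        ```python
        # Toy arithmetic, not measured data: assume the ~15 kW wall draw is
        # almost entirely the cryostat and stays flat while qubit counts
        # double each generation, so watts per qubit halve every generation.
        wall_watts = 15_000.0   # assumed fixed, cryostat-dominated draw
        qubits = 512            # assumed starting qubit count
        for generation in range(1, 5):
            print(f"generation {generation}: {qubits} qubits, "
                  f"{wall_watts / qubits:.1f} W per qubit")
            qubits *= 2
        ```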

  11. Pingback: Shtetl-Optimized » Blog Archive » Umesh Vazirani’s response to Geordie Rose

  12. This blog entry makes a fallacious use of Occam’s razor. Science is about good explanations, but the explanation can change depending on the scale at which the experiment is performed. If I zoom in enough on a bicycle, I will start seeing electrons bound to nuclei and will correctly conclude that I need quantum mechanics to explain what I see. Moreover, quantum mechanics can correctly predict the behaviour of the bicycle at a human scale. Should I conclude that the bicycle is quantum mechanical on a human scale because it is the only theory that ‘perfectly describes every single experiment ever done’? The logic used in this blog entry suggests you should.

    Experiments on a few qubits have shown quantum mechanical effects, but the point of Shin et al. is that on large scales (where there is a potential for computational applications) there is an alternative classical explanation.

    • +1

      Some of these arguments are like insisting there’s quantum behavior in my laptop. Sure there is, if you drill deep down. But does this quantumness have anything at all to do with why my laptop performs better than my neighbor’s, when he doesn’t insist that his is a Quantum Laptop?

      • This post is helpful to those who came pretty late to this debate and are too lazy to read; otherwise, thanks, Captain Obvious!

      • @Geordie: It is not clear to me how such a demonstration can be done, but sure, the moment I see convincing scientific data demonstrating that there is large scale quantum behaviour I’ll be happy to comment about it. I’ll also be happy to comment when you demonstrate a computational speed-up, regardless of its quantum or classical origin. In general, I value the type of research that you (D-wave) are doing and I would be thrilled to see something useful coming out of it. Already, the agreement between the experimental data and QMC simulations was a real surprise to me which further stimulated my interest.

        For the moment however, I am hugely disappointed by the harm that you are causing to our field by misleading the public, the current blog post being only the tip of the iceberg. It’s hard to imagine how you could have written this post without realizing you were misleading your readers with a fallacy (but I’m willing to give you the benefit of the doubt). Moreover, I attended a presentation by D-Wave’s Colin Williams and Murray Thom this week at the University of Sydney, and they too were misleading their audience, e.g., by insinuating that a quantum computer can efficiently solve NP-complete problems, by ignoring Sanjeeb Dash’s results when presenting the fastest commercially available SAT solver (which according to them is 11,000 times slower than the D-Wave 2), and by ignoring Matthias Troyer’s classical simulated annealer for that matter.

        The research program you are pursuing is very ambitious and you have already produced interesting results. Let these results speak for themselves, no need to distort the facts.

      • @ David: I respect you for having the courage to voice your opinions publicly and put your name on them. This field needs more of that.

        The only thing I can ask of you is to try to keep an open mind and realize that you may be wrong. You are hearing what we’re saying through some sort of weird filter that is distorting the message. No-one has ever claimed that this approach does any of the things you wrote above, and the particular post you claim is built on a fallacy is not.

        Another thing: quantum computing is not ‘your’ field. Often I hear this phrase from the ‘old guard’ in QC. It carries an implication that an effort to build machines must get permission or validation from you in order to be successful. That is not the case, and it would serve you and your colleagues well to understand this.

      • @Geordie:

        That’s the most bizarre misinterpretation of the phrase “our field” I’ve seen. When I say “our neighborhood” or “our city” or “our company” that doesn’t imply my having exclusive ownership or anyone needing permission to enter my community.

        In most cases it connotes a sense of belonging to a cohort. I’ve no clue why David’s response offends you. There’s no question of permissions, but as to validation, I’ve no idea why that’s a sore spot for you. Most new technologies go through a phase of intense scrutiny, validation, and often criticism. That’s how things ought to be, I think. It’s a feature, not a bug.

  13. Dear Geordie

    Please let me say that I (and many people) enjoy both your Hack the Multiverse weblog and the spirited responses that your writings elicit from the quantum old guard (Scott Aaronson most especially).

    Young people (especially) should appreciate that the STEM questions being debated are unlikely to result in right-or-wrong, yes-or-no, black-or-white answers. Fortunately! Because the interests of my own profession (medical researcher) — and many other professions too — would be very badly served by any binary outcome to this debate.

    To advance this point-of-view, I have posted to Scott Aaronson’s Shtetl Optimized website, in response to the recent Shin et al preprint (arXiv:1401.7087), a mathematically and technologically plausible scenario (as it seems to me) in which D-Wave’s business model prospers exceedingly, and yet the “old school” academic enterprises of BosonSampling and Shor factoring prosper too.

    If by this medically-biased narrative some young Juliet-engineer (among the D-Wave Capulets) establishes a collegial partnership with some young Romeo-mathematician (among the Shtetl Optimized Montagues), then the post’s purpose is served. Although come to think, Shakespeare’s Friar Laurence (who arranged Juliet and Romeo’s covert marriage) arguably did neither of these young lovers any great favor.

    In any event, please accept this sincere appreciation of D-Wave’s strategically central computational objectives, and of D-Wave’s cumulative innovations in engineering, science, and mathematics that advance so impressively toward those objectives, along with my hope, and confident expectation too, that these advances will continue, at an accelerating pace, through many years and decades to come.

    • Students (especially) who prefer not to wait until April for the slow workings of the Shtetl Optimized pipeline can contemplate medical/engineering perspectives upon classical/quantum computation at these (Google Drive) links:

      •  There’s Plenty of Room in the Middle

      •  Variation-Distance Considered Misleading

      To read a type-set preview, you may temporarily copy-and-paste these essays’ comment-codes into Shtetl Optimized’s real-time comment-previewer (here … but please don’t post either essay yourself).

      Aside  A realtime MathJax/LaTeX equation-viewer (like Shtetl Optimized’s) would make it very much easier to convey mathematical points here on Hack the Multiverse.

      Conclusion  Most importantly — from a practical medical research perspective — please accept this appreciation and thanks in regard to D-Wave’s outstandingly foresighted enterprise objectives, scientific means, and engineering capabilities, which in aggregate are sufficiently encompassing in width, in depth, and in context that (as it seems to me) D-Wave and its skeptics are destined eventually to be allies, partners, and even friends. That would be GOOD for EVERYONE!

    • Romeo and Juliet? Once a romantic always a romantic ;-)

      I’m with you, doc! Don’t know why there’s so much bad blood in this field. I for one welcome a healthy debate and the hard science to come out of this that benefits everyone and advances science. I don’t see why this exploration can’t benefit all fields of quantum research, be it AQC, the gate model, etc. We need more research and less drama.

      Looking forward to more exciting discoveries from D-Wave :)

    • ramsey44 wonders  “Don’t know why there’s so much bad blood in this field [QC/QIT].”

      At least some quantum-related scholarly rancor has been fostered by (what amounts to) an accident of scientific history.

      Back in 2002 and 2004 a team of quantum researchers (who as individuals were outstandingly qualified) produced the Quantum Information Science and Technology (QIST) Roadmaps (LANL reports LA-UR-02-6900 and LA-UR-04-1778).

      At least one school of thought considers that the 2002 and 2004 QIST roadmaps already were (and still are) an outstandingly valuable scientific resource, such that very considerable further value could be realized if the initial QIST intent “to revisit these important issues in future updates” were followed-up.

      Unsurprisingly, not everyone agrees with this pro-QIST assessment … not that universal agreement is necessary, or feasible, or even desirable in science!

      In any event, for whatever reason(s), no subsequent QIST updates appeared. One regrettable consequence has been that quantum scientific issues that can be openly surveyed and responsibly debated QIST-style, by panels of stakeholders representing diverse viewpoints — to everyone’s lasting benefit — have instead been discussed (in the public’s eye) mainly in the blogosphere.

      The result has been plenty of vigorous open discourse (which is good) accompanied by considerable portions of unbalanced and/or inexpert rhetoric (which is not so good).

      Overall, the blogosphere’s various quantum debates have (arguably) not advanced quantum research even as effectively as the original QIST roadmaps.

      Conclusion  It’s time for QIST-II.

        Interesting, I didn’t know they actually attempted something like this (QIST), but we’re in the age of the Internet, so it’s a wild wild west ;-) I think having a vigorous scientific debate is also good, even if it’s mildly rhetorical. When there’s too much, however, it can be unfortunate, because we get distracted and it deteriorates our ability to have a clear conversation.

    • In regard to QIST-II, a mainly-for-fun exercise (one that points toward serious research/enterprise opportunities too) is to systematically transform Paul Graham’s celebrated Y Combinator essays into quantum-physics essays by the following recipes:

      Recipe 1  Transpose Paul Graham’s essay Beating the Averages (April 2003) into a quantum essay Hilbert-Style quantum dynamics is Blub; Grothendieck-style varietal dynamics is Lisp.

      Recipe 2  Transpose Paul Graham’s essay Black Swan Farming (September 2012) into a quantum essay The Black Swans of Linus Pauling, John von Neumann, and Richard Feynman.

      Recipe 3  Transpose Graham’s essay Frighteningly Ambitious Startup Ideas (March 2012) into a quantum essay QIST-II’s Terrifyingly Transformational Roadmap.

      In regard to Recipe 3, Graham provides sensible advice:

      You’d expect big startup ideas to be attractive, but actually they tend to repel you. And that has a bunch of consequences. It means these ideas are invisible to most people who try to think of startup ideas, because their subconscious filters them out. Even the most ambitious people are probably best off approaching them obliquely. […]

      Empirically, the way to do really big things seems to be to start with deceptively small things. Want to dominate microcomputer software? Start by writing a Basic interpreter for a machine with a few thousand users. Want to make the universal web site? Start by building a site for Harvard undergrads to stalk one another.

      Conclusion  D-Wave’s present quantum devices and quantum systems engineering philosophy plausibly comprise — as it seems to me and many folks — “big things that are starting as deceptively small things” … precisely in the sense of Paul Graham’s essays.

    • @ John: The types of people we work with aren’t confined to one particular discipline — there are many world class mathematicians, computer scientists, and physicists here as well. What is true of everyone here is that we are all committed to doing everything we can to bring quantum computers into the world — that’s our objective, and we’re going to achieve it.

      On the QIST roadmap topic: I don’t think it matters much one way or the other whether there is one. The only efforts that matter in building new types of computers are substantial, focused ones like D-Wave. The minimum buy-in for a serious effort is ~$100M, and the actual expense you can expect to invest before full traction would be an order of magnitude more than that. Anything less than that has zero chance of making a difference in a capital-intensive business like introducing an entirely new computing paradigm. Any substantial effort will have its own internal roadmap.

      • Geordie, your points are cogent (and I agree with all of them). And yet it’s fair too to wonder whether additional points might be considered. For example, in regard to the proposition “the minimum buy-in for a serious effort [in building new types of computers] is ~$100M”; this is (after all) not such a very large investment when amortized over seven billion people that include (say) 5×10^7 STEM professionals. Moreover, the investment threshold associated to algorithmic advances is lower still … Bill Gates’ $30M investment in Schrödinger LLC is one example.

        When we consider the monotonic stock-price run-up in the computational simulation sector over the past decade (e.g., ANSYS), these simulation-centric computation-centric investment strategies look pretty good. Particularly because Schrödinger-type algorithmic advances plausibly will accelerate D-Wave-type hardware advances, and vice versa.

        So please accept my very best wishes, and confident expectations too, for D-Wave’s continued success.

  14. Pingback: La respuesta de D-Wave a la crítica a sus ordenadores "cuánticos" | La Ciencia de la Mula Francis

  15. Pingback: He Said She Said – How Blogs are Changing the Scientific Discourse | Wavewatching

  16. hi G-man! have been following your exploits 2ndhand/from sidelines since the very beginnings of Dwave over a decade ago…. should have said something sooner…. just wanna say no matter what the critics/armchair quarterbacks/pioneer wannabes say….. you guys are doing some of the coolest hardcore science & engr in the world…. dont know if you will succeed in the long run (in the long run we are all dead! –keynes!) but you have more cojones than a whole seminar of Phd scientists….. keep up the good work! …. dwave/inception of qm computing dream … you make Babbage look like a mere amateur/crank! :razz:

  17. Pingback: D-Wave: Is $15m machine a glimpse of future computing? | OBDM

  18. Pingback: D-Wave: Is $15m machine a glimpse of future computing? - Standard News

  19. Pingback: D-Wave: Is $15m machine a glimpse of future computing? – BBC News | Techy News Today

  20. Pingback: Quantum tech disappoints, but only because we don’t get it | News Alternative

  21. Pingback: Kwantumdeskundige door Google ingehuurd - Geleerd uitschot
