More on the TR interview

Back from choking some folks out. Gots ta represent.

OK in this second installment of Geordie’s Easter Weekend Vent-a-thon, I’d like to offer some food for thought related to Scott Aaronson’s latest blog post. Specifically I’d like to take him to task on his rationalization of why he is so crusty, the most recent examples of which are:

1. I asked Geordie Rose (the founder of D-Wave and sometime commenter on this blog) twice for actual information about the speedup D-Wave claimed to be seeing over classical algorithms, and he never provided any.

I don’t remember actually ever receiving this request from Scott, so I apologize for not being timely in my response. For the record, here is what I have said, in some of the posts here, several interviews, and the demo itself: (A) Orion is roughly 100x slower than state of the art algorithms running on a $1000 PC for solution of small Maximum Independent Set problems, and (B) the only way scaling can be extracted is empirically, and we can’t build big enough systems (yet) to answer scaling questions.
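
For readers who haven't run into it, Maximum Independent Set (MIS) asks for the largest set of vertices in a graph with no edge between any two of them. Here is a minimal, purely illustrative brute-force sketch in Python. This is emphatically not the state-of-the-art code we benchmark against (those solvers use clever branch-and-bound); it just pins the problem down by example.

```python
# Illustrative only: brute-force Maximum Independent Set (MIS).
# NOT the state-of-the-art solver referred to above; just a
# definition-by-example for the problem being benchmarked.
from itertools import combinations

def max_independent_set(n, edges):
    """Largest set of vertices with no edge between any pair.
    O(2^n) -- only sensible for the small instances discussed here."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):  # try the largest candidate sets first
        for subset in combinations(range(n), size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return set(subset)
    return set()

# 4-cycle 0-1-2-3-0: the maximum independent sets are {0, 2} and {1, 3}.
print(max_independent_set(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```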

2. Instead of answering my and others’ technical questions, D-Wave decided to attack academic science itself. Here’s what Ed Martin, the CEO of D-Wave, told an interviewer from ITworld: “Businesses aren’t too fascinated about the details of quantum mechanics, but academics have their own axes to grind. I can assure you that our VCs look at us a lot closer than the government looks at the academics who win research grants.” The track record of companies that engage in this sort of behavior is not good.

I can’t see how you can argue that we haven’t tried to answer technical questions. I go out of my way to answer any questions that are asked. If you have any, let me know and I’ll answer them if I can.

Casting Herb’s (not Ed’s; there’s something ironic there) statements as attacks on academic science is ridiculous. The above statement has been taken WAY out of context. The original question was how our investors do due diligence on the company without any peer-reviewed publications, which is a fair question. Herb’s answer, which is a true statement whether or not you like it, is that financing a company like this puts us under levels of scrutiny far beyond the norm, either for a start-up company or for a big research grant. Herb’s statement isn’t an attack on academic science. It’s that the bar we have to get over to raise cash is higher than it is for other start-ups or academic groups.

I don’t know what you mean by that ominous last statement. You mean like Intel, Microsoft, IBM, GE?

3. I read article after article claiming that quantum computers can solve NP-complete problems in polynomial time. I reflected on the fact that a single one of these articles will probably reach more people than every quantum complexity paper ever written.

This is probably true, although only 50 people in the world understand what you just said. And note that I have never said that, and in fact go out of my way to state a point of view that is very similar to Scott’s.

4. It became clear, both from published interviews and from talking to Jason, that D-Wave was doing essentially nothing to correct journalists’ misconceptions about what sorts of problems are believed to be efficiently solvable by quantum computers.

While I personally find questions about efficiently solvable problems fascinating, these issues are remote from what we are actually doing here.

This is worth emphasizing, because I thought it was obvious, but it turns out a lot of people don’t get this. Most of the poorly-thought-out comments about what we’re trying to do have come from theoretical computer scientists, who assume that the things they hold dear are likewise treasured by everyone else. Because they worship efficiency, they have assumed that efficiency is the objective of our projects, when I have repeatedly said it’s not.

Here is what I care about, for high-value discrete optimization problems that people actually care about: (A) getting to the same level of accuracy as state-of-the-art heuristics in less time, and/or (B) given an allotment of time, producing a more accurate solution than state-of-the-art heuristics.

When people ask what the systems we’re building are for, I tell them that they are for solving discrete optimization problems (which they are). For someone for whom complexity theory is the fulcrum of the universe, this might be interpreted as a statement that we plan to exactly solve all discrete optimization problems, efficiently. For most people who have to deal with these types of problems in real life, the questions arising have a slightly different flavor: “Can you beat our simulated annealing approach?”, which seems like a much more intelligent question. A sketch of that kind of baseline follows below.
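
Since simulated annealing keeps coming up, here is a hedged sketch of the kind of classical baseline that question implies: anneal against a fixed wall-clock budget and keep the best solution found, which is exactly criterion (B) above. The penalty weight, cooling schedule, and budget are illustrative assumptions, not anyone’s tuned or production settings.

```python
# Hedged sketch: simulated annealing for MIS under a fixed time budget
# (criterion (B) above). Penalty weight and cooling schedule are
# illustrative assumptions only.
import math
import random
import time

def sa_mis(n, edges, budget_s=1.0, penalty=2.0, t_hot=2.0, t_cold=0.01):
    def energy(sel):
        # Lower is better: reward set size, penalize edges inside the set.
        size = sum(sel)
        conflicts = sum(1 for u, v in edges if sel[u] and sel[v])
        return -size + penalty * conflicts

    sel = [False] * n
    e = energy(sel)
    best, best_e = sel[:], e
    start = time.time()
    while time.time() - start < budget_s:
        frac = (time.time() - start) / budget_s
        temp = t_hot * (t_cold / t_hot) ** frac   # geometric cooling
        v = random.randrange(n)
        sel[v] = not sel[v]                       # propose a single flip
        e_new = energy(sel)
        if e_new <= e or random.random() < math.exp((e - e_new) / temp):
            e = e_new                             # accept (Metropolis rule)
            if e < best_e:
                best, best_e = sel[:], e
        else:
            sel[v] = not sel[v]                   # reject: undo the flip
    return {v for v in range(n) if best[v]}

# Same 4-cycle as before; any reasonable budget finds a size-2 set.
print(sa_mis(4, [(0, 1), (1, 2), (2, 3), (3, 0)], budget_s=0.1))
```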

12 thoughts on “More on the TR interview”

  1. Pingback: Shtetl-Optimized » Blog Archive » D-Wave: Still propagating

  2. Hi Geordie,

    I’ve been thinking about the D-Wave claims for a while now, and have a few questions on the algorithmic side (for now I’ll assume you’ve managed to build a QC that maintains coherency throughout its operation, and is hence actually operating as a QC):

    1.) What algorithm are you using to obtain the speedup for discrete optimization? Is it essentially an adiabatic version of Grover’s algorithm? Or have you developed a new (unpublished) quantum algorithm that uses some hitherto undiscovered mechanism to outperform classical algorithms?

    2.) If you’re using a Grover-like algorithm to obtain your speedup (or rather, your planned speedup), you can at best hope for a quadratic speedup over an equivalent classical algorithm. If this is the case you may be able to outperform the best current heuristics-based optimization programs. However, I can’t see how a “mere” quadratic speedup is likely to be economical. Let’s say you build a machine that can solve an instance of some problem in 1 second, and that same problem takes 5 seconds to solve on a 100-node cluster. How do you know that your QC is going to be a more economical solution than just buying another 400 nodes for the classical cluster? (Obviously this argument makes plenty of assumptions, e.g. about the scaling of current state-of-the-art optimization algorithms on clusters…)

    I’d be interested to hear your thoughts on these issues.

    Regards,
    Joe

  3. Hi Joe,

    1. There are several different algorithms that we have run so far. The system itself allows the qubit biases, coupler strengths, and single-qubit barrier heights to be independently controlled. Thinking about the system like Grover search is not the right mental picture, because in (adiabatic) Grover search you’re looking for one state (which is “right”), whereas in the 2D Ising model in a magnetic field the answers are ranked worst to best as energies decrease. Also, in adiabatic Grover search the types of states that the system evolves through don’t look like the states of our system.
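
    To be concrete about “ranked worst to best as energies decrease”: here is a minimal sketch of an Ising-type energy function with programmable biases and couplers. The sign conventions here are my illustrative choice, not a hardware spec.

```python
# Sketch of an Ising objective with programmable biases h and couplers J.
# Sign conventions are illustrative; conventions vary in the literature.
def ising_energy(s, h, J):
    """s: list of +1/-1 spins; h: per-spin biases;
    J: dict mapping (i, j) pairs to coupling strengths.
    Every spin configuration gets an energy; lower-energy
    configurations are better-ranked answers."""
    return (sum(h_i * s_i for h_i, s_i in zip(h, s))
            + sum(j_ij * s[i] * s[j] for (i, j), j_ij in J.items()))

# Two antiferromagnetically coupled spins with no bias prefer to disagree:
print(ising_energy([+1, -1], [0.0, 0.0], {(0, 1): 1.0}))  # -1.0 (ground state)
print(ising_energy([+1, +1], [0.0, 0.0], {(0, 1): 1.0}))  # +1.0 (excited)
```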

    One of the algorithms we’ve developed, which I particularly like going forward as the system scales up, hasn’t been published yet. We probably will publish it though.

    2. The types of problem we’re trying to solve have structure which can be exploited classically to make the scaling “more favorable”. For example, the best classical algorithms for Maximum Independent Set have worst-case scaling like O(2^{0.2N}). If you were to take an MIS problem and directly attack it with an unstructured quadratic speedup (Grover over all 2^N candidate subsets), you would get O(2^{0.5N}), which is worse! One of the things that has been clear for a long time is that novel ideas for integrating what a QC does into the best classical algorithms are necessary for making use of a machine which might only give a quadratic speed-up. If you could get a quadratic speed-up on the best known classical algorithms (which requires new algorithms), that would satisfy our objectives (faster at the same accuracy, or more accurate in the same time).
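
    Spelling out that arithmetic (the exponents follow the figures above; this is a sketch of the reasoning, not a new result):

```latex
% Grover over the unstructured space of 2^N subsets:
%   sqrt(2^N) = 2^{0.5N}, which loses to the structured classical 2^{0.2N}.
% A quadratic speedup composed with the best classical algorithm would give:
%   sqrt(2^{0.2N}) = 2^{0.1N}, which wins.
\sqrt{2^{N}} = 2^{0.5N} \gg 2^{0.2N} \gg 2^{0.1N} = \sqrt{2^{0.2N}}
```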

    Another point to keep in mind on the “economical” side: these machines consume almost no power and are very easy to program. I think there are compelling benefits to using these things even if the brute-strength performance were comparable to (or maybe even worse than) large clusters.

    Also I would mention that I believe that trying to predict how a real QC will actually scale is very hard. We take the attitude of “build it, measure it, see what happens, fix it, build it, measure it, see what happens”, etc.

    Hope this helps!

  4. Is an adiabatic quantum computer really worse than a normal gate-model quantum computer? Why do so many scientists from all over the world want to build a quantum computer, if you say that a quantum computer cannot break RSA codes in any case? Are they so stupid? Optimists, eh?

  5. What is the difference between Shor’s factorization algorithm and Grover’s algorithm? Simple: Grover’s algorithm gives an answer with better probability than Shor’s factorization algorithm. If Shor’s algorithm gives an answer with probability N, then Grover’s algorithm gives an answer with (log N)^1.5 better probability.

  6. I agree with Helpful Hinter, both about the broadness of the audience that understands the basic concepts of NP-completeness and about your image-before-research sort of presence. Geordie, the difference between you and IBM researchers is that IBM researchers tend to come across more as researchers concerned about science than as businessmen concerned about image. To give you the benefit of the doubt, you are from a much smaller company, so raising funding is much different and you are forced to be concerned with image. Even so, you don’t seem to revert to the scientist type when you get more involved in the science, as in all these online discussions, where it is primarily scientists, not investors, whom you address. Also, to paraphrase Bill Nye’s mantra, you’ve made extraordinary claims, which demand extraordinary evidence; while some of the evidence you have presented appears extraordinary, none of it is conclusively so.

  7. Geordie,

    RE -“For the record, here is what I have said, in some of the posts here, several interviews, and the demo itself: (A) Orion is roughly 100x slower than state of the art algorithms running on a $1000 PC for solution of small Maximum Independent Set problems, and (B) the only way scaling can be extracted is empirically, and we can’t build big enough systems (yet) to answer scaling questions.”

    I’d encourage you to get more specific in quantifying what you have, both in results and in error tolerance [bounds of accuracy, and system/chip reproducibility of tested “passing” chips].

    You seem to skip out on quantifying and benchmarking [not as intellectually appealing as NP-completeness and related algorithmic arguments], as if a lack of preparation now will have no technical ramifications later in understanding the much more difficult scaled (and noisier) large arrays needed for real markets.

    If your present crutch is only stating that “empirical results later are the only test”, you hardly know now what you will need to succeed in implementing the hoped-for competitive larger product.

    Characterizing performance metrics quantitatively, along with error tolerances and reproducibility, even in the present technology (for example, by comparing the equivalence of results across three systems), will later prove a critical need.

    Wrestling does not count here, as brute force empty arguments will do little to address the harsh realities of the intended performance marketplace.

    Scott is far more correct than you care to admit to.

  8. Hi Mark,

    We do a lot of internal benchmarking of the sort you’re referring to. I agree that it is very important to do this. Having an empirically driven design cycle requires capture and analysis of as much data as we can.

    You seem to be equating what we publicly release with what we internally generate. You must understand that there’s a difference.

    By the way, I don’t think Scott has been advocating that we increase the volume and analysis of our internal data, so I don’t understand your last point. His main issue is that he wants either (a) public dissemination of that data or (b) for us to shut up.

    While I sympathize with his point of view, I don’t think either approach makes sense for a start-up.

  9. Thank you for the response. I discovered shortly after posting that Bill Nye was a student of Carl Sagan, and that it was in fact Sagan who popularized the phrase. I had never even heard of Marcello Truzzi, and I have never read much about Laplace or Hume, although I would like to (plus I’m from a Nye sort of generation rather than a Sagan one). It is fun to know the evolution of the idea now.

    Additionally, the businessman comment was meant as neither a compliment nor an insult; I simply meant to highlight what appears to me to be the cause of much skepticism towards your company. You can take it however you please.

  10. Pingback: Quantum Reasoners Hold Key to Future Web « The Wandering Glitch
