
So when are you going to piss off IBM?

Does that mean that whenever I piss off my boss, I’m doing science?

:p

In the immortal words of Professor Hubert J. Farnsworth, “I’m sciencing as fast as I can!”

Ah, but you see, the thing with this cartoon is that in panel 5 you’ve put the scientific community as the ones protesting. That’s just a little inconsistent. Any crackpot could do the same thing (e.g. “Down with perpetual motion”). I’m not saying you guys are crackpots, just that your alteration of the cartoon misfires.

Oops. I didn’t mean that we are skeptical of AQC, but rather of D-Wave’s claims about achieving it.

Joe: It’s not inconsistent at all. Also, you might notice the crackpots are the ones holding the signs. Not an accident. Replace the anti-D-Wave crackpots with anti-Celera crackpots if you’d prefer; it still works.

Well, I assume Scott is the one holding the sandwich sign, and I wouldn’t call him a crackpot. By anti-Celera crackpots do you mean the Human Genome Project? Again I wouldn’t call them crackpots. On the other hand there is still a big question mark over the status of your devices until we see some more concrete results, which was not the case with Celera.

I can certainly be convinced, but I haven’t yet seen any convincing evidence that you have built something that can, in any reasonable sense, be considered a quantum computer. It would be really cool if you do manage it, but I’ve already made my doubts clear. Nonetheless, I do wish you well with it.

Joe: Your definition of crackpot may differ from mine. What do you think about the single-qubit Landau–Zener paper with Seth? Also, if you don’t see the parallels between D-Wave and Celera, I suggest you read “The Genome Warrior”, Richard Preston, The New Yorker, 12 June 2000.

I find it very strange what gets called “science” these days. Looking through Aaronson’s list of publications, only 2 out of 20 from 2004 to now constitute what I would call “scientific”, because those 2 actually involved some sort of experimentation. (Those papers are “NP-complete Problems and Physical Reality” and “Improved Simulation of Stabilizer Circuits”, plus a few older ones.) Without some sort of experimentation, or at least some relation to reality, papers are not science, so I would be very reluctant to call Aaronson (and, unfortunately, many other professors) a scientist.

D-Wave, on the other hand, is basing its entire approach to developing its technology around experimentation and empirical results. Maybe they don’t go around sending out every word anyone there types, but I never understood the criticism from the so-called “scientific community” that they supposedly aren’t being scientific. They’ve put out much more information than most companies would, and what they’ve released usually refers to experimentation that has been or will be done. As I said above, papers on their own do not imply science, and likewise, science does not imply papers.

How does one create an algorithm for developing the circuit or working scheme for Grover’s or Shor’s algorithm? D-Wave seems to claim that their QC is adiabatic and thus doesn’t need such circuit schemes or simulations…

QC may not work at all, but we might live under physical laws where those laws solve the interactions of our atoms and molecules with exponential speed-up, or with infinite computing speed/power…

Why is Newton’s law any better or more realistic than some other magical law?

I bet that simulating Grover’s algorithm on a classical computer will take twice as long, or more, as solving the same problem on a probabilistic computer (by guessing). And I bet that simulating Shor’s algorithm on a classical computer would take twice as long as factoring a product of 2 integers by guessing on a probabilistic computer. So, since there are no factorization simulators of quantum algorithms on the internet, I have the right to think that scientists have drawn improper conclusions somewhere about the speed-up of quantum computers. Actually, the complexity of a probabilistic computer on Grover’s problem is O(N), and the complexity of a quantum computer simulation of Grover’s algorithm is O(N+N^0.5); sorry, I actually mean that this is the complexity of a real quantum computer! A simulation of Grover’s algorithm on a classical computer has complexity O(N+N+N^0.5), versus only O(N) for a probabilistic search algorithm.
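For what it’s worth, a classical simulator of Grover’s algorithm takes only a few lines, so its behaviour can be checked empirically. Here is a minimal sketch in Python/NumPy; the qubit count and marked index are arbitrary illustrative choices, not taken from any particular paper:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Classically simulate Grover search over N = 2**n_qubits items;
    returns the probability of measuring the marked item at the end."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))       # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(N))   # optimal ~ (pi/4) * sqrt(N)
    for _ in range(iterations):
        state[marked] *= -1.0                  # oracle: flip marked amplitude
        state = 2.0 * state.mean() - state     # diffusion: invert about the mean
    return state[marked] ** 2

print(round(grover_search(10, marked=123), 3))  # close to 1.0 after ~25 iterations
```

The success probability is near 1 after about (π/4)·√N oracle calls, which is the quadratic speed-up in question; the simulation itself, of course, costs the classical computer O(N) work per iteration.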

The complexity of Shor’s algorithm is O(2^n+n^2), a simulation of Shor’s algorithm on a classical computer has complexity O(2^n+2^n+n^2), a probabilistic factoring algorithm has complexity of only O(2^n+2^n), and the fastest classical factoring algorithm has complexity of only O(2^{n^0.3}).

So if you want to disprove my claims about the non-existence of the speed-up, then you must write a simulation program for a quantum computer; but the problem then appears that you really can’t see the difference between O(N+N+N^0.5) and O(N+N^0.5), which maybe doesn’t exist… So you actually don’t have experimental simulation proofs that you haven’t made errors in your ‘pseudo-proofs’ that Shor’s and Grover’s algorithms give a speed-up over a classical computer. The encoding time of the problem can be much longer (exponential) than you think! So I would say that strong, non-negligible evidence for a quantum computing speed-up does not exist! There exists only ‘paper’ evidence, which in our crazy world is not evidence (because of possible errors somewhere in scientists’ heads, etc.).

So until a paper proof has gone through computer simulation/computation on real hardware, all these proofs and algorithms are only ‘weakly proven’. But as I said, proving them with ‘strong proofs’ via simulation is impossible, so all your ramblings about quantum computer speed-ups are just weak paper proofs!

P.S. Classical algorithms can be proven with more than paper proofs (they are measurably faster or slower…).

Correction: a probabilistic factoring algorithm has complexity O(4^n), i.e. O(N^2); Shor’s factoring algorithm has complexity O(4^n+n^2); and a simulation of Shor’s algorithm has complexity O(4^n+4^n+n^2). So it seems there is a very easy way to check whether Shor’s algorithm really needs exponential time like a probabilistic factoring algorithm: just try to factor some very big number, say 10–100 digits. But, as you remember from IBM’s NMR “quantum” experiments, the time required there is also O(4^n), so it would be hard to see the difference from O(4^n+4^n+n^2)… Oh, yes, the IBM NMR computer-simulator gives a probabilistic answer with correct-answer probability 1/4^n, so one only needs to try to encode the factorization of a big number into the system and perform the computation… And if that encoding turns out not to be hard, then it seems the paper theory is right. But some errors in quantum theory/computing could still make the quantum computer useless…
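As a point of comparison, the reduction behind Shor’s algorithm can be run classically on tiny numbers: the only exponential part is the order-finding step, which is exactly what the quantum Fourier transform replaces. A toy sketch in Python (the order finding here is brute force, so this only works for very small N):

```python
from math import gcd
from random import randrange

def order(a, N):
    """Multiplicative order of a mod N by brute force -- the exponential
    step that Shor's algorithm replaces with quantum period finding."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N):
    """Classical run of Shor's reduction from factoring to order finding."""
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g                          # lucky: a already shares a factor
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            return gcd(pow(a, r // 2) - 1, N)  # nontrivial factor of N

print(shor_classical(15))  # 3 or 5
```

Everything outside `order` runs in polynomial time; timing `order` as N grows is one way to see where the exponential cost actually lives.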

Neil, he does -theoretical- computer science. What exactly were you expecting? I think you need to rethink your definition of science, as it rules out all of theoretical physics. So, what? Einstein, Dirac, Schrodinger and Feynman weren’t scientists?

Hi Geordie,

I’ve just read the paper. I haven’t really had a chance to work through it in detail, but it seems pretty solid. I do have one question though: In Figure 3, you seem to get good correspondence at the peaks, but away from the peaks your predictions seem to fit much less well. Any idea what is causing the effects evident at the extremes of the plot?

On the other hand, I don’t really see this paper as showing that you have a quantum computer. Your paper explicitly dealt with only single qubits. For any problem on a single qubit there are no local minima, and it is really the effect of local minima that distinguishes adiabatic QC from classical computation.

“Einstein, Dirac, Schrodinger and Feynman weren’t scientists?”

For me, Einstein really is not a scientist… Schrödinger also, because his Schrödinger equation can be replaced with some other, more or less appropriate, equation… Dirac also went off into fantasies about connecting relativity and quantum mechanics and “invented” many wrong, unprovable things… Feynman isn’t a scientist either, because he went deep into particle physics and “invented” some stupid theories about strange particles, which were observed one time in trillions; the probability of a wrong experiment is even bigger…

BTW, about Grover’s algorithm: it is also possible to check whether, for a classical computer, there would be a problem encoding (preparing the scheme for) finding/solving some problems… So I would recommend that scientists assume they have, say, a gate-model quantum computer that operates with decoherence, and try to solve a problem. Of course, due to decoherence they would get an almost 100% wrong answer, but it would be very interesting to see how much time they would spend preparing Grover’s quantum algorithm to run. Something similar, I think, must hold for the D-Wave quantum computer, and so I am waiting for some big, many-qubit D-Wave quantum computer which, according to theory, should give a non-negligible, very visible quadratic speed-up. So if the ‘paper theory’ of quantum computing is wrong and I am right, then Geordie will spend more time on the computation than a probabilistic computer, even assuming that after a very short time (say a few ns or ms) he measures the answer from the adiabatic QC in the same time as on a 16-qubit system. The gate model would be even more interesting, because a gate-model quantum computer solves the problem in O(N^0.5) time, and thus, even with an incorrect answer after that time, it should give an answer quadratically faster (for visibly big N…) than a probabilistic or classical computer, which needs time O(N). But as I see it, nobody is doing such experiments; maybe they think it is not important, or maybe they have “simulators” on a computer. The problem with simulators is that they do everything quantum mechanically, so it is hard to see the difference between O(N) for a probabilistic algorithm and O(N+N) for a quantum simulation program, where one N is for the possible encoding and the other is for simulating the quantum mechanics. So one needs to do a real experiment, like you are doing… with many qubits…

QED is one of the most accurate theories in physics, and has been verified in some cases up to an accuracy of 10^-11. But I suspect you aren’t going to be convinced, are you?

“QED is one of the most accurate theories in physics, and has been verified in some cases up to an accuracy of 10^-11.”

Interesting. Where does such accuracy come from, maybe from quarks?

Wikipedia has a nice page on such tests here.

Joe: The structure at higher biases in Figure 3 comes from tunneling from the lowest energy state in the higher well to the 1st excited state in the lower well. As you tip the rf-SQUID potential (cosine + quadratic) farther and farther, you can see resonant tunneling from the lowest state in one well to the 2nd, 3rd, 4th, etc. excited state on the other side. Note that the MRT peaks corresponding to these transitions can also be characterized in the same way as the lowest-to-lowest transitions studied in the paper. I believe that additional information about the environment can be extracted by looking at resonant tunneling from the lowest energy state in one well to states very high up on the other side.

Regarding the relation to AQC, I think what these results show is that (a) at the single-qubit level, it is possible to have a good degree of certainty that the qubits are behaving as quantum mechanical two-level systems coupled to an environment, and we can characterize that environment quite precisely, and (b) the simplest optimization problem of the sort the system is designed to solve, namely finding the value of s_1 that globally minimizes E(s_1) = h_1 s_1 for user-supplied h_1, can be solved using an adiabatic quantum algorithm in the spirit of the original AQC papers. One of the later figures shows that the minimum gap in the adiabatic evolution can be made much larger than T and W, thereby guaranteeing that you can interpolate adiabatically from |0>+|1> to the ground state of E(s_1) and get the correct answer 100% of the time, as long as h_1 > W, T. When this paper was being written I thought this latter point wasn’t given enough emphasis. I guess it depends on how you look at it… I’m most interested in computation, whereas this was more of a condensed matter paper.
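That single-qubit adiabatic interpolation is easy to integrate numerically. Here is a rough sketch in Python/NumPy; the Hamiltonian form H(s) = -(1-s)·Δ·σ_x + s·h·σ_z, the tunneling amplitude, and the annealing time are illustrative assumptions, not the actual device parameters from the paper:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def adiabatic_sweep(h=1.0, delta=1.0, T=200.0, steps=4000):
    """Integrate i d|psi>/dt = H(s)|psi> for the one-qubit interpolation
    H(s) = -(1-s)*delta*sx + s*h*sz, with s = t/T and hbar = 1."""
    dt = T / steps
    # Start in the ground state of -delta*sx: (|0> + |1>)/sqrt(2).
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
    for k in range(steps):
        s = (k + 0.5) / steps
        H = -(1 - s) * delta * sx + s * h * sz
        # Exact propagator for this step via eigendecomposition of H.
        w, V = np.linalg.eigh(H)
        psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
    # For h > 0 the ground state of h*sz is |1>; return its occupation.
    return abs(psi[1]) ** 2

print(adiabatic_sweep())  # close to 1.0: the sweep tracks the ground state
```

With these generous parameters the minimum gap is of order 1 and the sweep is slow, so the final state is the ground state of h·σ_z with probability essentially 1; that is, the sweep returns the minimizer of E(s_1) = h_1 s_1.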

Geordie: Thanks for clarifying the graph for me.

I am not in the least surprised to hear that you can model the system as coupled to a Gaussian environment. I’m not really sure how this could be taken as an argument in favour of you having a good qubit, though, as I would expect a similar noise model to work for almost any system.

I don’t really see this paper as saying anything about the computational power of the system. I realise you can do a single qubit optimisation, but as I have mentioned before, this is trivial as there are no local minima. Simply switching on the target Hamiltonian and cooling would work perfectly, without the need for the adiabatic approach. Finding the ground state of a Hamiltonian is not NP-complete if we impose the restriction that the system have no local minima of energy.

I’m not saying that this isn’t a nice condensed matter paper, just that it isn’t evidence that your system is capable of adiabatic quantum computation. I don’t think you even claim this in the paper. Since you raised this in relation to my objections to your cartoon, I would have assumed that it had a bearing on the computational power of your system.

So if D-Wave comes out with a 1K-qubit processor that does computing that a classical computer couldn’t do, does that prove anything? Or would you still need to see 2 or more qubits entangled? Not that that wouldn’t be nice.

Well, what do you mean “does computing that a classical computer couldn’t do”? If you mean solving a QMA-complete problem, then yes, I would be reasonably certain they had a large scale quantum computer.

Geordie: I don’t know if you want to be comparing D-Wave to Celera Genomics… their business model was largely flawed, and they were ultimately overtaken (most significantly on the Human Genome Project) by government initiatives. I wish you more success.

Kaori:

• Market cap, February 25th, 2000: $14 billion

• “The Sequence of the Human Genome”, Venter et al., Science, 16 February 2001

• “The Diploid Genome Sequence of an Individual Human”, Levy et al., PLoS Biology, September 4, 2007 (a.k.a. Craig Venter’s genome)

I’ll take the comparison any day.

Geordie: I’m not saying that Celera didn’t do incredible work, or that Venter isn’t still involved in incredible work at his institute. Rest assured, I appreciate his role in solving the H. sapiens genome. I’m just making the point that this is a case where the government stepped in with a vast amount of funding to torpedo an industry (using a large-scale implementation of an inferior method). Venter should have seen this coming, and not insisted on sticking with a business of selling genomic information.

I meant to say “…business model of selling genomic information.”
