Multi-qubit synchronization results

Synchronization of Multiple Coupled rf-SQUID Flux Qubits

A practical strategy for synchronizing the properties of compound Josephson junction rf-SQUID qubits on a multiqubit chip has been demonstrated. The impacts of small (~ 1 %) fabrication variations in qubit inductance and critical current can be minimized by the application of a custom tuned flux offset to the CJJ structure of each qubit. This strategy allows for simultaneous synchronization of the qubit persistent current and tunnel splitting over a range of external bias parameters that is relevant for the implementation of an adiabatic quantum processor.

arXiv:0903.1884v1
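The compensation scheme in the abstract can be sketched numerically. A compound Josephson junction rf-SQUID's effective critical current is modulated by the flux threading its CJJ loop, roughly Ic_eff = 2·Ic·|cos(π·Φ_cjj/Φ0)|, so a custom per-qubit flux offset can cancel a fab-induced spread in junction critical current. The toy model below illustrates that idea only; it is not the paper's calibration code, and all function and variable names are invented for this sketch.

```python
import numpy as np

PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb

def cjj_offsets(ic_junction, phi_cjj_nominal, ic_target):
    """Per-qubit CJJ flux offsets that equalize the effective
    critical current Ic_eff = 2*Ic*|cos(pi*Phi_cjj/Phi0)| across qubits."""
    ic = np.asarray(ic_junction)
    # CJJ bias each qubit needs in order to reach the common target Ic_eff
    phi_needed = (PHI0 / np.pi) * np.arccos(ic_target / (2.0 * ic))
    return phi_needed - phi_cjj_nominal

# Toy example: ~1 % fab spread in junction critical current around 1 uA
rng = np.random.default_rng(0)
ic = 1e-6 * (1.0 + 0.01 * rng.standard_normal(8))
phi_nom = 0.4 * PHI0                                       # nominal bias point
ic_target = 2 * ic.min() * np.cos(np.pi * phi_nom / PHI0)  # reachable by all qubits

offsets = cjj_offsets(ic, phi_nom, ic_target)
ic_eff = 2 * ic * np.abs(np.cos(np.pi * (phi_nom + offsets) / PHI0))
print(np.allclose(ic_eff, ic_target))  # prints True: all qubits now match
```

With the offsets applied, every qubit sits at the same effective critical current at the nominal bias, which is the sense in which persistent currents can be synchronized despite ~1 % device variability.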

18 thoughts on “Multi-qubit synchronization results”

  1. How does this help with the commercialization of what D-Wave is doing?
    Synchronizing qubits is good, but D-Wave was not using rf-SQUIDs, right?
    It is relevant to the implementation of AQC, but how, specifically?

  2. Hi Brian, yes the qubits on Rainier are a type of compound Josephson junction rf-SQUID similar to those described in this paper.

    In order to enable our selected adiabatic quantum algorithm we need to make the time-dependent signals we apply during the annealing phase affect all the devices in a similar fashion; ideally the qubits should be nominally identical. It is not clear that you can do this as all devices have parametric variability (differences in critical currents, inductances, capacitances, etc.) arising from fab. This paper shows that you can simultaneously synchronize the response of large collections of qubits in a realistic processor. This resolves an issue (device variability) that is thought to be a significant problem in the community.

    This article will shortly be followed by another that builds on these results to show why synchronization of this sort is required in these types of approaches.

    As for its impact on commercialization, we are currently working on describing what we are building and the infrastructure that supports this and future development. This document is one of several describing key aspects of the design and its enablement. The early customers are asking for documentation of what the systems are and what they do, focusing first on the lowest level questions, such as detailed descriptions of the design and functionality of qubits, couplers, on-chip SFQ control circuitry, and readout circuitry.

    This particular document is “part 1 of 3” describing how to enable adiabatic quantum algorithms in our processor architecture.

  3. Hello,
    I didn’t know where to ask this question; perhaps I can get the answer here!
    How were the recent test results on Rainier? How can I be informed about the test results?

  4. Nice paper!

    Presumably you only need to do this once for each chip, so could you supply each system with ‘calibration’ software which would adjust the CJJ flux biases (for that unique chip) accordingly each time the algorithm runs?

    Or is the plan to use this as a method of actually monitoring device variability as a metric for improving the fab process in the future?

  5. Hi Suz, yes, you only need to go through this once, and the results serve both of the purposes you mentioned. Finding the required offsets is part of the calibration procedure (we call it qubit synchronization). Note that there are lots of other things you need to do both before and after this step during calibration, such as device parameter extraction (I_c, L, M, C, etc.), verifying that each individual device functions as expected (and there are a lot of them now), and running higher-level system tests like multi-qubit gap measurements. The required offsets give information about parameter spreads, which can be compared with other information we get from PCMs and other measurements that are sensitive to these parameters.

  6. Navid: The high-level summary is that the tests are going a little bit better than I had expected. A lot of things changed on this last chipset and all of the newly implemented devices seem to be functioning as designed. We are working 24/7 now on ironing out the inevitable bugs. You should start seeing some documentation of this and similar systems posted on a more regular basis, I will link to these from the blog as they arise. The paper mentioned in this post is an example of what these will look like.

  7. > Better than expected testing/bug fixing.

    What are the odds, then, that this is the design that gets scaled to thousands of qubits, as opposed to another modified 128-qubit design?

    i.e., is the next chip (say, in three months) a lot bigger, or still the same size?

    Odds ?
    25% thousands of qubits
    25% 512 qubits
    25% 256 qubits
    25% 128 qubits (modified version)

  8. Hi Brian

    It is almost certain (>95%) that this is going to be the design included in the first 128-qubit systems we will be selling, based on the results we are seeing from device performance on Rainier chips. That being said, there will be new wafer runs with slightly modified designs (parameter variations) between now and shipping, so the actual chips we have in Vancouver now will probably not be the ones included in the sold systems. Our total focus now is on building and selling access to our technology at the 128-qubit level.

  9. > Our total focus now is on building and selling access to our technology at the 128-qubit level.

    Once that is reached, which sounds like some time this year, what will your next focus be?

    Odds?
    15% thousands of qubits
    40% 512 qubits
    45% 256 qubits

  10. Does the 128-qubit Rainier performance answer the “is it quantum computing” question, not in an abstract sense but in a performance sense?

    i.e., one or a few specific cases of specific problems that say: look, this performance and solution were generated in a time that clearly shows something quantummy is happening, and AQC is worthwhile beyond classical computing.
    Or is the sales pitch:
    Here we are at a useful and meaningful economic level. Buy this and figure out how to use it instead of simulating on a supercomputer. It is not a racing ten-speed, but it is a bicycle, and you can learn to ride. We are below the cost of supercomputer time.

  11. Hi Brian, both of these, although I would re-state the objectives of your second point slightly.

    Verifying that the system behaves as predicted by quantum mechanics, and that its behavior is incompatible with a classical description, is one of our key objectives. This has already been done at the single qubit level, where we regularly perform macroscopic resonant tunneling (MRT) experiments on all of our qubits as a part of our circuit calibration procedure. I don’t know if this is completely appreciated but MRT is purely quantum mechanical and there is no possible classical explanation for this type of behavior. This is as “smoking gun” as you can get, at least at the single-qubit level. There are related tests you can run at larger qubit count that have the same objective.

    For your second point, the compelling reason to buy this technology is that for the first time humankind has built a programmable computer that cannot be simulated EVEN IN PRINCIPLE and therefore how it behaves and what it does can only be extracted via experiment. There are fundamental questions about the nature of reality that might be answered using this type of system.

    For those not interested in fundamental questions about reality (yes there are such people) having access to the system at this level will teach the user how to develop and test applications. There is a level of access for apps development that doesn’t require understanding how the machine works (our web services access, see dwavesys.com) but for some users getting into the guts of how everything works is too tantalizing an opportunity to pass up.

  12. Hi JP, first we have to get a few working at the target level and keep them working for a while. This is the first objective. As for timing, as quickly as possible… our first realistic shot is early summer.

  13. A few, so that’s at least three; how many are you working on?

    “Keep them working for a while”, is that a few hours, days, or weeks?

  14. JP: The longer we can keep the systems in operation the more confidence we have in the system’s reliability in the field. We’ll be working with interested parties during the qualification stage and make a determination with them as to what MTBF etc. is acceptable. For a rough timescale for this, it’s several months.

  15. Yes, the longer the better, but if the time between going down and coming back up is short enough, say an hour, then you could accept a shorter MTBF, say a week, if you had a second or third system.

    Don’t forget, early computers had downtime but were still used and useful. Of course, the hardware competition is a little tougher these days ;)

    But a wrestling champ like you isn’t worried about a little competition :)

  16. There is some prize money for finding the first Mersenne Prime that has more than 1,000,000,000 decimal digits.

    See:

    Maybe you should set your machine to solving that problem.

    1. You win some money.
    2. You get some free advertising.

    Hmm, why bother with 1 billion decimal digits? Show off by taking it to 10 trillion decimal digits.
