Some Washington pictures

Here are some pictures of the most recent Washington-generation chips. These are C16 chips: 16 × 16 × 8 = 2,048 physical qubits. Enjoy!
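As a quick sanity check on the arithmetic, the qubit count of a Cn-style chip (an n × n grid of unit cells with 8 qubits each, as described above) can be sketched in a few lines. `chimera_qubits` is just an illustrative helper name, not anything from D-Wave's software:

```python
def chimera_qubits(n, cell_size=8):
    """Total physical qubits in an n x n grid of unit cells,
    each containing cell_size qubits (8 in the chips described here)."""
    return n * n * cell_size

print(chimera_qubits(16))  # C16 (Washington): 2048
print(chimera_qubits(12))  # a 12 x 12 subgraph: 1152
```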

[Images: IMG_0602, IMG_0611]


34 thoughts on “Some Washington pictures”

  1. Previous reports mentioned a Washington chip with 1,152 qubits, and said that the 1,000-qubit-class chip would be released in early 2015. Are these the same class of chips? Are they the same, but one includes error-correction qubits and one does not? Can anything be said about the commercial release of the 1,152-qubit or the 2,048-qubit system?

    • Hi Brian! A subset of the qubits physically available on the chip is currently under test, corresponding to a 12×12 unit-cell subgraph. However, all of the qubits can be activated; it’s just easier to use a smaller region for calibration and test. The D-Wave Three product will contain Washington-style chips. The first ones will ship in early 2015.

  2. Whoa! That looks like the chip from SkyNet! Very cool😉
    By the way, what’s the square border around it, shielding? In the previous processor you can see the digital and quantum parts separately, but not here anymore.

    • Short answer: for some problems, the answer is probably yes [see previous post containing Colin Williams’ presentation].

      However, this is a different type of computer than its competitors, which makes direct comparison difficult. The main issue is that conventional processors are designed to run any problem-solving process (AKA algorithm), whereas these chips are hardcoded to run a specific class of algorithm: a physics-driven combination of quantum and thermal annealing that produces samples from a probability distribution. This might sound esoteric, but it’s at the heart of probabilistic machine learning, which is a Big Deal.

      The flexibility of being able to run any algorithm on conventional hardware always leaves loopholes for the conventional gear and prevents any categorical statements about speedups from being made. This situation is different from what the world is used to, where hardware can be compared directly to hardware. In our case, the full compute systems, including problem type, algorithms, parameters, hardware, etc., need to be compared, which makes it easy to build a system that doesn’t work well unless you pay close attention to all those other things.

      This is the reason why there appear to be so many conflicting reports on the capabilities of the computers we make. The processor is pretty awesome, but if you use it wrong you can easily generate situations where the bad choices made in the other aspects of the systems mask its inherent awesomeness.

  3. Pingback: D-Wave demonstrates samples of a 2048-qubit quantum chip | Последние новости

    • Do you mean problems in NP? Yes, some of the problems you might want to solve with the hardware are decision problems. However, our focus right now is on using samples from the hardware as components of probabilistic machine learning algorithms, such as Boltzmann machines. Generating samples from a Boltzmann distribution over a large neural net in reasonable time is a very hard problem.
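To see why sampling a Boltzmann distribution is hard, a brute-force sketch helps: the exact distribution over n binary spins requires enumerating all 2^n states, which blows up immediately for nets of any useful size. This is a minimal illustration (not D-Wave's method); the function name and the `h`/`J` Ising-style parameterization are just conventional choices:

```python
import itertools
import math

def boltzmann_probs(h, J, beta=1.0):
    """Exact Boltzmann distribution over n spins in {-1, +1}.
    h: list of local fields; J: dict {(i, j): coupling} with i < j.
    Enumerates all 2^n states -- tractable only for tiny n, which is
    exactly why sampling large nets quickly is such a hard problem."""
    n = len(h)
    states = list(itertools.product([-1, 1], repeat=n))

    def energy(s):
        e = -sum(h[i] * s[i] for i in range(n))
        e -= sum(J.get((i, j), 0.0) * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        return e

    weights = [math.exp(-beta * energy(s)) for s in states]
    Z = sum(weights)  # the partition function
    return dict(zip(states, (w / Z for w in weights)))

# Two ferromagnetically coupled spins: the aligned states dominate.
p = boltzmann_probs(h=[0.0, 0.0], J={(0, 1): 1.0})
```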

  4. Pingback: Some Washington pictures | The Zetetic Forum

  5. Congratulations! Have been following some of these developments closely 😎
    The scientific papers on the subject are interesting but quite difficult to follow, and they do not seem to isolate a key element. It would be very helpful if someone wrote an explanation or operator-style manual for D-Wave machines aimed at practitioners instead of scientists, describing the basics of problem setup/mapping and quantum annealing versus conventional (simulated?) annealing. There does not seem to be a basic description of this anywhere at present. Also, at this point even a contrived problem where quantum annealing is shown to do as well as or better than simulated annealing would be quite a breakthrough.

    • Hi vznvzn, there are lots of problems where the processors do better than anything [see Colin Williams’ talk in the previous post for an example]. Beating simulated annealing is very easy: just choose problems with lots of local minima (I believe the relevant data for that type of thing is also in Colin’s presentation). That level of accomplishment is a breakthrough but happened a couple of years ago. The hard problem now is mapping the capability of the processors effectively onto problems that matter in the real world. That’s the core of the deep learning work we’ve been doing. Using the technology properly is hard, but so was developing compilers in the early days of silicon.
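For readers unfamiliar with the classical baseline being discussed, here is a minimal simulated-annealing sketch: it accepts uphill moves with probability exp(-ΔE/T) under a cooling schedule, which is exactly what lets it escape shallow local minima (and what rugged landscapes with many deep minima defeat). The toy objective and all names here are illustrative, not from any D-Wave code:

```python
import math
import random

def simulated_anneal(energy, neighbor, x0, steps=10000, t0=2.0, t1=0.01):
    """Plain simulated annealing with a geometric cooling schedule.
    Downhill moves are always accepted; uphill moves are accepted
    with probability exp(-dE / T), where T cools from t0 to ~t1."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)   # current temperature
        y = neighbor(x)
        de = energy(y) - e
        if de <= 0 or random.random() < math.exp(-de / t):
            x, e = y, e + de                # accept the move
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy: a rugged 1-D function with many local minima from the sine term.
f = lambda x: (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)
random.seed(0)
x, e = simulated_anneal(f, lambda x: x + random.gauss(0.0, 0.5), 0.0)
```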

  6. Pingback: D-Wave demonstrates samples of a 2048-qubit quantum chip | Новости

    • There are many more control signals required than that; I believe something like an order of magnitude more. This type of architecture includes a scalable control system in which a small number of lines (fewer than 200) is used to program the signals required on the chip into non-volatile memory elements.

    • Thanks, Neil!

      And, to answer your question, “under the sky plane, of course!”.🙂

      The thin indentations you can see as a grid are actually the qubits, and the couplers between them; all of the control is hidden underground, much like you do not notice the electricity and water coming into, and the sewage being taken out of, your home, but there is a lot there!

      Still playing music?

      Cheers, Paul B.

  7. When will “Washington” be released? Do you plan to call it D-Wave-3? Have any independent tests been done with it that will quiet the critics who say you really aren’t doing quantum computing?

    • Hi Michael! We’re planning on getting the new product generation online by Q2 2015. It is looking terrific! I am very proud of the work the team has done going from Vesuvius to here. They continue to produce miracles.

      Re. critics, you can’t win with them — here is my take:

  8. Hey Geordie,

    Amazing work!

    Do you think these processors will connect/sync well with robotic type circuits? Stuff like arms used in surgery, automobile manufacturing, or anything that uses classical circuits to function?

      • Interesting.

        So do you think we need these chips for the feasibility of general AI (or better machine learning) or is it something AI scientists still need to figure out?

        Is it accurate to say these chips are what the Hadron Collider is to scientists, in that discoveries depend on new technologies?

        I saw this Elon interview question that reminded me of D-Wave and the current machine learning research.
