Yesterday I was part of a session at IDC’s HPC User Forum. This session was interesting because it was about quantum computing, the first time the HPC User Forum has had a session on the topic. I think there will be many more in the future.
Not only was there an entire session on the topic, but the keynote speaker at the event was Charlie Bennett, an IBM Fellow who is well known to quantum information folks as (among other things) a co-inventor of quantum cryptography. I’m going to do a separate post on his talk, as it was fascinating.
The session I was part of was led off by Isaac Chuang from MIT, who gave an overview of where he felt the field of quantum information science and technology stood. He presented a fairly comprehensive survey of experimental results obtained between about 2002 and 2013, showing impressive progress: from roughly 10 gates on 1 qubit in 2002 to tens of gates on about 10 qubits in 2013. Unfortunately, he completely omitted any mention of our work, or of the work of independent folks doing science on our machines. I will send him some copies of Nature.
I was next, and started the festivities by stating that I basically disagreed with everything Ike had said and was going to give a very different perspective. I felt bad about being confrontational (obviously I still haven’t watched enough Hitchens; I’m working on it). But he was in the room, so if he wanted to call me on something he could (he didn’t). A smart non-expert audience that hears completely conflicting stories is going to be confused and wonder what’s up, so I thought it would be in the audience’s interest to name the disagreement explicitly.
Anyway, once the drama was out of the way, I gave my talk. Here are the slides.
After my talk, Hartmut Neven from Google talked about their D-Wave machine, and what they were doing with it. He described three use cases for machine learning, including finding extremely sparse classifiers, reducing the negative effects of improperly labeled items in supervised machine learning methods, and training and inference in deep learning. One very interesting thing he revealed was that the first of these was used to train blink detectors in the Google Glass product. This is the very first time that a quantum algorithm has been used to develop commercially deployed software.
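To make the first of those use cases concrete, here is a toy sketch of what "finding extremely sparse classifiers" can look like as an optimization problem. This is my own illustration, not Google's actual formulation or code: picking a binary subset of weak classifiers to minimize squared training error plus a sparsity penalty is a QUBO, the problem class a D-Wave machine samples from. Here I just brute-force the tiny instance classically.

```python
import itertools

def qubo_cost(w, h, y, lam):
    """Cost of binary selection w: squared voting error plus lam * |w|.

    h[i][s] is weak classifier i's prediction (-1 or +1) on sample s,
    y[s] is the true label. Quadratic in the binary w, hence a QUBO.
    """
    err = 0.0
    for s in range(len(y)):
        vote = sum(w[i] * h[i][s] for i in range(len(h)))
        err += (vote - y[s]) ** 2
    return err + lam * sum(w)

def best_subset(h, y, lam):
    """Exhaustively find the lowest-cost binary selection (tiny n only)."""
    n = len(h)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda w: qubo_cost(w, h, y, lam))

# Three weak classifiers on four samples: the first matches the labels
# perfectly, the second is mostly right, the third is mostly noise.
y = [1, -1, 1, -1]
h = [[1, -1, 1, -1],
     [1, -1, 1, 1],
     [-1, -1, 1, 1]]

w = best_subset(h, y, lam=0.5)  # the sparsity penalty favors keeping
                                # only the single perfect classifier
```

On hardware, the same cost function would be handed to the quantum annealer as a QUBO rather than enumerated; the point of sparsity is exactly the power-constrained deployment scenario described above.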
This is extremely cool, and the beginning of what I see as a new use case opportunity for us. The scenario where you need an always-on detector / classifier onboard a device with extreme power constraints is increasingly common. I just absolutely love the idea that the software that makes mobile devices work can be designed by quantum computers.
Next, Dave Wecker from Microsoft gave an overview of his group’s work on building software for programming, compiling and visualizing quantum circuits. The work they are doing is really top notch, and the presentation was great.
Finally, Jay Gambetta from IBM Yorktown Heights gave a talk about the IBM work on transmon qubits. The main point of interest for me was how difficult (perhaps impossible) it would be to scale any of it up. Lots of microwave lines!!!