Can a game teach kids quantum mechanics?

Geordie:

This is very cool!

Originally posted on Quantum Frontiers:

Five months ago, I received an email and then a phone call from Google’s Creative Lab Executive Producer, Lorraine Yurshansky. Lo, as she prefers to be called, is not your average thirty-year-old. She has produced award-winning short films like Peter at the End (starring Napoleon Dynamite, aka Jon Heder), launched the wildly popular Maker Camp on Google+ and still had time to run a couple of New York marathons as a warm-up to all of that. So why was she interested in talking to a quantum physicist?

You may remember reading about Google’s recent collaboration with NASA and D-Wave, on using NASA’s supercomputing facilities along with a D-Wave Two machine to solve optimization problems relevant to both Google (Glass, for example) and NASA (analysis of massive data sets). It was natural for Google, then, to want to promote this new collaboration through a short video about quantum computers…


Steve Cakebread joins D-Wave

From the press release:

Cakebread will further manage and help sustain balance in D-Wave’s corporate expansion trajectory as it pursues its mission to solve the intractable problems of industry, security, aerospace and medicine. Before joining D-Wave, Cakebread served as chief financial officer of Pandora Media, Inc., a provider of personalized Internet radio and music discovery services. He was president and chief strategy officer of salesforce.com, a customer relationship management service provider. Other roles with salesforce.com include executive vice president and chief financial officer. Before that, Cakebread served as senior vice president and chief financial officer at Autodesk and Silicon Graphics World Trade. Cakebread holds a B.S. in business from the University of California at Berkeley and an M.B.A. from Indiana University.

“It’s a privilege to be part of the next big leap in computing,” stated Cakebread. “Classical processors have a new partner and there’s no limit to how far humanity can advance with the power of quantum computers. Clearly, D-Wave is leading this next generation of computing.”

Tunneling spectroscopy using a probe qubit

A new paper published in Phys. Rev. B last week shows a very cool new technique for examining quantum effects in large quantum systems. Here’s the arXiv link.

Here is the abstract:

We describe a quantum tunneling spectroscopy technique that requires only low bandwidth control. The method involves coupling a probe qubit to the system under study to create a localized probe state. The energy of the probe state is then scanned with respect to the unperturbed energy levels of the probed system. Incoherent tunneling transitions that flip the state of the probe qubit occur when the energy bias of the probe is close to an eigenenergy of the probed system. Monitoring these transitions allows the reconstruction of the probed system eigenspectrum. We demonstrate this method on an rf SQUID flux qubit.

The Developer Portal

Keen-eyed readers may have noticed a new section on the D-Wave website entitled ‘developer portal’. Currently the devPortal is being tested within D-Wave; however, we are hoping to open it up to many developers in a staged way within the next year.

We’ve been getting a fair amount of interest from developers around the world already, and we’re eager to open up the portal so that everyone can have access to the tools needed to start programming quantum computers! However, given that this way of programming is so new, we are being careful to test everything thoroughly before doing so. In short, it is coming, but you will have to wait just a little longer to get access!

A few tutorials are already available to everyone on the portal. These are intended to give a simple introduction to programming the quantum systems in advance of the tools coming online. New tutorials will be added to this list over time. If you’d like to have a look, you can find them here: DEVELOPER TUTORIALS

In the future we hope that we will be able to grow the community to include competitions and prizes, programming challenges, and large open source projects for people who are itching to make a contribution to the fun world of quantum computer programming.

TeleXLR8 presentation this weekend: ‘Hack the Multiverse: The Talk’


A TeleXLR8 montage. Photo credit: TeleXLR8

Hack the Multiverse blogger Suzanne Gildert will be giving a talk about quantum computer programming this weekend. The talk will take place in TeleXLR8, now powered by a new open-source version of the original engine. TeleXLR8 is a virtual environment, similar to Second Life, where you can watch talks and have discussions. It’s awesome. So grab your Fritos and Mountain Dew and gather round the laptop screen for a fun session of learning about this exciting field! The talk will cover an introduction to quantum computing and the technology of building quantum computers, then move into a tutorial discussing Energy Programming: a new way of programming unlike anything else in existence.

There’s also a special treat for those attending the talk: a chance to navigate one’s avatar around a life-size virtual copy of the D-Wave One quantum computer.

If you would like to attend the talk, please contact Giulio Prisco. There might be some slots left, but they are rapidly filling up, so be quick! Here is some more information about the event and how to apply for a place: Information about ‘Hack the Multiverse’ talk on KurzweilAI

Don’t worry if you can’t get a seat; the videos will be posted online afterwards so you can watch at your leisure!

How cold?

Hello, Many-Worlds!

As some of my fellow bloggers focus on how to program applications on the D-Wave One™, I’m going to delve into one important subsystem that is essential to keeping the Rainier processor running: cooling!

Why does it need to be cold? Noise.

There are many reasons why the chip and its supporting infrastructure need to be cold, but the first and foremost is noise. For our processors to work, they need to be in as quiet an environment as possible. For a problem to be set as accurately as possible, we need to know that the signal each qubit or coupler sees during computation is the one generated by our electronics. Filtering on the signal lines going to the chip removes most of the noise reaching it. However, there is also noise associated with temperature: the lower the temperature, the lower the noise.

… the lowest temperature in nature and in the universe is 2.73K.  This background temperature exists everywhere in the universe because of the photon energy which is still being radiated from the ‘big bang’.  If we compare low-temperature physics to other branches of physics we realize that it is actually one of the very few branches of science where mankind has surpassed nature, an achievement which has not yet proved possible, for example, in high-pressure physics, high energy physics or vacuum physics.

-Frank Pobell, Matter and Methods at Low Temperatures

Of the “knobs” in nature, temperature is one we can adjust to the very limit of what nature permits. This ability to manipulate temperature enables many different technologies. The Large Hadron Collider is able to smash particles together at near the speed of light, generating temperatures 100,000 times that of the sun, because its 27 km accelerator ring is cooled to 1.9K. Superconducting cell phone receivers are better able to pick your signal out of noisy urban areas because they are cooled to below 77K. MRI machines are incredibly useful diagnostic tools for doctors and would not be nearly as sensitive if their magnets were not cooled to 4K. Similarly, at D-Wave we are able to operate processors that exploit some of the most fundamental behaviors of nature because of our systems’ ability to cool to a fraction of a degree above absolute zero.

These go to 11mK!!

At D-Wave, dilution refrigerators (DRs) are used to cool our hardware. The coldest (base) temperature of these DRs is less than 0.02K! That is multiple kilograms of metal per system cooled to 100x colder than you can find anywhere in nature outside of a lab! Because of our rapid design cycle, there is usually at least one fridge making the trip between 300K and base every week. At these temperatures, we are able to adjust the settings of each qubit and coupler to precisely the values corresponding to the problem we want to solve.
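To get a feel for these numbers, here is a small back-of-the-envelope sketch in Python (my own illustration, not D-Wave code; the ‘few GHz’ qubit energy scale I compare against is an assumption typical of superconducting flux qubits). It converts the thermal energy kB*T at each temperature into an equivalent frequency via E = hf:

    # Compare thermal energy kB*T to qubit energy scales, expressed as a
    # frequency via E = h*f. Constants in SI units.
    kB = 1.380649e-23    # Boltzmann constant, J/K
    h = 6.62607015e-34   # Planck constant, J*s

    def thermal_frequency_ghz(T):
        """Frequency whose photon energy equals kB*T, in GHz."""
        return kB * T / h / 1e9

    for T in (300, 4.2, 0.02):  # room temperature, liquid helium, fridge base (K)
        print(f"{T:>6} K -> {thermal_frequency_ghz(T):10.2f} GHz")

    # Roughly 6250 GHz at 300 K and 88 GHz at 4.2 K, but only ~0.4 GHz at
    # 20 mK -- below the few-GHz energy scales typical of superconducting
    # flux qubits, which is why millikelvin operation matters.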

In the coming weeks, I will be describing the purpose and workings of each of the main components of the system that cools our processors from room temperature all the way down to below 20mK. These posts should be at a level that a person with minimal exposure to physics will be able to follow.

See you next time!


Do not disturb my quantum circles

Why do some people say a quantum computer has 4 bits, and others say hundreds? Why is it more complicated than a regular computer? As a scientist in this field myself, I can’t help but feel bad about the confusion surrounding this topic. People are really just curious about the potential of this cool-sounding technology, and although news reports continue to promise the appearance of these machines very soon, the exact definition of a quantum computer still seems elusive.

I’m going to try to explain one of the main problems in the field of quantum computing today through a visual analogy. The following explanation gets to the heart of a real disagreement, even between experts in the field. There are heated debates in the scientific community over this problem (yes, scientists get into dogfights too). So be aware that you’re getting some juicy inside knowledge here about what really stumps even expert scientists! Of course, we don’t often want to shout about the fact that we get stumped by things… but it is true :) I should mention that this explanation is a very conceptual and abstract way to look at quantum computing, so there won’t be any implementations or specific algorithms discussed. It compares the ‘conventional’ approach to quantum computing with a new approach, called ‘Natural Quantum Computing’ (NQC). The quantum computers built by D-Wave are of the NQC type.

Building blocks – in theory and in practice

Let’s begin with some background. When engineers and scientists build a new technology, there are two factors that come into play: the theoretical model of the technology or system, and the practical implementation of that technology or system. These two companions are like yin and yang: both are necessary for the whole to function, yet they are forever competing with one another – two sides of the same coin.
In my analogy, my yin and yang are going to be represented by circles:
[Image: a mathematically perfect ‘yin’ circle alongside a bumpy, natural ‘yang’ circle]
Which circle is better? You might argue that the perfect yin circle is mathematically more beautiful, but if it is never found in nature, then is it really of any use to us? The more natural yang circle has its own bumpy beauty in that it really represents the true nature of what we see around us. You just don’t see perfect circles in nature.

Now, what does this have to do with computing? Well, when we want to build a system, we build it out of bumpy circles (we have to, as they are the only things we can find around us). But we understand it – and predict how it will behave mathematically – by approximating it with perfect circles, because those are easier for us to calculate with. So, let’s pretend that our circles are now building blocks:
[Image: the circles stacked up as building blocks]
In most cases, this works pretty well. Our mathematical models now predict the behaviour of the circles pretty much as they behave in nature (if you’d like to think about this system ‘behaving’ dynamically, imagine removing one of the circles on the lower layer and trying to predict where the rest would end up). Our model of the circles would probably get it roughly right.

When we try to understand classical digital circuits, the same thing happens. We build a ‘model’ of a classical digital computer, like the picture above, and that model behaves in a similar way to its real-world counterpart.

Building a ‘conventional’ Quantum Computer using circles

However (and here begins the juicy bit), quantum computers don’t work like that. At least, not the ones that people have been trying to build up to now. I’m going to use the phrase ‘conventional QC’ to describe those, because there IS a different way of doing things, which I’ll describe later. It turns out that in a conventional QC, that slight difference between the yin and yang descriptions really does make ALL the difference.

What would a conventional quantum computer look like if we used the circle analogy to describe it? Well, our mathematical yin model of it would probably look something like this:

[Image: a tall, delicately balanced tower of perfect circles]

What happens when you try to build this model using real, bumpy, yang circles? Well, you can try…

[Image: the same tower built from bumpy circles, collapsing]

Oops! What happened? It didn’t work like we thought it would. The imperfections in the circles just made our delicate system come crashing down. Try again? The same thing happens. You can never get those circles to behave like their yin counterparts. The bumpiness just makes it impossible.

This is exactly what happens when we try to build a quantum computer. Our theories tell us that if we can just get 50, or 100, or maybe 500 of those darned circles to balance on top of one another, then our technology will be able to fulfil our every dream! But we just can’t do it in real life – the thing keeps falling over! This, if you are familiar with quantum computing parlance, is exactly what is meant by a system undergoing ‘decoherence’.

Well, something has to be done about this. We still want to be able to build a quantum computing system, right?

Yes – but there is an important problem to be addressed here. Who is at fault? Something didn’t add up, and blame must be assigned! Most people blame the yang engineers of the system for using bumpy circles: “Your circles are not perfect enough. Your system falls over. You’re not trying hard enough.”

So the diligent engineers try to find ways to make their circles smoother and smoother. Little by little, they are able to balance one or two more on top of each other, for a little longer. But the circles can never be exactly perfect – so although that mathematically stable equilibrium of hundreds of balanced circles can easily be modelled in theory, it is questionable whether it can ever be built.

The interesting thing is that the question is almost never asked the other way round:
Are the yin circles perhaps too perfect?

In other words: why do we create models of things that we can’t ever practically make for real? All approximate models break down at some point, but the difference is usually small enough that the model can still inform our practical building of things. In this case, though, the model seems useless for anything we try to build that is more complex than a few building blocks!

Natural Quantum Computation

There is another way to build quantum computers – one where the machines DO behave like their models. It is a type of quantum computing known as Natural Quantum Computing (NQC): a way of using quantum systems that we CAN build, in a way that is practical and doesn’t go against what nature intended to happen to those circles.

A natural quantum computer system would be represented more like this:

[Image: a natural quantum computer, built from bumpy circles in a stable arrangement]

Now the bumpiness doesn’t matter as much, because once again our model behaves a lot more like the real thing we are building. We have attacked the problem from both sides here – we have taken a reality check on our expectations of building with bumpy circles, and we have designed a mathematical model that respects that. We can still build our quantum computer bigger and bigger using this method, and have it do more and more, but it now captures the essence of how nature’s elements really do behave. This should also answer the question of why different people have built quantum computers with different numbers of ‘qubits’ (quantum bits): try to balance them using the conventional approach, and the number depends on how well you smooth out your bumpy circles. You might be able to get four, five, or even seven wobbling precariously on top of one another. People all over the world are working on trying to improve what nature has given us, to get those circles smoother and smoother.

But using Natural QC, you can easily build systems with hundreds of qubits. The system works differently, and it doesn’t fall over so much!

So why isn’t everyone taking this Natural QC approach?

Some people feel that it is better to just keep trying to make the circles ever smoother, because people are very familiar with the theoretical ‘yin model’ and have worked with it mathematically for many years, developing algorithms that would allow such a system (once built) to solve problems very quickly. But natural quantum computing is also very good at solving specific types of problems, like those in artificial intelligence and optimization. I myself find these NQC-suited problems more interesting. Given that we also know it is much easier to build these natural quantum computers, for me it’s really a no-brainer! But some people still like the idea of building the precariously balancing conventional type of quantum computer. And it’s a matter of opinion whether you believe it is possible to make those circles smooth enough to balance. But that’s half the fun in science – there are some questions that just haven’t been answered yet!

Even so, I think that we are sometimes a little too biased towards the yin theoretical description of a perfect system, hypnotized by the beauty and simplicity of our mathematical models, and we become disheartened and frustrated when real systems don’t behave that way. But this is not necessarily a problem with nature; it can also be thought of as a problem with our models! Sure, we can make nature behave more like our models by polishing those bumpy circles. But we can also make our models behave more like nature. We can meet halfway, and have the best of both worlds. And personally, I’d rather build problem-solving things (and models of them!) with natural bumpiness than spend my entire life trying to polish circles to an unachievable smoothness.

Teaching Artificial Intelligences using Quantum Computers

This post will hopefully give a very high-level overview of the basic idea behind applying natural quantum computing to artificial intelligence. It is not a post describing programming or algorithms per se, but it will give some good background on how quantum computers can be used for learning tasks. I apologize in advance for the length of this post, but please don’t let it put you off!

Introduction

When people learn about quantum computers, they are generally told that these systems will be useful for code-breaking and factoring large numbers. But this is not the only way that you can use quantum effects to gain an advantage. The processors built by D-Wave use a process called quantum annealing (a variant of natural quantum computing). This approach is very useful for solving a different set of problems, such as those found in machine learning and artificial intelligence tasks. To begin, I will have to explain a little about one way in which machines can learn.

Artificial brains

In order to begin our journey into quantum computing and AI, you will need to be familiar with the concept of an ‘Artificial Neural Network’ (ANN). The investigation of ANNs gained momentum in the 1960s, when basic computer equipment began to provide the ability to model connected input-output systems. The most basic ANN model is known as the ‘perceptron’, and it functions by making its output depend on the inputs it receives (a bit like a logic gate). Here is a diagram of a perceptron:

A perceptron (decision-making unit) with 3 inputs

You can think of the perceptron as a little decision-making process, like a manager in a company. The manager takes several opinions (inputs), which are added together, and makes a yes/no decision based on them. The weights denote how important each input is: a high weight gives an input a lot of influence on the output, whereas a low weight lets it influence the output only slightly. Note that a large number of small, positively weighted inputs may be overwhelmed by a single strong negative input.

By stringing together many such units, large networks can be built up, and their behavior with respect to many inputs investigated. It turns out that some very complex behavior is possible even with very simple networks of these building blocks, as anyone who has worked in a company with lots of management meetings may well relate to!
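For readers who like to see things in code, here is a minimal Python sketch of the perceptron described above (my own toy illustration; the particular inputs and weights are invented for the example, chosen so the single strong negative weight overrules the two positive ones):

    import numpy as np

    def perceptron(inputs, weights):
        """A perceptron: weighted sum of the inputs, then a yes/no threshold."""
        return 1 if np.dot(inputs, weights) > 0 else 0   # 1 = "yes", 0 = "no"

    # Three opinions reach the manager; the third carries a strong negative weight.
    opinions = np.array([1.0, 1.0, 1.0])
    weights = np.array([0.3, 0.4, -1.0])
    print(perceptron(opinions, weights))  # prints 0: the one strong "no" wins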

How can a network of simple ‘input-output’ blocks learn?

Imagine taking a network of perceptrons and comparing it to our manager doing the same thing. I should remark that our manager is very experienced and always makes the correct decision. The same information that is sent to the manager is also sent to our network. The manager already knows how much he values each opinion, and in fact he makes decisions based on this. But in the case of our network, we don’t know what the manager is thinking. So instead, we set our weights (how much we value each input) completely at random at first, and see what we get. The picture below shows our network undergoing 3 ‘training sessions’:

Supervised learning using a neural network

The first column of the table shows the manager’s decisions and the network’s decisions after a first set of inputs is sent in. At first the results do not agree very well (only 25% of the network’s decisions match the manager’s), so we adjust the weights slightly and try again (second set of decisions). Note that the decisions to be made can be different each time, resulting in different ‘correct’ answers! The second column shows that after adjusting the weights, there is an improvement: the network is now making the same decisions as the manager 50% of the time. But we can still do better! We adjust the weights a bit more, and on the third attempt the network matches the manager’s decisions 100% of the time.

The network has learned to behave in the same way as the manager through a process of gradual adjustment of the weights and cross-checking against the manager’s ‘correct’ decisions. This technique is known as supervised learning, and it is one way in which we can train or teach artificial intelligences facts about the world. So is it really that simple? In essence, yes. But there is a big problem lurking under the hood that I glossed over.
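To make the training loop concrete, here is a toy sketch in Python using the classic perceptron learning rule. Everything here is my own illustration: the ‘manager’ is simulated as a simple majority vote over three opinions, and a constant extra input stands in for a bias.

    import numpy as np

    def predict(x, w):
        return 1 if np.dot(x, w) > 0 else 0

    def train(X, targets, w, rate=0.1, epochs=25):
        """Repeatedly nudge the weights toward the manager's decisions."""
        for _ in range(epochs):
            for x, t in zip(X, targets):
                error = t - predict(x, w)  # -1, 0 or +1
                w = w + rate * error * x   # adjust each weight slightly
        return w

    rng = np.random.default_rng(0)
    opinions = rng.integers(0, 2, size=(8, 3)).astype(float)  # 8 rounds of 3 opinions
    X = np.hstack([opinions, np.ones((8, 1))])  # constant extra input acts as a bias
    targets = (opinions.sum(axis=1) >= 2).astype(int)  # the "manager": majority vote

    w = train(X, targets, rng.normal(size=4))
    score = np.mean([predict(x, w) == t for x, t in zip(X, targets)])
    print(f"agreement with the manager: {score:.0%}")  # should reach 100% here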

Making networks bigger

In this example, the perceptron was exceedingly simple. There is no way that a network like this would be able to replicate the variety and subtlety of the factors that go into making a complicated decision. As we make our perceptron network bigger and bigger, it is able to deal with more complex situations. But there is a problem: the number of weights to adjust also gets bigger and bigger. And the effect of adjusting one weight may depend on the others in very subtle ways.

In my example discussed above, I stated ‘you adjust the weights and try again’. This is not too tricky with only 3 weights. But what if there were 10 weights? What if there were 100? How on earth would you know which ones to change, and in what order? You could try to be systematic about it and start by changing them one by one, each by a very small amount, and looking at the result. But exploring every single configuration of every single weight in this way is not a good idea. With 10 weights that can each be set to one of 10 values, the number of combinations you would have to try would be 10^10 = 10 billion. With 100 weights and 100 values for each, the number of combinations would total more than the number of atoms in the universe. Finding the correct configuration of all these weights is a very hard problem. So how do people solve problems like this? I mean, people use large neural networks all the time to help machines learn, so there must be a way…
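A quick back-of-the-envelope check of that growth:

    # Brute force blows up: the number of weight settings to try is
    # (values per weight) ** (number of weights).
    for n_weights, n_values in [(3, 10), (10, 10), (100, 100)]:
        print(f"{n_weights:>3} weights, {n_values} settings each: "
              f"{float(n_values) ** n_weights:.1e} combinations")
    # -> 1.0e+03, then 1.0e+10, then 1.0e+200 combinations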

And indeed there is a way. Finding the best possible combination of all the weights may not be possible, but one can try to find a good compromise. There are many, many tricks one can try, but in effect they are all just various ways of doing exactly the same thing – adjusting those weights to reach a good combination that best matches the manager’s behavior. This is known as an optimization problem. We have reduced the entirety of learning to an optimization problem! Why should we care? Well, it so happens that certain types of quantum computer can be very good indeed at solving optimization problems…

Finding a good combination

I’ll start by introducing a very common method of trying to find the best combination of all the weights, and then explain how introducing quantum effects can make this method better.

One way we could look for good combinations of weights is by starting with the weights set to random values, then picking one weight (again at random) and seeing whether adjusting it helped or hindered our progress towards correct decisions. If it helped, keep it at that setting, then pick another one and try adjusting that. Again, keep the change only if things improve. This ensures that you are always making your system better. The problem with this approach is that you can end up thinking you have a good combination, whereas if you had started from a vastly different initial choice, you might have found a much better solution. You’d never know. Researchers in the field call this kind of method gradient descent (what I have described is, strictly, a greedy local search), and there are numerous variants of this simple way of doing it, but they all suffer from this problem to some extent.
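Here is a minimal sketch of that keep-only-improvements strategy. A bumpy toy loss function stands in for the real measure of disagreement with the manager; all the names and numbers are my own inventions for illustration:

    import numpy as np

    def greedy_search(loss, w, steps=2000, scale=0.1, seed=0):
        """Tweak one randomly chosen weight at a time; keep only improvements."""
        rng = np.random.default_rng(seed)
        best = loss(w)
        for _ in range(steps):
            trial = w.copy()
            trial[rng.integers(len(w))] += rng.normal(scale=scale)
            cand = loss(trial)
            if cand < best:  # keep the change only if things improved
                w, best = trial, cand
        return w, best

    # A bumpy toy loss with many local minima; its global minimum is 0 at w = 0.
    loss = lambda w: np.sum(w**2) + 2.0 * np.sum(np.sin(5.0 * w) ** 2)
    w0 = np.random.default_rng(1).uniform(-2, 2, size=5)
    print("final loss:", greedy_search(loss, w0)[1])  # often stuck above 0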

A method known as simulated annealing tries to get around this problem by sometimes allowing the weights to change even if the change makes your outcome seem worse for a while. (This is similar to making a ‘sacrifice move’ in chess, whereby accepting a loss, such as the opponent taking one of your pieces, puts you in a good position to execute a clever strategic move later in the game.) The idea is that if you allow these sacrificial moves to happen a lot during the initial training phases, and reduce how often you allow them later in the training, you generally end up with a much better set of weights. In simulated annealing, the sacrificial moves (adjusting the weights to make things worse) are chosen pretty much at random, in the hope that some of them will help.
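And a matching sketch of simulated annealing on the same toy loss. The linear cooling schedule and Metropolis-style acceptance rule are standard textbook choices, not anything D-Wave-specific:

    import numpy as np

    def simulated_annealing(loss, w, steps=20000, scale=0.1, T0=2.0, seed=0):
        """Greedy search plus 'sacrifice moves', allowed less often as we cool."""
        rng = np.random.default_rng(seed)
        current = loss(w)
        for k in range(steps):
            T = T0 * (1.0 - k / steps) + 1e-9  # temperature slowly cools to ~0
            trial = w.copy()
            trial[rng.integers(len(w))] += rng.normal(scale=scale)
            cand = loss(trial)
            # Always accept improvements; accept worse moves with a probability
            # that shrinks with how bad the move is and with the temperature.
            if cand < current or rng.random() < np.exp((current - cand) / T):
                w, current = trial, cand
        return w, current

    loss = lambda w: np.sum(w**2) + 2.0 * np.sum(np.sin(5.0 * w) ** 2)
    w0 = np.random.default_rng(1).uniform(-2, 2, size=5)
    print("final loss:", simulated_annealing(loss, w0)[1])  # usually much nearer 0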

Quantum annealing is similar to this approach, but uses quantum effects to help the system adjust its weights in a smarter way, drawing on the quantum mechanical concepts of superposition and entanglement. Being able to put the weights into a quantum mechanical ‘superposition’ of states (each weight taking several values at the same time) allows the system to see in advance where the best combinations might lie. (Using the chess analogy again, this is similar to looking many moves ahead, the player imagining the pieces in all their different combinations before making the best move.) If quantum annealing is working properly, the system will know which sacrificial moves to make along the way, and should always find the best combination of weights at the end.

Classical versus Quantum Annealing
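Quantum annealing itself can’t be simulated faithfully in a few lines, but the kind of problem handed to such a machine can be written down compactly: find the configuration of binary ‘spins’ that minimizes an energy built from individual biases and pairwise couplings. Here is a brute-force toy instance (the h and J values are made up purely for illustration):

    import itertools
    import numpy as np

    # Find spins s_i = +/-1 minimizing the Ising energy
    #   E(s) = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j
    # An annealer searches for this minimum; here we simply brute-force
    # a tiny 4-spin instance with made-up biases and couplings.
    h = np.array([0.5, -0.3, 0.2, -0.1])                        # per-spin biases
    J = {(0, 1): -1.0, (1, 2): 0.6, (2, 3): -0.8, (0, 3): 0.4}  # couplings

    def energy(s):
        return float(np.dot(h, s) + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

    best = min(itertools.product((-1, 1), repeat=4), key=energy)
    print("lowest-energy spins:", best, "with energy:", energy(best))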

A quantum education system for programs

You’ve now found your optimal set of weights by utilizing a quantum computation. Awesome. But wait… There’s an extra bonus here.

Unlike other quantum algorithms, using a quantum computer for learning means that you don’t just get a number out at the end. You actually get a trained program (a network of weights), which means our quantum computation is a manager-generator! So you don’t need the quantum computer once it has trained the manager, just as you don’t need an entire business school to make corporate decisions. The quantum computer can then be set to work training other things whilst the manager-programs themselves go out into the world to do great things! Of course, your manager could always go back to the quantum-training school to improve further. The interesting thing is that the quantum computer doesn’t just have to train one type of program (a manager). It can teach almost anything, given enough data about the real world. It could train medical-diagnosis programs, image-recognition programs, or even programs that summarize the key concepts of a book or a scientific paper. The bigger the quantum system, the more weights it can adjust, and the more high-level the concepts that can be learned.

Of course, my business analogy wasn’t purely accidental. Maybe one day soon quantum-trained programs will be predicting the stock market and making investment decisions based on the results. Business may never be the same again! Whatever subject one chooses to focus on at this quantum school, I personally think that the ability to train programs to go out into the world and solve problems themselves, and to generate machine intelligences that learn, is much more exciting than factoring large numbers.