It’s like the quantum computer is playing 20 questions…

I’ve been thinking about the BlackBox compiler recently and came up with an interesting analogy for the way it works. There are actually lots of different ways to think about how BlackBox works, and we’ll post more of them over time, but here is a very high-level and fun one.

The main way that you use BlackBox is to supply it with a classical function that computes the “goodness” of a given bitstring by returning a real number (the lower the number, the better the bitstring).

Whatever your optimization problem is, you need to write a function that encodes your problem into a series of bits (x1, x2, x3… xN) to be discovered, and which also computes how “good” a given bitstring (e.g. 0,1,1…0) is. When you pass such a function to BlackBox, the quantum compiler repeatedly comes up with ideas for bitstrings, and using the information that your function supplies about how good its “guesses” are, it quickly converges on the best bitstring it can find.
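As a sketch of what such a function looks like, here is a toy goodness function in Python. Note that the problem, the scoring, and the `blackbox.solve` call mentioned in the comment are all invented for illustration; this is not the actual BlackBox API.

```python
# Hypothetical sketch of a BlackBox-style objective function.
# Lower return values mean better bitstrings.

def goodness(bits):
    """Score a candidate bitstring: lower is better.

    Toy problem (invented for illustration): we want exactly half
    the bits set, with the set bits pushed toward the front.
    """
    n = len(bits)
    ones = sum(bits)
    balance_penalty = abs(ones - n // 2)            # penalize deviating from half set
    position_penalty = sum(i for i, b in enumerate(bits) if b) / n  # prefer 1s early
    return balance_penalty + position_penalty

# A BlackBox-style compiler would then be invoked roughly like
#   best = blackbox.solve(goodness, num_vars=8)
# repeatedly querying goodness() on the bitstrings it suggests.
```

The compiler never sees how the score is computed; it only observes the numbers coming back, which is what makes the interface so general.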

So using this approach, the quantum processor behaves as a co-processor to a classical computing resource. The classical resource handles one part of the problem (computing the goodness of a given bitstring), and the quantum computer handles the other (suggesting bitstrings). I realized that this is described very nicely as the two computers playing 20 questions with one another.

[Figure: the quantum computer playing 20 questions with the classical computer]

The quantum computer suggests creative solutions to a problem, and the classical computer gives feedback on how good each suggested solution is. Using this feedback, BlackBox intelligently suggests a new solution. So in the example above, BlackBox knows NOT to make the next question “Is it a carrot?”
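The suggest-and-score loop can be sketched entirely classically. In the sketch below, a simple random-mutation search stands in for the quantum suggestion step; this illustrates the protocol between the two sides, not how the quantum hardware actually generates candidates.

```python
import random

def suggest(best, rng):
    """Stand-in for the quantum suggestion step: flip one random bit
    of the best-known bitstring. (The real hardware's sampling is
    nothing like this; this is just a placeholder.)"""
    bits = best[:]
    bits[rng.randrange(len(bits))] ^= 1
    return bits

def optimize(goodness, num_vars, iterations=200, seed=0):
    """The '20 questions' loop: one side suggests bitstrings,
    the other scores them, and feedback steers the search."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(num_vars)]
    best_score = goodness(best)
    for _ in range(iterations):
        candidate = suggest(best, rng)     # "creative" suggestion
        score = goodness(candidate)        # classical feedback
        if score < best_score:             # keep only improvements
            best, best_score = candidate, score
    return best, best_score
```

The division of labour is the point: `goodness` lives entirely on the classical side, while `suggest` is the part a quantum co-processor would replace.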

There is actually a deep philosophical point here. One of the missing pieces in the puzzle of artificial intelligence is how to make algorithms and programs more creative. I have always been an advocate of using quantum computing to power AI, and we are now starting to see concrete ways in which it could address some of the elusive problems that crop up when trying to build intelligent machines.

At D-Wave, we have been starting some initial explorations in the areas of machine creativity and machine dreams, but it is early days and the pieces are only just starting to fall into place.

I was wondering if you could use the QC to actually play 20 questions for real. This is quite a fun application idea. If anyone has any suggestions for how to craft 20 questions into an objective function, let me know. My first two thoughts were to do something with WordNet and NLTK. You could try either a pattern-matching or a machine-learning version of ‘mining’ WordNet for the right answer. This project would be a little Watson-esque in flavour.
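As a very rough sketch of what a 20-questions objective function could look like: encode each yes/no question as one bit, score a bitstring by how badly it disagrees with the user's answers, and map the best bitstring back to a word. Everything below — the questions, the candidate words, and their attribute vectors — is made up for illustration; a real version might mine WordNet (e.g. via NLTK hypernym chains) to build the attribute table instead.

```python
# Toy 20-questions objective. All data here is invented for
# illustration; a real version could derive attributes from WordNet.

# Each bit answers one yes/no question about the secret word.
QUESTIONS = ["is it alive?", "is it bigger than a breadbox?",
             "is it edible?", "does it live underwater?"]

# Candidate answers, each encoded as a bitstring over QUESTIONS.
CANDIDATES = {
    "carrot":   [1, 0, 1, 0],
    "whale":    [1, 1, 1, 1],
    "kangaroo": [1, 1, 0, 0],
    "rock":     [0, 0, 0, 0],
}

def goodness(bits, answers):
    """Lower is better: count mismatches between a suggested
    bitstring and the answers the user actually gave."""
    return sum(b != a for b, a in zip(bits, answers))

def best_guess(answers):
    """Map the best-matching attribute bitstring back to a word."""
    return min(CANDIDATES, key=lambda w: goodness(CANDIDATES[w], answers))
```

Here the optimizer would search bitstring space using `goodness`, and the dictionary translates the winning bitstring back into a guess.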

6 thoughts on “It’s like the quantum computer is playing 20 questions…”

  1. Of course I’m missing something special — I got distracted halfway through reading that… But still, the articles there say something about all this and what end it really serves, I suppose. Keep up with the interesting content!🙂

  2. On the 20 Q’s game: you could start by having the D-Wave computer run a vocabulary search for such-and-such words as they are spoken (excuse me for being obvious, and for assuming D-Wave can record words and then analyse them with its software), and then save that as ‘external user input’. Then the computer would scan its database of words against the ‘external user input’ and see where they match up. I don’t know if that is helpful or just annoying blab from the internet, but yeah…


  4. I think the fun part of twenty questions comes about in ‘knowing’ the person asking the questions — i.e. guessing what a specific user is likely to think of. So yes, word searches are a good place to start, but I think the fun would come in analyzing the specifics of the system/user asking the questions and mapping its relation to other previously encountered systems/users. Such that it would be more likely to guess kangaroo if the user was Australian and more likely to guess wildebeest if the user was South African. The question is how much information (and which) to feed the system about its opponent.

    Really it’s a game of useful generalizations where measuring the ‘goodness’ of a learning algorithm (rather than simply the goodness of the guess) is the fun part. So if you have multiple algorithms learning to play twenty questions, another external system should be in place to guess which is likely to succeed based on the user.
