18 thoughts on “Ten beautiful computers”

  1. From the article: “D-Wave Systems of British Columbia announced a prototype quantum computer in January, 2007. It can play Sudoku.”

    First comment: “Is Sudoku particularly difficult for a computer? I’d be more impressed if it could play Go.”

    Geordie, when are you going to give us a quantum Go program?!?!

  2. Hi Dave, I don’t think there is a known way to map Go into a constraint-satisfaction or optimization problem. Is this correct? I don’t know anything about Go, although Sudoku obviously differs in that it is just a simple 1-person constraint-satisfaction exercise, whereas a Go player would have to solve a new problem at each step of the game given updated information. If you can map each step to (say) SAT or MAX-SAT then it might be possible. If anyone wants to try, you can build and test any application you can think of using the web services link off the homepage or in the sidebar here.
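
Since Sudoku is the running example here, it may be worth showing just how small the classical version of this 1-person constraint-satisfaction exercise is. Below is a minimal backtracking sketch (illustrative only, not D-Wave’s method):

```python
# Minimal backtracking Sudoku solver; 0 marks an empty cell.
def valid(grid, r, c, v):
    # Row, column, and 3x3-box constraints.
    if v in grid[r]:
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v
               for i in range(3) for j in range(3))

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False  # no value fits here: backtrack
    return True  # no empty cells left: solved
```

The interest in a quantum version is presumably not that Sudoku is hard, but that the same constraint-satisfaction encoding generalizes to problems that are.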

  3. Geordie,

    I guess that comment was referring to the fact that while modern computers/programs can beat the best human Chess player (some say that Kasparov is the best one *of all time*), they cannot beat medium-‘dan’ players in Go, and not for lack of trying (the Japanese have apparently worked on Go programs for as long as Westerners have worked on Chess ones).

  4. Actually, good amateurs can beat the best Go computers/programs. That being said, I can still have my ass handed to me from time to time (maybe all the time now, it’s probably been a year since I played).

  5. Hmm, a grid of +1/-1/undecided bits… Which one flips next to maximize my energy (and minimize the opponent’s)…?

    Casual reading on Wikipedia suggests that computers are bad at playing Go (actually, I was wrong — they cannot reliably beat even low-‘dan’ players, not medium-level ones!) not only because the decision tree grows much faster than in, say, chess, but because there is no obvious way to evaluate the strength of a given position (maybe assign it its minimum energy in the Ising spin-glass sense?🙂 ).

    OK, OK, I will shut up now…
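
The half-joking suggestion above — scoring a position by its energy in the Ising spin-glass sense — at least has a well-defined energy function. A toy sketch for a grid of +1/-1 spins with uniform nearest-neighbour coupling (the coupling and field values are illustrative, not tied to any real Go evaluator):

```python
def ising_energy(spins, J=1.0, h=0.0):
    """Toy Ising energy E = -J * sum_<ij> s_i*s_j - h * sum_i s_i,
    summed over nearest-neighbour pairs of a 2-D grid of +1/-1 spins."""
    rows, cols = len(spins), len(spins[0])
    interaction = 0
    field = 0
    for r in range(rows):
        for c in range(cols):
            field += spins[r][c]
            if r + 1 < rows:    # neighbour below
                interaction += spins[r][c] * spins[r + 1][c]
            if c + 1 < cols:    # neighbour to the right
                interaction += spins[r][c] * spins[r][c + 1]
    return -J * interaction - h * field
```

An “undecided” point could be treated as s = 0, with a position scored by minimizing over its undecided spins — though nothing says this energy correlates with Go strength.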

  6. I think it’s quite amusing that they’ve shown a picture of the filter block rather than the actual processor, although it is a really neat photo.

    Quantum Go would be cool. One of the main aspects of Go is the ability of good players to perform pattern matching against known ‘board sub-configurations’, both on the current move and whilst thinking about potential moves ahead. Now, if I remember correctly, your QC would be quite good at this type of thing🙂 I think energy minimization of the current move / a few moves ahead, combined with pattern matching (perhaps developed through a learning algorithm), might be a good approach.

    Here’s an interesting thesis on learning patterns in Go:


  7. I was thinking more of approaching a Go player from the deep-learning perspective, via training from examples (all recorded games in history, for example). Evaluating outcomes of different sequences of moves in a game with that many possible sequences probably can’t work; you need a different strategy. Also, if humans are good at the game, that is a clue: biological brains don’t evaluate 2^whatever possibilities, they use strategies that have worked for them in the past (i.e. a learning-based strategy makes a lot more sense). Given that most deep learning is closely related to Ising models, there might be a possibility to build something here.

  8. Side track:
    – Go programs have actually improved by leaps and bounds as of late. And it /is/ something where more computational power can help. It’s all just Monte Carlo sampling, so distributed computing would definitely help as well. Look up MoGo for more info.

    On track:
    – The approaches for Go typically don’t include CSP type formalizations. It’s either random sampling of playouts or pattern databases for small board sections. Technically it should be encodable, but the uncertainty of what your opponent will do makes it brutal (even if the world is fully observable).

    Different Track:
    – You should take on the E2 challenge😉
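
The Monte Carlo sampling mentioned above can be illustrated in miniature: score each candidate move by the average result of many random playouts. The sketch below uses a trivial stand-in game (one-heap Nim: take 1–3 stones, taking the last stone wins) rather than Go, and is the bare principle, not MoGo’s actual code:

```python
import random

def random_playout(stones, to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return to_move          # this player took the last stone
        to_move = 1 - to_move
    return to_move

def best_move(stones, player, n_playouts=2000):
    """Pick how many stones to take by highest playout win rate."""
    scores = {}
    for take in range(1, min(3, stones) + 1):
        if stones - take == 0:
            scores[take] = 1.0      # taking the last stone wins outright
            continue
        wins = sum(random_playout(stones - take, 1 - player) == player
                   for _ in range(n_playouts))
        scores[take] = wins / n_playouts
    return max(scores, key=scores.get)
```

A real Go engine layers tree search (e.g. UCT) on top of the playouts, but the sampling principle is the same.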

  9. Go on ever-increasing board sizes is PSPACE-hard (unless you restrict to games that end in time polynomial in the board size)… I think someone actually showed that it is EXPTIME-complete with Japanese rules. (The decision problem is: given a board layout, is there a move that forces a win?)

  10. Ooh! A quick calculation after some investigation makes it look like, with a big enough cluster (let’s say 1,024 quad-core nodes) and maybe a few months to a few years, E2 should be possible to brute-force if it’s sufficiently uniformly random. It may cost more than $2M to buy and run such a cluster for a year, but oh well. 😉

    If you’d like an Ising model for it, it can definitely be done with <39,000 variables of max degree <1,200. There’s probably a way of doing it with many fewer variables, but the relations between the variables would probably be more complicated than quadratic (e.g. there’s a way I can think of where they’re quartic with <6,000 variables).

    If anyone else wants to do more thorough calculations, there are at least 19 (and probably not many more) patterns that can appear on a tile’s edge, not including the gray edges.

  11. Hi Neil, we did a pretty exhaustive investigation of E2 and I don’t think you can brute force it. The problem is too big. Also all of the obvious heuristics fail badly. I can send you our write-up on the problem.

  12. That’s a shame. 😦

    I guess they made it harder than completely random (which would be pretty easy to do), ’cause with an independent, uniform, random distribution of tile edges amongst the 19 options, given a border in a particular configuration, there’s only about a 10% chance of the search making it past the first inner row ((14*13+1)*4/19^3 = 0.107 is an upper bound on the expected number of tiles matching the last position), and a bit less for each subsequent row if it makes it that far. Of course, there are probably a lot of possible border configurations, but they’d get trimmed down by matching too.

    I’d ask for a list of tiles, but I’m having enough trouble finding time for things like finding an apartment for June and answering email. Maybe I’ll take a gander once I’m out there.
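
For what it’s worth, the back-of-envelope bound quoted in the comment above checks out numerically:

```python
# Sanity check of the estimate above: with 19 equally likely edge
# patterns, (14*13 + 1) * 4 / 19^3 bounds the expected number of
# tiles that can match a given position in the first inner row.
bound = (14 * 13 + 1) * 4 / 19**3
print(round(bound, 3))  # prints 0.107
```

The comment’s caveat stands: this assumes a uniform random tile distribution, which the E2 designers apparently avoided.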

  13. E2:
    – It’s all about the phase transition. The bugger picked a problem in the hard zone to avoid most of the common approaches to the problem. An exhaustive search doesn’t need to mean brute force, though… just think of where we’d be without clause learning in 3-SAT solving😉.

    – That link is the MoGo guys😉. So they’ve just beaten a 9d with a 7-stone handicap. Most pro games go down to the wire (a few stones’ difference), and each handicap stone is considered to be worth ~10 points. So the software still has 70 points to make up in the game, but the ability of Go programs (specifically MoGo) has been accelerating as of late. I’d only give it another 5 years, TBH.

  14. Okay. There are also different definitions of “brute force” — like how many would consider insertion sort “brute-force sorting”, but there’s always the exhaustive search of all possible permutations of the input, which is much more “brute force”. In the Computational Geometry course I took, there was a case where Jit (the prof) asked for a brute-force way of solving some problem just to give an upper bound on the time, and I just said that you can check all 2^n possibilities and return the one that works. He was a bit surprised, since he was expecting a fairly simple n^3 algorithm.😀 One might say that even problem properties we take for granted can often help solve the problem faster.
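
The check-all-2^n-possibilities answer above is the archetype of brute force, and it is easy to write down generically. A sketch over subsets (illustrative — `is_valid` is a hypothetical predicate, not anything from the course):

```python
from itertools import combinations

def brute_force_subset(items, is_valid):
    """Exhaustively try all 2^n subsets of `items`, smallest first;
    return the first subset satisfying `is_valid`, or None."""
    for k in range(len(items) + 1):
        for subset in combinations(items, k):
            if is_valid(subset):
                return subset
    return None

# Example: which subset of these numbers sums to 12?
print(brute_force_subset([3, 9, 8, 4], lambda s: sum(s) == 12))  # (3, 9)
```

Knowing structure (e.g. that the answer has a geometric property) is exactly what lets you replace this 2^n scan with something like the prof’s n^3 algorithm.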
