The dreams of spiritual machines

When I was in middle school, every year we had to select a project to work on. These projects came from a list of acceptable projects. The projects were typical science-ish projects you’d expect a seventh grader to take on. One year my project was about whooping cranes. Not sure why I picked that one. Maybe I thought it might be related to whooping cough.

One year the subject I picked was dreams. What were they? How did they come about? What, if anything, did they tell us about our waking life? I remember being intensely fascinated by the topic at the time, and feeling that the answers I was getting from grown-ups and from the encyclopedias checked out of the school library (there was no internet back then, at least not in a form I could access) were not satisfactory at all. This was one of my earliest realizations that there were questions no one yet knew the answers to.

The subject of dreams has come up in my adult life several times, and each time the same questions bubble up from my early encounter with them. An acquaintance of mine went through a period of having night terrors, during which she would scream so loudly that it would wake people in neighboring houses. She described them as a sense of horror and dread of the most intense and indescribable kind, accompanied by the sure knowledge that it would never end. This led to multiple 911 calls over a period of years. Several trips to specialists and tests revealed nothing out of the ordinary. Then one day the terrors suddenly stopped. To this day no one has a good explanation for why they started, or why they stopped.

One of my friends has multiple vivid, realistic dreams every night, and he remembers them. They are also often terrifying. I on the other hand rarely dream, or if I do, I don’t remember them.

Recently I have been thinking of dreams again, and I have four computer scientists to thank. One of them is Bill Macready, my friend and colleague at D-Wave, and the inventor of the framework I’ll introduce shortly. The second is Douglas Hofstadter. The third is Geoff Hinton. The fourth is David Gelernter.

Gelernter is a very interesting guy. Not only is he a rock star computer scientist (Bill Joy called him “one of the most brilliant and visionary computer scientists of our time”), he is also an artist, an entrepreneur, and a writer with an MA in classical literature. He was badly injured opening a package from the Unabomber in 1993. He is the author of several books, but the one I want to focus on here is The Muse in the Machine, which is must-read material for anyone interested in artificial intelligence.

In this book, Gelernter presents a compelling theory of cognition in which emotion, creativity and dreams are central, critically important aspects of creating machines that think, feel and act as we do. In this theory, emotion, creativity, analogical thought and even spirituality are viewed as essential to building machines that behave as humans do. I can’t do the book justice in a short post – you should read it.

I am going to pull one quote out of the book, but before I do I want to briefly touch on what Geoff Hinton has to do with all of this. Hinton is also a rock star in the world of artificial intelligence, and in particular in machine learning. He was one of the inventors of backpropagation, and a pioneer of deep belief nets and unsupervised learning. A fascinating demo I really like starts around the 20:00 mark of this video. In this demo, he runs a deep learning system ‘in reverse’, in generative mode. Hinton refers to this process as the system “fantasizing” about the images it generates; however, this fantasizing can also be thought of as the system hallucinating, or even dreaming, about the subjects it has learned. Systems like these exhibit what I believe to be clear instances of creativity – they generate instances of objects that have never existed in the world before, but that share some underlying property. In Hinton’s demo, that property is “two-ness”.
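To make the idea of running a learned model “in reverse” concrete, here is a toy sketch of generative sampling in a tiny restricted Boltzmann machine. Everything here is illustrative: the weights are random rather than trained, and this is not Hinton’s actual network, just the bare mechanics of letting a network “fantasize” visible patterns by Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM with random (untrained) weights -- illustrative only.
n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.5, size=(n_hidden, n_visible))  # hidden-to-visible weights
b_h = np.zeros(n_hidden)   # hidden biases
b_v = np.zeros(n_visible)  # visible biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fantasize(steps=50):
    """Block Gibbs sampling from noise: the network 'dreams up' a visible vector."""
    v = rng.integers(0, 2, size=n_visible).astype(float)  # start from random noise
    for _ in range(steps):
        # Sample hidden units given the visible units...
        h = (rng.random(n_hidden) < sigmoid(W @ v + b_h)).astype(float)
        # ...then sample visible units given the hidden units.
        v = (rng.random(n_visible) < sigmoid(W.T @ h + b_v)).astype(float)
    return v

sample = fantasize()
print(sample)  # a binary "fantasy" pattern
```

With trained weights, repeated sampling like this produces visible patterns resembling the training data – the digit “fantasies” in the demo.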

Alright, so back to Gelernter, and the quote from The Muse in the Machine:

A computer that never hallucinates cannot possibly aspire to artificial thought.

While Gelernter speaks a somewhat different language than Hinton, I believe that the property of a machine he is referring to here – the ability to hallucinate, fantasize or dream – is exactly the sort of thing Hinton is doing with his generative digit model. When you run that model, I would argue, you are seeing the faintest wisps of the beginning of true cognition in a machine.

Douglas Hofstadter is probably the most famous of the four computer scientists I’ve been thinking about recently. He is of course the author of Gödel, Escher, Bach, which every self-respecting technophile has read, but more importantly he has long been a proponent of thinking about cognition from a very different perspective than most computer scientists take. For Hofstadter, creativity and analogical reasoning are the key things we need to understand in order to understand our own cognition. Here he is in “Pattern-finding as the Core of Intelligence”, the introduction to his book Fluid Concepts and Creative Analogies:

In 1977, I began my new career as a professor of computer science, aiming to specialize in the field of artificial intelligence. My goals were modest, at least in number: first, to uncover the secrets of creativity, and second, to uncover the secrets of consciousness, by modeling both phenomena on a computer. Good goals. Not easy.

All four of these folks share the view that understanding how analogical thinking and creativity work is an important and under-studied part of building machines like us.

Recently we’ve been working on a series of projects that are aligned with this sort of program. The basic framework is introduced here, in an introductory tutorial.

This basic introduction is extended here.

One of the by-products of this work is a computing system that generates vivid dreamscapes. You can look at one of these by clicking on the candle photograph above, by following the Temporal QUFL tutorial, or by clicking on the direct link below.

The technical details of how these dreamscapes are generated are described in these tutorials. I believe these ideas are important. These dreamscapes remind me of H.P. Lovecraft’s Dreamlands, and of this passage from Celephaïs:

There are not many persons who know what wonders are opened to them in the stories and visions of their youth; for when as children we learn and dream, we think but half-formed thoughts, and when as men we try to remember, we are dulled and prosaic with the poison of life. But some of us awake in the night with strange phantasms of enchanted hills and gardens, of fountains that sing in the sun, of golden cliffs overhanging murmuring seas, of plains that stretch down to sleeping cities of bronze and stone, and of shadowy companies of heroes that ride caparisoned white horses along the edges of thick forests; and then we know that we have looked back through the ivory gates into that world of wonder which was ours before we were wise and unhappy.

I hope you like them.

11 thoughts on “The dreams of spiritual machines”

  1. I don’t see much creativity in these so-called dream-like videos. To me, it looks more like a time-averaged, blurred version of the original.

    I didn’t expect anything else by the way.

    Nevertheless, your theory about creativity sounds interesting.

  2. This demo was the very simplest thing that you can do with this approach – it will get much more impressive over time🙂

    Or looking at it a different way, one could say that dreams *are* time-averaged, blurred versions of the original…

    • “This demo was the very simplest thing that you can do with this approach – it will get much more impressive over time”
      — I guess it was🙂

      “Or looking at it a different way, one could say that dreams *are* time-averaged, blurred versions of the original…”
      — This seems far-fetched to me. Dreams look more like little “pieces” of life as lived or only heard of, put together in a single continuous sequence. So they look very vivid, not blurred.

      I wonder what it will look like when you feed the algorithm with much more various, life-like videos. I’ll have to read more on the subject anyway🙂.

      • “I wonder what it will look like when you feed the algorithm with much more various, life-like videos”

        We’re going to try that next (or at least work up to it!). Should be a cool experiment; hopefully we can post more results here as they come in 🙂

  3. This reminds me of how amazed I was when I first saw, or rather heard, the HARMONET program demoed (http://tinyurl.com/84ysq2t). It harmonizes melodies in the style of J.S. Bach. While this is a somewhat mechanistic application of musical improvisation, it is the deviations from the narrow rules that make its harmonies appealing and appear genuinely creative.

    As just a recreational piano player, I found it hard to admit that this program, with its rather limited number of feed-forward neurons, was doing a better job than I could. It’s also very hard to believe that this was already twenty years ago. In the meantime we have moved on to real-time machine music improvisation. Once these machines learn to dream musically, you’d expect that genuinely creative and appealing compositions can’t be far behind 🙂

  4. A great demo!
    In figure 5 of the tutorial (http://www.dwavesys.com/en/dev-tutorial-qufl.html) you mention that BlackBox takes over when the time for the exact enumeration solver starts to increase, indicating a speed-up with BlackBox. Are there any stats on how much faster BlackBox is?

    Also, is there an upper bound on the number of features the black box can manipulate? I am curious if there is an analogous “forgetting” effect when the system is trying to develop features for more and more dissimilar images and bumps into a capacity limit.

    • Hi Hal, those are very good questions!

      BlackBox is a better choice when you are using 10 or more features. Since BlackBox is a heuristic (no optimality guarantee), it’s hard to give a single figure of merit for the speed-up, but roughly you get a speed-up of about 2^sqrt(K) when using BlackBox instead of enumeration for K features.

      There is no hard upper bound on the number of features. However, the algorithms in the D-Wave developer kit scale exponentially in the square root of the number of variables, so eventually you reach a point where in practice the learning phase takes too long — that’s why we need the superconducting quantum hardware!

      There are “forgetting” effects. We have an online version of the learning algorithm, where the features change as Butterfly “sees” new images. When the feature set is small, Butterfly “forgets” what it used to know and replaces it with what it just saw. It is fascinating. How to architect feature sets for short- vs. long-term memory, and how to deal with new inputs unlike anything seen before, are open problems we’re working on now.

  5. Pingback: Out of the AI Winter and into the Cold | Wavewatching
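The rough scaling figures quoted in the reply above — a speed-up of about 2^sqrt(K) over exact enumeration for K features — can be illustrated with a quick back-of-the-envelope calculation. The numbers below are purely illustrative of that stated rule of thumb, not benchmark results:

```python
import math

def estimated_speedup(k_features):
    """Rule-of-thumb speed-up of the heuristic solver over exact
    enumeration for k_features features, per the ~2**sqrt(K) figure."""
    return 2 ** math.sqrt(k_features)

# The estimated speed-up grows quickly with the feature count:
for k in (9, 16, 25, 36):
    print(f"K={k:2d}: ~{estimated_speedup(k):.1f}x")
```

By this estimate the advantage is modest at 10 features but grows rapidly, which matches the advice in the thread to prefer the heuristic solver once the feature count reaches double digits.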
