Strange ideas about intelligence (while my QUBO solver runs)

Having just read Jeff Hawkins’ On Intelligence, Danny Hillis’ PhD thesis The Connection Machine and Peter Norvig’s Artificial Intelligence: A Modern Approach, I have been having some strange ideas about intelligence.

What got me thinking about intelligence in the first place was the observation that many of the tasks that seem to be difficult for computers, but relatively easy for biological brains, are most naturally thought of as NP-hard optimization problems: basically anything that involves complex pattern matching, such as recognizing speech, inference, relational database search, vision, and learning.

Another thing that seems interesting is this: take any algorithm that scales linearly with input size. For the problem this algorithm solves, can you think of a single example where a human could beat a computer? I can’t think of one.

Finally: biological brains operate with small amounts of input data (five senses). For example if we look at a photograph the total data we receive is quite small.

Is it possible that the notion of complexity classes is important for asking the right questions about intelligence? Here’s a rough outline of an idea.

  1. Categorize all of the problems that biological brains have to solve as well-posed computational problems.
  2. The subset of these problems that can be solved with algorithms that scale polynomially can always be done better with silicon than with bio-brains. Note that this isn’t true in general; it requires the observation that the input problem instance size is small for bio-brains (e.g. the number of pixels in a photograph).
  3. There are problems that have survival value to solve that are harder than P.
  4. Brains evolved excellent heuristics to provide quick approximate solutions to the problems harder than P, and currently it is that subset where bio-brains beat silicon.
  5. In a hierarchy of difficulty, some problems will be too hard for bio-brains to evolve heuristics for. This means that the primary differentiator in this picture of things will be the “easiest hard problems”. The hard hard problems are too hard for evolution to create hardware heuristics for.
  6. The easiest hard problems (the group most likely to have good hardware heuristics and bad software heuristics) are NP-hard optimization problems.

In this view of the world, the key to mimicking bio-brains (at least from the computational viewpoint) is being able to quickly solve (in the heuristic sense) the types of NP-hard problems generally associated with intelligence. This is an algorithmic issue, not so much a hardware capability issue. That argues against Ray Kurzweil’s implicit assumption that hardware capability is sufficient for intelligence to arise in machines. I tend to think now that breakthroughs in machine intelligence are going to come from algorithms, either new classical algorithms or the capability to run quantum algorithms on quantum hardware, and not from Moore’s Law advances.
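Since the title mentions a QUBO solver: here is a minimal sketch of what a heuristic for an NP-hard optimization problem can look like, simulated annealing on a toy QUBO instance. This is my own illustration, not D-Wave’s solver; the matrix `Q` and all parameters are made up for the example.

```python
import itertools
import math
import random

# Toy QUBO instance: minimize sum_ij Q[i][j] * x[i] * x[j] over x in {0,1}^n.
Q = [[-3,  2,  0],
     [ 2, -4,  1],
     [ 0,  1, -2]]
n = len(Q)

def energy(x):
    # QUBO objective for a binary vector x
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(steps=5000, t0=2.0, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    e = energy(x)
    best, best_e = x[:], e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3   # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                            # propose a single bit flip
        e_new = energy(x)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                        # reject: undo the flip
    return best, best_e

best, best_e = anneal()
exact = min(energy(list(bits)) for bits in itertools.product([0, 1], repeat=n))
print("annealed minimum:", best_e, " brute-force minimum:", exact)
```

On an instance this small, brute force is of course trivial; the point of the heuristic is that it keeps working (approximately) when 2^n enumeration does not.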

32 thoughts on “Strange ideas about intelligence (while my QUBO solver runs)”

  1. Not so strange, for those involved in studying the cognitive abilities of spiders–particularly salticids (jumping spiders, visual hunters) of the genus Portia. They manage an incredible arsenal of complex hunting behaviours (aggressive mimicry, adaptive strategies, complex route selection to a prey item allowing for en route concealment despite losing line-of-sight, etc.) using a phenomenally spartan budget of neurons. Despite the apparent lack of hardware, they possess a limited behavioural sophistication on par with some of the more successful mammalian hunters. Not sure it is a clear case of better algorithms, though (not an apples-to-apples comparison, for starters). It may be the case that certain behaviours do not strictly require the level of complexity we think they do, or that Portia uses some of the same algorithms as all visual hunters but has evolved highly efficient processing capabilities for certain subtasks (analogous to using an ASIC instead of a general-purpose CPU) within its limited sphere of activities as a specialist predator of other spiders. Hope we find out some day…

    Here is a nice article on the critter:
    http://www.laweekly.com/general/quark-soup/the-itsy-bitsy-spider/12791/

    -Mike

  2. - “Finally: biological brains operate with small amounts of input data (five senses).”

    Sight – “The human retina contains about 125 million rod cells and 6 million cone cells.”

    Hearing – “In humans, the number of nerve fibers within the cochlear nerve averages around 30,000.”

    Taste – “Individual taste buds (which contain approximately 100 taste receptor cells),…”

    Smell – “Humans have about 40 million olfactory receptor neurons.”

    Touch – Ambiguous…could mean pressure, vibration, temperature, etc. Suffice it to say that there’s a load of input for our bio-brains from this sense.

    We’re not such simple creatures. Five senses, yes, but dealing with the absurd amount of data is no trivial task.

    Regarding point 5. Human-extinction-in-the-near-future arguments aside, we still have loads of potential to eventually evolve / learn these “hard hard problems”.

  3. haz: Some problems clearly do have small inputs. The example that Hillis gives in the introduction to his thesis is a good one. Basically you are given a simple image (say 240×360 pixels black and white) and you are asked to describe the picture (Hillis chose something with horses and hills but you get the idea). This is a hard problem for machines and it has small input, and this sort of thing is what I was thinking about.

    Also, even if you had to consider “analog lines in” equal to all of the numbers you quote, consider this: digitize the analog signal in 4-bit words and 0.1-second steps. There are (say) 150 million analog input lines. The digital data rate in is then ~6 gigabits per second. This is a big number, but it’s not absurd.
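    That back-of-envelope figure is easy to check; a few lines under the stated assumptions (150 million lines, 4-bit samples, one sample per 0.1 s):

```python
lines = 150_000_000       # assumed number of analog input lines
bits_per_sample = 4       # 4-bit digitization
samples_per_second = 10   # one sample every 0.1 seconds

rate = lines * bits_per_sample * samples_per_second  # bits per second
print(rate / 1e9, "Gbit/s")  # -> 6.0 Gbit/s
```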

  4. Interesting idea; however, I see no real reason to prefer the idea of “it’s a (quantum-)algorithm issue” to the original idea behind the Connection Machine, i.e. that it’s about the connectivity. I mean, all the symptoms you mention still apply if you assume that bio-brains do things efficiently as long as they have sufficient parallel resources to tackle the problem in one go. This would not necessarily mean it has to do with complexity classes, but maybe more with absolute complexity and the possibility of parallelizing the problem, whatever that may mean at the brain-cell level.
    Even if you change your input data from above to 16-bit, AFAIK the internal connectivity of the brain is still orders of magnitude higher, in a “terawires” kind of sense. Add to that the fact that neurons are mainly analog (if I remember my high-school biology correctly, they take multiple analog inputs and produce analog output), and any digital system so far just looks pretty poor in comparison.

  5. I saw a presentation recently about the field of neurophysics, which was really interesting. It was given by a dude named Dr. Andre Longtin of the University of Ottawa. He presented a short introduction to the huge amount of research on a fish called Gymnotus carapo. Long story short, this fish is being theoretically modeled (using Poisson’s equation :) since it communicates through EM fields. (In something of a strange D-Wave twist, the majority of the research is done using SQUIDs….)

    Something really struck me during that presentation, though. Of the computational power a brain uses for a given sense, 80-90% of it is used as part of a feedback loop. If I remember correctly, this is why people with hearing aids cannot pick out a conversation in a loud environment, or why an image can slowly change without your brain noticing.

    I guess this would imply that one would need to design a killer RTOS.

  6. I think if you want to create a real thinking-brain emulator (one with, for example, only vision and sound input), you need at least a map of all the connections between individual neurons through their synapses (an almost impossible task). You also need a good enough approximate simulation of each neuron. But the biggest problem is that neurons change the electrical resistance of their synapses over time, depending on the electrical activity in those synapses. If electrical activity in a synapse is very rare, the neuron loses its connection with the neurons connected through those poorly active synapses. And after disconnecting, those synapses somehow find other neurons with which they will get frequent electrical activity, and a strong connection then forms between the neurons… So, okay, you can simulate each neuron, you can simulate the electrical resistance of the synapses, you can make an exact map of every neuron connection in a human brain (maybe after a hundred years), but you can’t predict how neurons will reconnect after losing their connections through poorly active synapses.
    I think that simulating our brain just needs enormous (for now) computational power (about 10^20 FLOPS (100 exaFLOPS), or maybe more), plus solutions to the problems I mentioned (the biggest being to create the neuron connection map and to describe the algorithm for how neurons reconnect after losing their connections through poorly active synapses).

  7. Quadratic Unconstrained Binary Optimization.

    Likely one of the many classical heuristics used to solve a convex quadratic optimization problem without constraint functions on the lambdas you wish to find.

    http://portal.acm.org/citation.cfm?id=1231283

    It’s sometimes the bottleneck in AI computations, like finding decision surfaces or something like that. Since D-Wave is using the Ising model, they are likely finding the lambdas which occur when min(x’Hx + Ax) is found through their documented process, essentially solving in a QM way the classical convex QP problem without constraints. Apologies if I got it wrong.

  8. QuantumGuest: yes that’s mostly right. QUBO and Ising can be mapped into each other (the only difference is whether you use binary (0,1) or spin (-1,1) variables), we use them interchangeably to describe our canonical NP-hard optimization problem.
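    The mapping is just the change of variables x = (1 + s)/2. Here is a small sketch (my own, using an arbitrary random Q) that derives the Ising couplings, fields, and constant offset from a QUBO matrix and checks that the two energies agree on every state:

```python
import itertools
import random

# A small random symmetric QUBO matrix (arbitrary toy instance).
rng = random.Random(1)
n = 4
Q = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        Q[i][j] = Q[j][i] = float(rng.randint(-5, 5))

def qubo_energy(x):
    # QUBO: x^T Q x over binary x
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Substituting x_i = (1 + s_i)/2 into x^T Q x gives an Ising energy:
#   couplings J = Q/4, fields h_i = sum_j Q[i][j] / 2, plus a constant offset.
J = [[Q[i][j] / 4.0 for j in range(n)] for i in range(n)]
h = [sum(Q[i]) / 2.0 for i in range(n)]
offset = sum(sum(row) for row in Q) / 4.0 + sum(Q[i][i] for i in range(n)) / 4.0

def ising_energy(s):
    # s_i^2 = 1, so the diagonal of J is a constant (already folded into offset)
    quad = sum(J[i][j] * s[i] * s[j]
               for i in range(n) for j in range(n) if i != j)
    return quad + sum(h[i] * s[i] for i in range(n)) + offset

# Verify the two energies agree on all 2^n states.
for bits in itertools.product([0, 1], repeat=n):
    x = list(bits)
    s = [2 * xi - 1 for xi in x]          # binary {0,1} -> spin {-1,+1}
    assert abs(qubo_energy(x) - ising_energy(s)) < 1e-9
print("QUBO and Ising energies agree on all", 2 ** n, "states")
```

    Since the offset is a constant, the two formulations have the same minimizing configurations, which is why they can be used interchangeably.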

  9. cell: Creating a brain emulator (ie an analog copy of parts of a brain) would be a really cool project. But what I am talking about here isn’t creating analog brains. I’m talking about mimicking what brains do, which is quite different. This focuses on what brains do, not so much on what they are.

  10. That would be quite an accomplishment. If we could get our heads around what brains do (laughs) then the details of a particular implementation could be left as an exercise for the interested reader. One approach mentioned by Kurzweil involved a high-resolution MRI capable of resolving details down to the scale of a single nerve axon (does not presently exist, as always). He suggests that knowing the exact topology of a specific brain–how every piece is wired to every other–and being able to compare that with other brains, would go a long way towards helping us understand what brains do. A given, I suppose. I wonder if we have enough of a grasp on the electro-chemical side of things that lacking a physical wiring diagram is the only thing holding us back? I’m sure the brain’s changeability and plasticity as a network might limit the utility of any type of snapshot mapping (unless you could replay it in VMware Cerebro).

    One thing that occurs to me: if you end up producing some sort of generic, abstract flowchart (NeuroVisio) for what a human brain does, it will necessarily include hierarchies for various survival responses and biological impulses evolved over millions of years. For instance, if you were exclusively interested in the pattern-matching capabilities of the brain’s visual system, you might not have an immediate use for the accompanying priority queue with fast triggers for issues of hunger, thirst, sex, fear, etc. (depending on your application; Testament’s “Malpractice” happens to be playing on the radio right now). Because these systems co-evolved together over a very long stretch of time, uncoupling them might not be a trivial task. Or then again, it might be–as I understand it, the auditory system’s high-priority channel to the amygdala consists of a single pathway; once broken, that “feature” goes away (along with the ability to react instantaneously to certain auditory cues without using the slower, more brain-intensive pathway; a distinct evolutionary disadvantage if you happen to look and smell like a weakly-defended, walking plate of sirloin). :)

    Cheers,
    Mike

  11. Mike: Actually the main point of my rambling was that even if we can define pretty clearly some things that the brain does (pattern matching for example) it isn’t easy to build machines to mimic those functions because the underlying problem is NP-hard…the details of a particular implementation can’t be left to the reader. This type of thing is well-known in machine vision.

    Note also that I don’t think we mean the same thing by “what the brain does”. I mean that if you treat it like a black box and spec its input/output functions, you get some set of capabilities. What’s actually going on inside the black box isn’t relevant at that level.

    You get radio stations that play Testament? THE BEATINGS ARE AS COLD AS ICE!!!!! … We have the world’s worst radio stations.

  12. potter: the whole issue that feedback seems to be very important for brains is fascinating. I read somewhere that in visual processing something like this happens, where a lot more information comes from inside than from outside…. I looked for the link but I couldn’t find it… something about how most of what we perceive with vision is actually “filled in” with what we expect to see.

  13. Doh–right. (Goes off to contemplate this on The Tree of Woe).

    Lacking decent local stations, you might consider dabbling in various net broadcasts (Shoutcast and Icecast streams–Google for station directories). Quality and consistency vary quite a bit, but you’re bound to find a few that suit your rather discriminating tastes in parlour tunes.

    -M

  14. There are theories about where memory might be stored in the human brain. There is one thing I can’t agree with: that our memory is stored in synaptic connections. Why? Because scientists froze a rat’s brain to a low temperature at which there are no electrical signals. After that they warmed the rat’s brain back to normal temperature, and then concluded that our memory is stored in the synaptic connections and their conductivity. But they didn’t consider one thing: when they froze the brain, Ca ions (maybe) stopped moving into and out of the neurons, and after a neuron is unfrozen, the Ca ions also become unfrozen and go back to working as they did before, moving into and out of the neuron.
    So maybe memory is stored not in the synapses and their conductivity?

    Can a quantum computer speed up the Blue Brain project? If a quantum computer can speed up a neural network, then the answer might be “yes”, but maybe there are unexpected problems with such a task?

  15. Oh, I didn’t finish my thought about the rat. When the rat was unfrozen, it was able to recognise the labyrinth as before. So the scientists concluded that memories are stored not in electrical signals but in the connections between neurons, the synapses…

  16. “How did these scientists freeze the brain without killing the rat?”

    I don’t know; maybe they just turned the temperature down from 38 degrees to, say, 10 degrees, and at 10 degrees the electrical signals stop working…

  17. By the way, for more efficient algorithms that don’t rely on Moore’s Law, there are proposals to simulate the brain directly in hardware ( http://www.stanford.edu/group/brainsinsilicon/documents/Neurogrid_Boahen_Sept06.pdf ). But this means that today’s algorithms for simulating neural networks don’t use hardware acceleration (like, for example, transform & lighting in 3D scenes). Why don’t they? Because a neural network is unpredictable.
    Maybe quantum computing can speed up tasks like artificial neural networks (speech recognition, image recognition), but can’t, in general, speed up forming a thinking machine like a biological brain (because the neural network in the human brain isn’t a combinatorial or quantum problem; if that weren’t true, the brain would be a quantum computer). http://www.cic.unb.br/~weigang/qc/aci.html

  18. I disagree that there is a small amount of sensor data coming in through the five human senses. The amount of data coming in through the eyes alone is mind-boggling.

    One thing that all of these senses do is a lot of pre-processing of the data, just so the brain can work with it. When you start to plug visual images into a machine interface, you have to do a large amount of sensor manipulation, including a variety of filters, moving, cropping, etc. Overall, however, you still have a very large amount of information.

    With the Numenta technology, you can plug in a variety of sensors into the ImageSensor, but you still need to do a lot of pre-processing prior to feeding the data directly into the system. However, once the data is input, it does a remarkable job at finding the causes, or beliefs, in the data relatively quickly. Pattern recognition, IMO, is amazingly important!

  19. Some people are paralysed from birth. They can’t feel or move their body below the neck. I don’t know what they feel when they eat, but they have almost only two senses, hearing and sight (well, and taste, so three). They have such a small amount of input data, yet they are not as limited as primates. OK, they also have sensation in the face, but that is a small part of the whole body.
    So in general I think that to describe intellect you need only two inputs (or maybe just one of them), audition and vision, and one output: sound (voice). The vision input has 2M cones for each colour (2×3 = 6 million), and a human can recognise 7M colours (each colour channel has roughly 200 levels of brightness). So the vision input is 6M pixels, and the number of states the input can take is 6M × 7M ≈ 10^13 (because there are 200 brightness levels for each cone, and with 3 types of cone the total number of colours is 200^3 ≈ 8,000,000, over 6,000,000 cones). So the human input resolution is roughly 6 Mpixels (3000×2000) with 7 million colours (for comparison, a computer’s resolution is roughly 1.3 Mpixels (1280×1024) with 16 million colours). So the vision input is almost like a computer’s output, and there is no reason to think a human needs a very wide input to be a human (another thought is that the human brain can develop because it can feel pain and pleasure).

  20. By the way, some people may ask why we need HDR if a computer’s visual output is like the human visual input. Because the retina has roughly 1000 brightness levels as the eye adapts to brighter or darker surroundings, but this doesn’t change things, and the human input colour depth is always the same, 7,000,000 colours. So HDR is needed to create a wide range of colours and brightness and then choose only a small part of this range (the way the retina adapts to one level of brightness or another)…

    Now let’s talk about audition. Some scientists think that to hear stereo sound, the sound needs to reach your ears at different times. I think this is not true, because the sound reaches the two ears with different amplitudes, and I think this is enough to hear stereo sound.

    Another very important thing for developing human intellect may be eating and hard touch (pain receptors in the skin). Eating is very important because it is connected to the pleasure and displeasure centres in the brain. So if a developing (baby) brain gets milk, the brain feels pleasure, and if it doesn’t, displeasure (some kind of pain). This feeling of pleasure and displeasure may be very important for brain development, giving the baby a reason to search for food. Pain receptors in the skin work similarly and give an equivalent motive to do or not do something…
    Can a brain develop without feeling pain and pleasure inputs (with or without eating, pain from hard touch, and so on)? Because when you eat something, the brain feels pleasure for a long time, because there is glucose in your blood and some tester checks that this is enough for now, so you don’t feel uncomfortable without eating for a long time…
    Now let’s talk about the temperature and pressure receptors in the skin. I think those receptors are quite unimportant for developing intellect in the brain (except when the temperature or pressure reaches some limit and you feel pain…).
    Motor receptors (for movement) are also unimportant for developing intellect in the brain, because there are paralysed people with the same intellect as other people.
    So what would happen if a brain developed not inside a mother but somewhere in a laboratory, without any senses except a vision and audition input and a voice output? A partial answer to this question can be found here ( http://vesicle.nsi.edu/nomad/darwinvii.html ). Only partial, because the brain there, I think, developed inside a mother (so some input data came from the mother), and only after that was the input data poor…

  21. Geordie,

    If you want to pick a complexity class that corresponds to human intelligence, I would suggest that PSPACE is a better fit than NP. NP can be viewed as the class of all 1-player games or puzzles. PSPACE can be viewed as the set of all 2-player games (without looping). Allowing looping gets you into EXPTIME; Chess and Go have both been proven EXPTIME-complete but the proofs require setting up elaborate multiple loops that interact. Actual human play is much more PSPACE-like.

    Most evolution is co-evolution; competition and cooperation with other organisms is the crux. Having models of other “players” (agents with plans and goals of their own) seems to be hardwired into the vertebrate brain. We even see them when they’re not there (animism, paranoia, religion, etc.). NP problems have no need of this, only PSPACE or harder can explain it.

    Neural networks can be surprisingly powerful. It’s been shown, for example, that a vehicle controller which will track a light source and follow it can be built with only 2 neurons. They are also Turing-universal, as was succinctly proven by Gruau’s Pascal-to-neurons compiler. Minsky’s perceptron paper was utterly wrong.

    Howard

  22. Howard:

    Interesting… but most grand challenge problems in AI today are NP-hard optimization problems. I think you can pose most of the multi-player scenarios vertebrates deal with as some set of optimization problems… for example, if you need to determine whether the thing running at you is a tiger or a goat, you’re solving an object recognition problem that’s most naturally stated as an NP-hard optimization problem. Most perception problems (vision, natural language, speech, OCR) are the same: you need to be able to match a collection of instances in the thing your senses are feeding you to a collection of instance classes in your brain.

    I don’t agree that we have good models of other “players”. I think we’re very good at identifying them and what they’re doing, and using this knowledge for some minimal inference, but the core of what we call human intelligence is really just as simple as being able to quickly compare two complex objects.

  23. Oh and the evolutionary basis for religion is probably pretty simple and doesn’t have anything to do with computational capability of the brain etc. There’s a clear survival advantage for young children doing what they’re told without understanding why. Not being evidence-based is clearly good for you when you’re learning about how to interact with a dangerous world. If you say “don’t go near the crocodiles or they’ll eat you” and your kid is a scientist they will test whether your statement is in fact correct and get eaten. If they take it “on faith” they live. This will select for a species that believes things without questioning them.

    If you take a young child and repeatedly tell them ANYTHING they will believe it no matter how ridiculous…. and these things are very hard to shake when the kid grows up. The fact that religion is the direct by-product of natural selection is a pretty cool irony.

  24. I agree with you on Ray Kurzweil in that sheer horsepower is not going to produce human-equivalent intelligence. But human intelligence is not always precise, in fact it rarely is. Unless you are a mathematician or an accountant, “good enough” is the heuristic. The mind must choose to be precise. I constantly hear otherwise well educated people apply fallacious reasoning to poor evidence. The reasoning “post hoc ergo propter hoc” is very widely used. When I consider the Turing Test, I believe that software emulating a more average person – i.e. IQ no more than one standard deviation from the mean – would be far more convincing than a brilliant one.
