Brainstorm 6: All change

In my last Brainstorming session I was musing on associations, and asked myself what is being associated with what such that a brain can make a prediction (and hence perform simulations). A present state is clearly being associated with the state that tends to follow it, but what does that mean? It’s obvious for some forms of information but a lot less obvious for others, and for the general case. Learning that one ten-million-dimensional vector tends to follow another is neither practical nor intelligent – it doesn’t permit generalization, which is essential. Something more compact and meaningful is happening.

If the brain is to be able to imagine things, there must be a comprehensive simulation mechanism, capable of predicting the future state in any arbitrary scenario (as long as it’s sufficiently familiar). If I imagine a coffee cup in my hand and then tilt my imaginary hand, the cup falls. I can even get a fair simulation of how it will break when it hits the floor. If I imagine myself talking to someone, we can have a complete conversation that matches the kinds of thing this person might say in reality – I have a comprehensive simulation of their own mind inside mine. It’s comparatively easy to see how a brain might predict the future position of a moving stimulus on the retina, but a lot less obvious how this more general kind of simulation works. Coffee cups don’t have information about how they fall built into their properties, nor do they fall on a whim. Somehow it’s the entirety of the situation that matters – the interaction of cup and hand – and knowledge of falling objects in general (as well as the physical properties of pottery) somehow gets transferred automatically into the simulation as needed.

Pierre-Simon Laplace once said: “An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed … the future just like the past would be present before its eyes.” In other words, if you know the current state of the universe precisely then you can work out its state at any time in the future. He wasn’t entirely right, as it happens – if Laplace was himself that intellect, then he would also be part of the universe, and so the act of gathering the data would change some of the data he needed to gather. He could never have perfect knowledge. And we know now that the most infinitesimal inaccuracy will magnify very rapidly until the prediction is out of whack with reality. But even so, in practical terms determinism works. If our artificial brain knew everything it was capable of knowing about the state of its region of the universe (in other words, the value of a ten-million-dimensional vector) then it would have enough knowledge to make a fair stab at the value of this vector a short while later. If that weren’t true, intelligence wouldn’t be possible.

But Laplace had a very good point when he mentioned “all forces that set nature in motion.” It’s not just the state of the world that matters, but the rate and direction of change. It’s an interesting philosophical question, how an object can embody a rate of change at an instant in time (discuss!). It has a momentum, but that’s dodging the issue. Nevertheless, change is all-important, and real brains are far more interested in change than they are in static states. In fact they’re more-or-less blind to things that don’t change – quite literally. If you can hold your eyes perfectly still when focusing on a fixed point, you’ll go temporarily blind in a matter of seconds! Try it – it’s not easy but it can be done with practice and it’s quite startling.

Getting preoccupied with recognizing objects and the like doesn’t help me with this question of prediction, and vision is misleading because it’s essentially a movement-detection system that has been heavily modified by evolution to make it possible to establish facts about things that aren’t moving. The static world is essentially transformed into a moving one (e.g. through microsaccades) before being analyzed, in ways we don’t yet understand and perhaps never will, unless we understand how change and prediction are handled more generally. So how about our tactile sense? Maybe that’s a good model to think about for a while?

Ok, I’ll start with a very simple creature – a straight line, with touch sensors along its surface. If I touch this creature with my finger one of the sensors will be triggered (because its input has changed), but will soon become silent again as the nerve ending habituates. At this point the creature can make a prediction, but not a very useful one: my finger might move left or it might move right. It can’t tell which at first, but if my finger starts to move left, it can immediately predict where it’s going to go next. It’s easy to imagine a neuron connected to a pair of adjacent sensors, which will fire when one sensor is triggered before the other.

Eureka! We have a prediction neuron – it knows that the third sensor in the line is likely to be triggered shortly. In fact we can imagine a whole host of these neurons, tuned to different delays and hence sensitive to speed. Each one can make a prediction about which other sensors are likely to be touched within a given period. We can imagine each neuron feeding some kind of information back down to the sensor that it is predicting will be touched. The neurons have a memory of the past, which they can compare to the present in order to establish future trends. The more abstract this memory, the more we can describe it as forming our present context. Context is all-important. If you’ve ever woken from a general anesthetic, you’ll know that it takes a while to re-establish a context – who you are, where you are, how you got there – and until you have this you can’t figure out what’s likely to happen next.
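
To pin this down, here’s a minimal sketch in Python. Everything in it – the class name, the sensor count, the delay tunings – is illustrative invention rather than a claim about real neurons: each unit watches a pair of adjacent sensors and, when they fire in sequence at its preferred delay, predicts which sensor should be touched next and when.

```python
# Toy sketch of the 1-D touch creature (all names and numbers are
# illustrative inventions). Each prediction unit watches a pair of adjacent
# sensors; if they fire in sequence at its preferred delay, it predicts
# which sensor should be touched next, and when.

class PredictionNeuron:
    def __init__(self, first, second, delay):
        self.first = first      # sensor expected to fire first
        self.second = second    # sensor expected to fire 'delay' ticks later
        self.delay = delay      # preferred interval: this tunes it to a speed

    def step(self, history, t):
        """If my pair fired in order at my delay, predict the next touch."""
        if t - self.delay < 0:
            return None
        if history[t - self.delay][self.first] and history[t][self.second]:
            nxt = self.second + (self.second - self.first)  # same direction
            return (nxt, t + self.delay)                    # (sensor, when)
        return None

N_SENSORS = 10
# One unit per adjacent pair, per direction, per delay tuning (1-3 ticks).
units = [PredictionNeuron(a, b, d)
         for a in range(N_SENSORS)
         for b in (a - 1, a + 1) if 0 <= b < N_SENSORS
         for d in (1, 2, 3)]

# A finger stroking rightward at one sensor per tick:
history = [[s == t for s in range(N_SENSORS)] for t in range(5)]
for t in range(5):
    for u in units:
        prediction = u.step(history, t)
        if prediction and 0 <= prediction[0] < N_SENSORS:
            print(f"t={t}: expect sensor {prediction[0]} at t={prediction[1]}")
```

Run it and the correctly tuned unit fires at each tick, handing its expectation one sensor ahead of the finger.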

So far, so good. We have a reciprocal connection of the kind that seems to be universal in the brain. We can imagine a further layer of neurons that listen to these simpler neurons and develop a more general sense of the direction and speed of movement, which is less dependent on the actual location of the stimulus. By the time we get a few layers deep, we have cells that can tell us if the stroking of my finger is deviating from a straight line (well, we could if my simplified creature wasn’t one-dimensional!).

But what’s the point of feeding back this information to the sensory neurons themselves? The first layer of cells is telling specific sensory neurons to expect to be touched in a few milliseconds. Big deal – they’ll soon find out anyway. Nevertheless, two valuable pieces of information come out of this prediction:

Firstly, if a sensory neuron is told to expect a touch and the touch doesn’t arrive, we want our creature to be surprised. Things that behave according to expectations can usually be safely ignored; we only want to be alerted to things that don’t do what we were expecting. Surprise gives us a little shock – it causes a bunch of physiological responses. We may get a little burst of adrenaline, to prepare us in case we need to act, and our other sensory systems get alerted to pay more attention to the source of the unexpected change (this is called an “orienting response”). Neurons higher up in the system are thus primed and able to make decisions about what, if anything, to do about this unexpected turn of events. The shock will ripple up the system until something finally knows what to do about that sort of thing. Most of the time this will be an unconscious response (like when we flick an insect off our arm), but sometimes nothing will know how to deal with the event, and consciousness needs to get in on the act.
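
As a toy illustration of that comparison (all names and numbers invented), each sensor can keep a diary of the predictions handed down to it and raise a surprise signal whenever expectation and reality disagree in either direction:

```python
# A toy version of the surprise comparison (names and numbers invented):
# each tick, the predictions that fell due are compared with what actually
# happened, and any mismatch in either direction raises a surprise signal.

from collections import defaultdict

expected = defaultdict(set)      # tick -> sensors told to expect a touch

def tell(sensor, when):
    """A prediction neuron warning a sensor that a touch is due."""
    expected[when].add(sensor)

def step(t, touched):
    """Compare expectation with reality; return the surprising sensors."""
    predicted = expected.pop(t, set())
    misses = predicted - touched     # promised a touch; none came
    novelties = touched - predicted  # touched without any warning
    surprises = misses | novelties
    if surprises:
        # In the creature this would ripple upward: adrenaline, an
        # orienting response, attention aimed at that patch of skin.
        print(f"t={t}: SURPRISE at sensors {sorted(surprises)}")
    return surprises

tell(3, when=2)
step(2, touched={3})      # prediction confirmed: safely ignored
tell(4, when=3)
step(3, touched=set())    # promised touch never arrived -> surprise
step(4, touched={7})      # unheralded touch -> surprise, orienting response
```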

Secondly, once we have a hunch about where the stimulus is going to show up next, we can start to look further ahead to where it is likely to be heading. The more often our low-level predictions are confirmed, the more confident we can be, and the more time we’ve had in which to make this ripple of predictive activity travel ahead of the stimulus, to figure out what might happen in a few moments’ time. Perhaps my finger is stroking along the creature towards a tender spot that will hurt it; perhaps it’s moving in the other direction, towards the creature’s mouth, where it has a hope of eating my finger. Pain or pleasure gets predicted, and behavior results whenever one or the other seems likely.
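
In sketch form, this look-ahead is just the one-step predictor iterated. The valenced locations below are made-up values, but they show how a confirmed direction and speed becomes a forecast of pain or pleasure:

```python
# Toy sketch of the ripple running ahead of the stimulus. The valenced
# locations ("mouth", "tender spot") are arbitrary made-up values; the point
# is just that iterating the one-step predictor turns a confirmed direction
# and speed into a forecast of pain or pleasure a few moments away.

N_SENSORS = 10
VALENCE = {0: "mouth (pleasure)", 9: "tender spot (pain)"}  # assumed layout

def look_ahead(position, velocity, steps):
    """Roll the one-step prediction forward; report any valued outcome."""
    for k in range(1, steps + 1):
        position += velocity
        if not 0 <= position < N_SENSORS:
            return None
        if position in VALENCE:
            return k, VALENCE[position]
    return None

# Finger confirmed at sensor 6, moving one sensor per tick to the right:
forecast = look_ahead(position=6, velocity=1, steps=5)
if forecast:
    ticks, outcome = forecast
    print(f"predicted in {ticks} ticks: {outcome}")   # time to act!
```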

We have to presume that all of this stuff wires itself up through experience – by association. The first layer of prediction neurons learns when the sensor it is associated with is about to be touched, by picking up the statistical relationships between the states of neighboring sensors. These first-level neurons presumably cooperate and compete with each other to ensure that each one develops a unique tuning and all possible circumstances get represented (this is exactly homologous, IMHO, to what happens in primary visual cortex, with edge-orientation/motion-sensitive neurons). The higher layers, which make longer-term predictions, learn to associate certain patterns of movement with pain or pleasure. The most abstract layers are presumably capable of learning that certain responses maximize pleasure or minimize pain.
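
A crude sketch of that self-wiring, assuming nothing more sophisticated than counting co-occurrences as a stand-in for whatever Hebbian-style rule real synapses implement (the strokes here are invented data):

```python
# Crude sketch of the first layer wiring itself up by association: simply
# count which sensor tends to follow which (a stand-in for whatever
# Hebbian-style rule real synapses use). The strokes are invented data.

import random
from collections import Counter

N_SENSORS = 10
transitions = Counter()       # (sensor_now, sensor_next) -> co-occurrences

def observe(stroke):
    """Accumulate the statistics of one stroke of the finger."""
    for now, nxt in zip(stroke, stroke[1:]):
        transitions[(now, nxt)] += 1

# Experience: mostly rightward strokes, occasionally leftward ones.
random.seed(1)
for _ in range(100):
    start = random.randrange(N_SENSORS - 3)
    observe([start, start + 1, start + 2, start + 3])
for _ in range(10):
    start = random.randrange(2, N_SENSORS)
    observe([start, start - 1, start - 2])

def predict(sensor):
    """The learned association: which sensor most often fires next?"""
    nexts = {nxt: c for (now, nxt), c in transitions.items() if now == sensor}
    return max(nexts, key=nexts.get) if nexts else None

print(predict(4))   # almost certainly 5: rightward motion dominates
```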

Leaving aside the question of how these responses get coordinated, we now have a complete behavioral mechanism. And it’s NOT a stimulus-response system. The behavior is being triggered by predictions of what is about to happen, not what has just happened (this is a moot point and you may object that the system is still responding to the past stimuli, but I think an essential threshold has been crossed here and it’s fair to call this an anticipatory mechanism).

It’s clear that somehow the prediction needs to be compared to reality, with surprise generated if they don’t match, and it’s clear that predictions need to be able to associate themselves with reward. Somehow predictions also need to take part in servo action – actions are goal-directed, and hence are themselves predictions of a future state. Comparing what your sensors predict is going to happen with what you intend to happen is what allows you to make anticipatory changes and bring reality into line with your intentions. I need to think about that a bit, though.
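
Here, hedged appropriately, is a toy model of what such a servo might amount to – the intention is itself a predicted future state, and action is driven by the anticipated gap rather than the past error. The gain constant and the little physics are arbitrary:

```python
# Hedged toy model of servo action: the intention is itself a prediction of
# a future state, and the action is driven by the gap between the sensory
# forecast and the intended outcome. Gain and "physics" are arbitrary.

def servo_step(intended, predicted, gain=0.5):
    """Correct in proportion to the anticipated error, not the past one."""
    return gain * (intended - predicted)

state, velocity = 0.0, 2.0    # e.g. a hand drifting at 2 units per tick
intended = 10.0               # the goal: a predicted/desired future state

for t in range(8):
    forecast = state + velocity                  # what my sensors say will happen
    velocity += servo_step(intended, forecast)   # anticipatory correction
    state += velocity
    print(f"t={t}: state={state:.2f}")           # homes in on the intention
```

Because the controller acts on the forecast, it starts correcting before the error has actually happened – which is the anticipatory flavor I’m after.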

But what about the ability to use this predictive mechanism to imagine possible futures? We presumably now have the facility to imagine a high-level construct, such as “let’s suppose I’m feeling someone stroke my skin” and actually feel the stroke occurring, as these higher-level neurons pass down their predictions to lower levels at which individual touch sensors are told to expect/pretend they’ve been stimulated. Although obviously this time we shouldn’t be surprised when nothing happens! The surprise response needs to be suppressed, and somehow the predictions ought to stand in for the sensations. That has implications for the wiring and all sorts of questions remain unresolved here.
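
A sketch of what those two modes might look like – the flag and the class are pure invention, not a proposal about the wiring:

```python
# Sketch of the two modes (the flag and class are pure invention, not a
# claim about real wiring): top-down predictions drive the same low-level
# unit either way, but imagining suppresses surprise and lets the
# prediction stand in for the absent sensation.

class SensorUnit:
    def __init__(self):
        self.expected = False

    def top_down(self, expect):
        """Higher layers telling this unit a touch is due."""
        self.expected = expect

    def step(self, touched, imagining=False):
        if imagining:
            # No real input: the prediction itself is experienced as a
            # stand-in sensation, and the alarm is switched off.
            return "imagined touch" if self.expected else "nothing"
        if self.expected and not touched:
            return "SURPRISE: expected touch never came"
        return "touch" if touched else "nothing"

unit = SensorUnit()
unit.top_down(True)
print(unit.step(touched=False))                   # perceiving: surprise!
unit.top_down(True)
print(unit.step(touched=False, imagining=True))   # imagining: felt, no alarm
```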

It’s much harder, though, to see how we can assemble an entire context in our heads – the hand and the coffee cup, say. Coffee cups only fall when hands drop them. Dropping something only occurs when a hand is placed at a certain set of angles. A motor action is associated with a visual change, but only in a particular class of contexts, and the actual visual change is also highly context-dependent: if a cup is in your hand, that’s what you’ll see fall. Remarkably, if you imagine holding a little gnome in your hand instead, what you’ll see is a falling gnome, not a falling cup, even if you’ve never actually dropped a minuscule fantasy creature before in your life! In fact your imaginary gnome may even surprise you by leaping to safety! Somehow the properties of objects are able to interact in a highly generalizable way, and these interactions can trigger mental imagery, which eventually trickles down to the actual sensory system as if it had really occurred (there are several lines of evidence to suggest that when we imagine something we “see” it using the same parts of our visual system that would be active if we’d really seen it).

Somehow the brain encodes cause and effect, at many levels, in a generalizable way. Complex chains of inference occur when we mentally decide to rotate our hand and see what happens to the thing it was holding, and the ability to make these inferences must arise from statistical learning that is designed to predict future states from past ones.

And somehow I have to come up with just such a general scheme, but at a level of abstraction suitable for a game. My creatures are not going to be covered in touch sensors or see the world in terms of moving colored pixels. It’s a shame really, because I understand these things at the low level – it’s the high level that still eludes me…

P.S. This post got auto-linked to a post on the question of why we can’t tickle ourselves (I’m assuming you’re not schizophrenic here; schizophrenics often can tickle themselves, so if you are, you won’t know what I’m talking about!). We can’t tickle ourselves because our brain knows the difference between things we do and things that get done to us (self/non-self determination). If we try to tickle ourselves, we predict there will be a certain sensation and this prediction is used to cancel out the actual sensation. It’s pretty important for an organism to differentiate between things it does to the world and things the world does to it (bumping into something feels the same as being bumped into, but the appropriate responses are different). So here’s another pathway that requires anticipation, and another example of the brain as a simulation engine.
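
As a toy version of that cancellation (the standard forward-model story, with arbitrary magnitudes): an efference copy of our own motor command generates a predicted sensation, which is subtracted from the actual input.

```python
# Toy version of the tickle story (magnitudes arbitrary): a forward model
# fed by an efference copy of our own motor command predicts the sensation,
# and the prediction is subtracted from the actual input. Self-produced
# touch mostly cancels; touch from the outside world comes through intact.

def perceived(actual, predicted):
    """What survives once the forward model's prediction is cancelled out."""
    return max(0.0, actual - predicted)

self_tickle = perceived(actual=1.0, predicted=0.75)   # we moved, so we knew
being_tickled = perceived(actual=1.0, predicted=0.0)  # the world moved
print(self_tickle)     # 0.25 -> barely felt: we can't tickle ourselves
print(being_tickled)   # 1.0  -> full ticklish intensity
```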



13 Responses to Brainstorm 6: All change

  1. Graham Glass says:

    Hi Steve,

    I think the scheme you describe applies uniformly to all kinds of associations, not just those that are temporal.

    Optical illusions are one example of this, where the brain sees a portion of a line and then fills in the blank spots based on geometric associations – something that isn’t related to temporal associations at all.

    One last thing re: temporal relationships; these work backwards as well as forwards. For example, if you see a lightning flash, your brain will tend to imagine what happened before as part of its mental model.

    Keep up the good work!

    Cheers,
    Graham

    • stevegrand says:

      Thanks Graham. Yes, that’s true (although maybe filling-in is itself a temporal prediction, internally?).

      And it’s very true that you can back-project from an experience to the plausible causes. Yeesh, that adds another thing for me to think about!

  2. Graham Glass says:

    At an even more abstract level, the brain seems to strive for mental states that are both consistent and valued.

    The brain injects “now” states via sensory input as well as having mechanisms for generating possible states (a.k.a. imagination) in space and time.

    The states generated by your senses “anchor” your brain in reality, at least in the sense that any current/past/future states it represents should at least be internally consistent with the “now”.

    As far as I can tell, the brain treats the past, present and future *exactly* the same, and also strives for consistency in any kind of association (temporal or otherwise).

    As you can tell, I love this kind of stuff!

    Cheers,
    Graham

  3. bill topp says:

    You wonder at how the brain can anticipate the future, because you can imagine the future so clearly from any one of a nearly infinite set of starting points. How can all these imaginings possibly be stored, ready for use? Perhaps you’re stumbling over the difference between the part of your brain that instantly anticipates and the part of your brain that leisurely imagines. It would not surprise me if anticipation is a good deal more primitive – i.e. lacking in details and specifics – than imagination. For instance, your near-instant anticipation is probably the same for a) a rock hurtling towards your head and b) a tetherball hurtling towards your head. However, your imagination would play these two instances very differently.

    Once your fertile brain has time to start imagining, the point of anticipation is past. You may not be anticipating nearly as much specific detail as your conscious mind believes. You anticipate and react to something hurtling towards you, and then your brain begins to analyze what is likely to happen in the future. You have only one anticipation for the infinite number of possible things hurtling through the air aimed at your head.

    • stevegrand says:

      I think you’re right to an extent – there are anticipatory mechanisms in subcortical areas and even in the retina, and they’re very primitive, as you say. At the other extreme we have conscious, deliberate thought. But that must play out through more primitive systems. If we visualize something, our visual cortex is active, right down to V1 (at least one experiment has shown this). So in order to “see” something we choose to imagine, we clearly have to employ the circuitry that we use for real perception. And if we’re imagining something changing through time, then that must surely be making use of primitive associations.

      We don’t DEDUCE the fall of an imaginary cup as an abstract concept – we actually see it happen (well, some of us do – not everyone has a particularly visual mind). I can visualize whole machines, complete with working parts, and then imagine making some change to the machine. I SEE what happens as a result, so there must be a real working simulation of that machine in my head. If this process isn’t making use of the accumulated knowledge of how things move in space, and how one thing leads to another, that we acquired for other reasons – prediction and perception – then we’d have to have a whole other brain, just for imagining things with. But even then, we’d need a mass of quite primitive knowledge of cause and effect, right down to knowing what a moving edge looks like next, given how it looks now.

      So I agree that some of these anticipatory mechanisms are pure reflexes and inaccessible to thought, but I’m sure that most are available for use by higher levels in order to construct plans and speculations at leisure. The prefrontal cortex coordinates these simulations but it has to rely on smaller nuggets of knowledge that stretch right back through cortex to carry out the details. I’m pretty sure, anyway. My overall hunch here is that the brain’s ability to make low-level predictions is what made imagination (and therefore consciousness) possible.

  4. Rafael C.P. says:

    “actions are goal-directed, and hence are themselves predictions of a future state. Comparing what your sensors predict is going to happen, to what you intend to happen, is what allows you to make anticipatory changes and bring reality into line with your intentions.”

    Forward + Inverse models (Jordan & Rumelhart, 1992 – http://www.inf.ed.ac.uk/teaching/courses/mlsc/Notes/Lecture11/jordan-CS92.pdf)! This is half-way to imitation, because “actions” (goals) and perceptions are represented in the same way. This is part of “Common Coding Theory” and “Ideomotor Theory”. I think this may apply to almost your entire post and may help to concretize your ideas!

    • stevegrand says:

      Thanks. Yes, what I’m talking about is certainly ideomotor theory, at least in its psychological form (wasn’t it William James who first proposed the concept?).

      As for the connectionist literature, though, I’m rather lukewarm about all that. It’s a relevant paper, but you’ll have to forgive me if I pretty much ignore it and go on to naively reinvent wheels! What I’m interested in is WHOLE organisms, and my experience is that the reduced problem domains beloved of connectionism rarely translate very well into artificial biology. There are too many other variables – too many practical twiddly bits that trip the nice, neat theories up or end up requiring special cases because they don’t fit the core theory. If there are already theories out there that can be applied wholesale to the creation of artificial organisms that people can relate to and care about (which is my primary objective) then I’m probably wasting my time, but I think my requirements are rather unusual. So I prefer to think things through from first principles, rather than get bogged down in the literature. Is that arrogant of me? I do hope not. It is at least a luxury that I can allow myself, because I’m just a mortal games programmer and not an academic. But I have a real aversion to other people’s theories. I love other people’s data, but that’s different. Nevertheless, whatever I come up with, if anything, it’ll be nice to have some authorities I can cite! Thanks.

  5. Jason Holm says:

    “If I imagine myself talking to someone, we can have a complete conversation that matches the kinds of thing this person might say in reality – I have a comprehensive simulation of their own mind inside mine.”

    “Somehow the properties of objects are able to interact in a highly generalizable way, and these interactions can trigger mental imagery, which eventually trickles down to the actual sensory system as if they’d really occurred (there are several lines of evidence to suggest that when we imagine something we “see” it using the same parts of our visual system that would be active if we’d really seen it).”

    Why We Believe in Gods – Andy Thomson

    • stevegrand says:

      Great talk! Thanks for that.

      All this talk of evolved modules is an interesting and subtle area. It’s clear that we do have circuitry that makes, e.g., empathy or attachment possible. It’s also clear that these facilities aren’t present in many of our ancestors and hence evolved. But genes make proteins, not circuits. Circuits come about because of the interaction between proteins and the environment. The developmental environment of humans is pretty consistent between individuals, therefore we all tend to develop similar mechanisms, prompted by genes. But this doesn’t mean that these circuits would arise in the absence of this developmental environment, and therefore they don’t necessarily require highly specialized wiring from the ground up – they’re emergent and can be encouraged to develop by quite small genetic influences.

      I just wanted to state a position on this, because a lot of my colleagues in biologically-inspired AI believe that the brain is purely a collection of specialized modules, each of which evolved independently. They would therefore argue that there is no generality to intelligence and no generality to the architecture of the brain (in flat contradiction to the cytoarchitectural evidence, which shows that the structure of cortex is remarkably consistent across its surface, and other structures show similar uniformity). Cortex is a highly generalized memory system, able to compute a huge variety of functions with a comparatively small amount of genetic nudging. I say this because some would say that the search for artificial general intelligence is pointless and we’re really a collection of highly specialized machines. Obviously I don’t agree.

      Andy Thomson talked about evolved modules but didn’t explicitly claim that these are genetically hard-wired. Some evolutionary psychologists do. It’s an interesting debate, but in the meantime I’m going to continue to search for generalizable intelligent architectures (while recognizing the need for some genetically-induced heterogeneity). Just thought I’d mention it!

      • Ben Turner says:

        Hi Steve – I also dislike the thought of the brain as a “collection of specialized modules”, and even more, the concept that they even COULD have evolved independently (obviously, we’re speaking here more specifically about cortex. There certainly are genetic and evolutionary distinctions in terms of function between, say, thalamus, cortex, striatum, cerebellum, etc. To a degree, these can be thought of as specialized modules that have evolved separately, although absolutely not independently–after all, how many theories posit that some human ancestor developed cortex, while some other had only the pituitary gland, and then they got together and whammo… This also isn’t to say that, especially given their co-evolution, these “older” structures don’t nonetheless serve substantially different purposes in humans than they did 10 million years ago in our common ancestor with chimps, nor that they aren’t plastic even over the course of a human lifetime).

        However, I wonder what your thoughts are on the fact that, although there is certainly a tremendous amount of variation, pretty much everyone’s brain looks the same. You could bring me a Kombai tribesman and I would bet a considerable sum of money that if you flash a checkered disk at him, I’ll see areas in the back part of his brain light up (he may also shoot you with a poison dart, so please don’t try this at home). Given the homogeneity in cytoarchitecture across the cortex (excluding for the moment things like piriform cortex, which does seem to be a bit off), why is it that everyone’s FFA is in more or less the same place? I don’t know much about genetics, honestly, so there may be a simple answer here, such as that when the glia guide neurons during development, they know to send the neurons with the face-recognizing genes to the FFA. However, this view, implying a neuron-specific genetic predisposition for faces – even aside from the debate about what the FFA actually does – is deeply unsatisfying. But the alternative, that all cortex does more or less the same thing and simply acquires functions driven by environment, does have this problem of explaining the relative homogeneity of brains across individuals.

        P.S. – I read this article at some point, although apparently the neurons responsible for remembering its details have been recycled… anyhow, it seems relevant: http://www.cell.com/neuron/retrieve/pii/S0896627307007593

      • stevegrand says:

        Hi Ben,

        Thanks – great observations, as always!

        I’ll take a stab at a couple of assertions for discussion:

        a) I’d say that everyone’s universe is extremely similar, whether you’re from London or the Trobriand Islands. Gravity works the same way, so does friction, so does your arm, so do objects in general. The environmental differences are pretty subtle in comparison. So if the brain is a self-organizing system then we’d expect each instance of it to draw much the same conclusions as it categorizes the world, at least at a gross level. Especially if our understanding of the world is hierarchical and develops from the sensorimotor world inwards.

        b) I don’t deny that genetics plays a big part, but genes nudge and cajole things into existence, taking advantage of the regularity of physical laws to provide much of the information, and I suspect some of this nudging need make only small changes to create big functional differences. A shift in neuromodulators, say. Adding some oxytocin-mediated receptors would totally change the reinforcement rules for a region and give it a predilection for learning statistical relationships relevant to bonding. Gross wiring of commissures and bundles would dictate which parts connect to which others, and hence determine what they were able to compute. The basic computation being performed would be similar throughout, but the input semantics and feedback would vary, and hence the “function” would vary correspondingly.

        Is that enough to explain it?

      • stevegrand says:

        P.S. Thanks for the link. I just scanned Dehaene and Cohen’s paper and I’d pretty much disagree with it. Not that I have any evidence, but I just have more faith in self-organizing systems than they do!

        I’d argue that, once you’ve enforced the retinotopic, tonotopic, somatotopic, etc. structure of primary sensory and motor cortex, and added in enforced mappings that may be developed elsewhere (in subcortical nuclei like the IC and SC, say), then you’ve pretty tightly constrained the eventual development of all maps, even “cultural” ones like number and letter-string representations.

        There could be any number of reasons why chimps can’t read or do arithmetic and we can. Just having enough real-estate in the right place, or an opportunistic piece of rewiring that brought two lower maps into communication would be enough. But once you have those connections and that spare cortical space, I feel sure there would be a powerful attractor determining what they end up being used for and how. Numbers are identified by vocalizations, auditory patterns, visual symbols, fingers, etc., so making use of them requires direct or suitable indirect connections to maps that can handle those things. Numbers also have sequence, and so we’d expect this to be reflected in their representation because they’re statistically more likely to be presented to us in sequences. How many different forms is this representation likely to take? Not many, I’d guess.

        I remember reading something by David Hubel once, marveling at how intricate the wiring of V1 is. But just because it ends up intricate, that doesn’t mean it has intricate blueprints.
