Brainstorm #2

Ye Gods! I’d better get in quickly with a second installment – I’ve already written more words in replies to comments than there were in my first post. Thanks so much to all of you who have contributed comments already – I only posted it yesterday! I really appreciate it and I hope you’ll continue to add thoughts and observations.

Opening up my thought processes like this is a risky and sometimes painful thing to do, and I know from past experience that certain things tend to happen, so I’d like to make a few general observations to forestall any misunderstandings.

Firstly, I know a lot of you have your own ambitions, theories and hopes in this area, and I’ll do what I can to accommodate them or read your papers or whatever. But bear in mind that I can’t please everybody – I have to follow my own path. So if I don’t go in a direction you’d like me to go, I apologize. I’ll try to explain my reasoning but inevitably I’m going to have to make my own choices.

Secondly, I do this kind of work because I believe I have some worthwhile insights already. I’m not desperately looking for ideas or existing theories – the people who invented these ideas are perfectly welcome to write their own games. This is a tricky area, because I like it when someone says “have you thought of doing XXX?” but I’m not so interested in “have you seen YYY theory or ZZZ’s work?” I just don’t work that way – I prefer to think things through from first principles – and I’m writing this game largely to develop my own ideas, rather than with the pragmatic aim of writing a commercial application by bolting together other people’s.

Lastly, I invariably develop software alone. Nobody has offered to help or asked for this to be open source yet, but I know it’s coming. I don’t do collaborations. Collaborations have driven me crazy (and almost bankrupt) in the past. I know there are loads of people who would love to be part of a project like this, but all I can suggest is that you go off together and write one, because it’s not for me. I’m opening it up because I know people find it interesting and I wanted to share the design process, but I’m not interested in working on the actual code with others. It’s just not my thing.

Oh, and I do realize this is ambitious. I know it may not work. But I’m not as naive as I look, either. I’ve written four commercial games and at least a dozen commercial titles in other fields, so I’m pretty competent in terms of software development and product design. And I’ve been working in AI since the late 1970s. Although it’s only my hobby, strictly speaking, I’m pretty well connected with the academic community and conversant with the state of the art. And I have an existence proof in Creatures, as long as you make allowances for the fact that I started writing it almost two decades ago. So don’t worry that I’m unwittingly being foolish and naive – I already know exactly how foolish I am!

Forgive me for saying these things up front – I really welcome and appreciate everybody’s support, thoughts, criticisms and general conversation. I just wanted to state a few ground rules, because it’s quite emotionally taxing to open up your innermost thought processes for inspection, and the provisional nature of everything can sometimes make it look like I’m floundering when really I’m just trucking along steadily.

Ok, so where to next? The features I mentioned yesterday were all aspects I’d like to see emerging from a common architecture. Jason admonished me to make sure I design a hierarchical brain, in which lower levels (equivalent to the thalamus and the brainstem) are fully functioning systems in their own right, and could be the complete brains of simpler animals as well as the evolutionary foundation for higher brain functions. I think this is important and a good point. The reptilian thalamus/limbic system probably works by manipulating more primitive reflexes in the brainstem. The cortex then unquestionably supervenes over these lower levels (for instance, if we deliberately wish to look in a particular direction, we quite probably do this by sending signals from the cortex (the frontal eye fields) to the superior colliculi of the midbrain, AS IF they were visual stimuli, thus causing the SC to carry out its normal unconscious duty of orienting the eyes towards a sudden movement). And finally, the prefrontal lobes of the cortex seem to supervene over an already functional set of subconscious impulses, motor and perceptual circuits in the rest of cortex, adding planning, the ability to defer reward, empathy and possibly subjective consciousness to the repertoire. So there are good reasons to follow this scheme myself.

But for now I’d like to think mostly about the cortical layer of the system. This is (perhaps) where memory plays the greatest role; where classification, categorization and generalization occur; and where prediction and the ability to generate simulations arises. I can assume that beneath this there are a bunch of reflexes and servoing subsystems that provide the outputs – I’ll worry about how to implement these later. But somehow I need to develop a coherent scheme for recognizing and classifying inputs and associating these with each other, both freely (as in “X reminds me of Y”) and causally (as in “if this is the trajectory that events have been taking, this is what I think will happen next”). Somehow these predictions need to iterate over time, so that the system can see into the future and ask “what if?” questions.
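Just to make the “iterate predictions over time” idea concrete, here’s a toy sketch in Python. The transition table is a stand-in for whatever learned, probabilistic associations the real system would store – the states and the `imagine` name are entirely made up for illustration:

```python
def imagine(state, model, steps):
    """Iterate a learned one-step prediction to peer several steps into the
    future: 'if events keep going like this, what happens next?'"""
    trajectory = [state]
    for _ in range(steps):
        state = model[state]  # most likely successor, learned from experience
        trajectory.append(state)
    return trajectory

# A toy 'memory of causality': each state maps to its usual successor.
model = {"clouds": "rain", "rain": "puddles", "puddles": "puddles"}
print(imagine("clouds", model, 3))  # ['clouds', 'rain', 'puddles', 'puddles']
```

Of course the real thing would be fuzzy and parallel, not a lookup table, but the essential loop – feed each prediction back in as the next input – is the same.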

Let’s think about classification first. The ability to classify the world is crucial. It’s insufficient for intelligence, despite the huge number of neural nets, etc. that are nothing but classifier systems, but it’s necessary.

Here’s an assertion: let’s assume that the cortical surface is a map, such that, for any given permutation of sensory inputs, there will be a set of points on the surface that come to best represent that permutation.

It’s a set of points – a pattern – because I’m assuming this is a hierarchical system. If you hear a particular voice, a set of points of activity will light up in primary auditory cortex and elsewhere, representing the frequency spectrum of the voice, the time signature, the location, etc. Some other parts of auditory cortex will contain the best point to represent whose voice it is, based on those earlier points, or which word they just said. Other association areas deeper in the system will contain the points that best represent the combination of that person’s voice with their face, etc. Perhaps way off in the front there will be a point that best represents the entire current context – what’s going on. Other points in motor cortex represent things you might do about it, and they in turn will activate points lower down representing the muscle dispositions needed to carry out this action. So the brain will have a complex pattern of activation, but it’s reasonable to assert (I think) that EACH POINT ON THE CORTICAL SURFACE MAY BEST REPRESENT SOME GIVEN PERMUTATION OF INPUTS (INCLUDING CORTICAL ACTIVITY ELSEWHERE).
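As a crude illustration of “each point best represents some permutation of inputs”, here’s a minimal best-matching-point lookup in Python. The grid, its stored patterns and the Euclidean distance metric are all placeholder assumptions for the sake of the sketch, not a claim about how the real map self-organizes:

```python
import math

def best_point(inputs, grid):
    """Return the (row, col) of the unit whose stored pattern best matches
    the current input permutation (smallest Euclidean distance)."""
    best, best_d = None, math.inf
    for r, row in enumerate(grid):
        for c, weights in enumerate(row):
            d = math.dist(inputs, weights)
            if d < best_d:
                best, best_d = (r, c), d
    return best

# A hypothetical 3x3 map whose units store example two-input patterns.
grid = [[[r / 2.0, c / 2.0] for c in range(3)] for r in range(3)]
print(best_point([0.9, 0.1], grid))  # -> (2, 0), the nearest stored pattern
```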

The cortex would therefore be a map of the state of the world. This is a neat assumption to work with, because it has several corollaries. For one thing, if the present state of the world is mapped out as such a pattern, then the future state, or the totally imagined state, or the intended state of the world can simultaneously be mapped out on the same real estate (perhaps using different cells in the same cortical columns). Having such a map allows the brain to specify world state in a variety of ways for a variety of reasons: sensation, perception, anticipation, intention, imagination and attention. Each is a kind of layer on the map, and they can be presumed to interact. So, for instance, the present state and recent past states give rise to the anticipated future state, via memories of probability derived from experience. Or attention can be guided by the sensory map and used to filter the perceptual or motor maps.
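The “attention as a filter over the map” idea could be sketched as simple gain modulation – one layer of the map multiplying another. This is only a cartoon of how two layers might interact, with made-up numbers:

```python
def attend(base_activity, attention):
    """Modulate one map layer by another: each point's activity is scaled
    by an attention weight defined over the same set of points."""
    return [b * a for b, a in zip(base_activity, attention)]

# Underlying activity on a strip of the map, and a spotlight over the middle.
base      = [0.2, 0.8, 0.9, 0.8, 0.2]
spotlight = [0.1, 0.5, 1.0, 0.5, 0.1]
print(attend(base, spotlight))  # the middle survives, the flanks are damped
```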

A second corollary might be that SIMILAR PERMUTATIONS TEND TO BE BEST REPRESENTED BY CLOSE NEIGHBORS. If this is true, then the system can generalize, simply by having some fuzziness in the neural activity pattern. If we experience a novel situation, it will give rise to activity centered over a unique point, but this point is close to other points representing similar, perhaps previously experienced situations. If we know how to react to them, we can guess that this is the best response to the novel situation too, and we can make use of this knowledge simply by stimulating all the points around the novel one.
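Here’s a sketch of that “stimulate all the points around the novel one” style of generalization: responses stored at known points are blended with weights that fall off as a Gaussian of map distance. The Gaussian and its width are assumptions made for illustration:

```python
import math

def generalize(novel, known, sigma=1.0):
    """Blend the responses stored at nearby map points, weighting each by a
    Gaussian of its distance from the novel point."""
    num = den = 0.0
    for (r, c), response in known.items():
        w = math.exp(-((r - novel[0]) ** 2 + (c - novel[1]) ** 2) / (2 * sigma ** 2))
        num += w * response
        den += w
    return num / den

# Known reactions at two map points; novel situations fall in between.
known = {(0, 0): 1.0, (4, 0): 0.0}
print(generalize((1, 0), known))  # dominated by the nearby (0, 0) response
print(generalize((2, 0), known))  # exactly midway: 0.5
```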

When I say these are points on the cortical surface, I mean there will be an optimum point for each permutation, but the actual activity will be much more broad. I have a strong feeling that the brain works in a very convolved way – any given input pattern will activate huge swathes of neurons, but some more than others, such that the “center of gravity” of the activity is over the appropriate optimum point. I showed with Lucy that such large domes of activity can be used for both servoing and coordinate transforms (e.g. to orient the eyes and head towards a stimulus depending on where it is in the retinal field – a transform from retinal to head-centered coordinates). Smearing out the activity in this way also permits generalization, as above. But it’s a bummer to think about, because everything’s blurry and holographic!
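The “center of gravity” readout of a broad dome of activity can be sketched in a few lines – this is just population-vector style decoding over a one-dimensional strip of units, with a made-up dome:

```python
def centre_of_gravity(activity):
    """Read off the 'optimum point' as the activity-weighted mean position
    along a 1-D strip of units."""
    return sum(i * a for i, a in enumerate(activity)) / sum(activity)

# A broad, blurry dome of activity still decodes to a precise location.
dome = [0.1, 0.5, 1.0, 0.5, 0.1]
print(centre_of_gravity(dome))  # ~2.0, the position under the dome's peak
```

Note that the answer is more precise than any single unit – the blur is a feature, not a bug, which is partly why I’m willing to live with everything being holographic.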

I have some nagging issues about all this but for now I’ll run with it. It’s a neat mechanism, and if biology doesn’t work this way then it damn well ought! It’s a good starting point, anyway. Lots of things fall out of it.

And I already have a mechanism that works for the self-organization of primary visual cortex and may be more generally applicable to this “classification by mapping” scheme. But that, and some questions and observations about categories and the collapse of phase space, can wait for next time!

EDIT: Just a little footnote on veracity: I like to be inspired by biology but this doesn’t mean I follow it slavishly. So if I assert that perhaps the cortex acts like a series of overlaid maps, I’ll have done so because it’s plausible and there’s some supportive evidence. But please remember that this is an engineering project – I’m not saying the cortex DOES work like this; only that it’s reasonably consistent with the facts and provides a useful hunch for designing an artificial brain. It’s a way of inventing, not discovering. So sometimes I say cortex and mean the real thing, and sometimes I’m talking about my hypothetical engineered one. I ought to use inverted commas really, but I hope you’ll infer the distinction.

8 Responses to Brainstorm #2

  1. Jason Holm says:

    This reminds me of a diagram I drew up a few weeks ago after listening to a TED Talk (I think) about theories of mind:

    1. You have a box. Ignoring discussions of quantum mechanics and Schrodinger’s Cat, there is something precise in the box, be it an object, air molecules, or a vacuum. This is reality.

    2. Lower animals have little to no thought as to what is in the box. They can’t see it, and unless they have experience with similar boxes in the past, they have no reason to bother with the box.

    3. Higher animals have abstract thinking and curiosity — it doesn’t really matter what is in the box for real, because they have their own opinions on what is in the box — that model of the world in their mind on which they act. Opening the box may confirm or refute their opinion with reality, but even then, there are people who still believe their own perception of things even after seeing the real world.

    4. Now we’re in human territory, the area that folks like my autistic son have issues with — understanding that along with what is REALLY in the box, and what we THINK is in the box, that other people may have a completely different THIRD opinion on what is in the box. That understanding that other individuals carry their own model of the world separate from our own, and separate from reality.

    5. Finally, there’s that area that some chimps and most humans have – knowing that other people have models in their head which include possible models of other people’s heads within it. It’s the whole reason dramatic stories work — we see the main character on screen, and the character sees a guy and girl interacting, and based on what she sees and thinks, she adjusts her actions. “Maybe the other girl is fed up with the guy and will leave him, so I need to act fast to catch him before some other girl does.” And we as the audience understand this opinion (“I’m sick of my boyfriend”) within an opinion (“I bet she’s sick of her boyfriend”) is within OUR opinion (“I bet she thinks the other girl is sick of her boyfriend and will try to move in”). That whole crazy layer upon layer of brain models and how it influences social actions…

    I wonder how Facade handled all that? They only had three characters to deal with (Player included) and it still took forever to process… I’m curious how far you’d be able to take this “simulated copy of the world in the head” thing — everyone talks about it but I’ve never seen anyone do it beyond a list of variables like “can I see food from where I am? Where was the last place I saw food? In all the places I’ve ever seen food, which one is the closest? How long ago did I see food in that closest place?” and then run all the logic off that.

    Good luck!

    • stevegrand says:

      That’s a fascinating topic. I have absolutely no idea how far I can go with that, but whatever happens, it will have to fall out of the neural structure and not be superimposed. The first step is figuring out how someone else’s external state can be mapped into egocentric coordinates (cf. mirror neurons). Coordinate transforms seem to me to be a recurring feature of mentation in general, and transforming someone else’s world into your space so that you can (knowingly) experience it and interpret it as they do ought just to be an extension of this general facility, when applied to the right parts of the brain. A topic for a later post, I suspect! I’d like to think about coordinate spaces as a topic in itself.

  2. Terren says:

    Hey Steve,

    First, I commend your bravery in opening up your still-evolving thoughts to the public. Hopefully it’s a win-win and not soul-crushing. :-]

    The fuzzy clustering aspect of cortical mapping reminded me of an interesting phenomenon… foot fetishes. From Wikipedia: “Neurologist Vilayanur S. Ramachandran proposed that foot fetishism is caused by the feet and the genitals occupying adjacent areas of the somatosensory cortex, possibly entailing some neural crosstalk between the two.” Gotta love Ramachandran… that guy is pure genius.

    One thing I am confused by is your proposition about attention… i.e. “Or attention can be guided by the sensory map and used to filter the perceptual or motor maps.” The way intention, anticipation, imagination, and the others fall out of your cortex ideas seems straightforward (in principle) but attention seems like a different beast to me. Actually what you’re saying makes sense to me but only if I reduce ‘attention’ to ‘stimulus filter’. Is that all you mean? I think my notion of attention is more ‘meta’ and would demand some explanation beyond mere filtering.


    • stevegrand says:

      Ramachandran did some nice work on number synesthesia, showing that people who see sequences of numbers as forming a particular shape really do have such a representation in their brain (they can perform calculations between numbers that happen to lie on adjacent parts of their internal shape faster than can be accounted for by their cardinal relationship). The foot fetish thing would fit nicely with that. What a fascinating thought! Thanks for that.

      Hmm, yes, I mentioned attention only in passing and it’s something I need to put a lot more (ahem!) attention into later. I think “filter” is ok, but I don’t mean to imply that it’s merely a sensory filter. Remember that this hypothetical representation map doesn’t just cover the sensory state of the world. This is something I should have pointed out but forgot: when I say the cortex may be a map of the state of the world, I mean the state of the world INCLUDING the state of the organism itself. Points on the surface would form best representations for the creature’s intentions and actions, as well as its perceptions (in secondary and primary motor cortex respectively). I see the prefrontal cortex as operating with a map of maps – such that its “sensory” input is the state of the rest of the cortex, rather than the state of the senses. So the pattern of activity across the cortex as a whole represents ALL that is known and knowable about the state of the sensory environment AND the creature’s internal state (and I appreciate that this is an interestingly recursive statement!). Attention is THEREFORE a filter applied to this map. Any possible kind of modulation of the creature’s sensations, perceptions, intentions, beliefs, concerns… anything you can say about the creature’s mental state MUST be representable as a pattern in this space, which modulates the underlying pattern. No matter how you define attention, it will play out as a pattern of modulation on this map. So I suspect that the more “meta” aspects of attention are just more anterior than posterior (and hence more abstract) in their effects.

      Can you think of any forms of attention for which this wouldn’t be true? Do you think attention requires several fundamentally different mechanisms? Obviously, focusing one’s visual attention on an object is (usually) carried out by swiveling the eyes, and that’s handled in a specialized way. But the thing that CAUSES the eyes to be directed to a particular point on the visual field quite probably involves modulating the pattern of intensity on a cortical map that is arranged visuotopically. Focusing one’s attention on a mental task sounds radically different, but I suggest it still involves suppressing and enhancing certain areas of cortex.

      • Terren says:

        Right, if I understand you correctly, you’re addressing my ‘meta’ concerns by pointing out that the maps that ‘attention’ acts on are sufficiently inclusive of all possible mental maps and operations – beyond simply sensory/motor maps. And that’s a good answer, especially in the sense that that’s what this post is about.

        My concerns with attention, upon further reflection, probably go beyond the scope of this blog post. When you talk about perception, anticipation, imagination, and so on, these correspond roughly to what we might call “processing modalities” – each one generates/contributes its own kind of subjective reality. Attention, characterized more as a filter, is not generative of anything. Attention reflects which of these modalities is current, and what the focus of the processing is.

        I think you are just saying that the cortical maps, interconnected as they are, support those ‘generative’ modes, as well as specifying a domain for the filtering activities of attention.

        Beyond that, my concerns relate to how the brain controls attention. I don’t worry about how the brain controls, say, anticipation because we can surely say that the cortex responsible for anticipation is always active, whether or not it is the focus of the attention. It can’t not be active, in the same way we can’t not see things when our eyes are open. But attention is not like this. Unless you are suggesting there is a part of the cortex devoted to focusing the attention… in which case we *could* call attention “generative” in the sense that it generates the current filter or focus of the subjective reality.

        I think such an “attention cortex” would correspond very easily with a homunculus – it would be the neural correlate of the Cartesian Theater, and of free will. I think something like that would have shown up by now in brain imaging experiments. Also, there are clearly times when the attention is focused from the bottom-up, like when you touch a hot stove. So I don’t think attention works like this.

        If there is no “attention cortex” then it remains to be seen (to me anyway) what, if anything, controls the attention, and how. Or is attention simply a description of whatever happens to be active in our brain? Are we biased to think of attention as a process (rather than merely a description) because we feel we can control it?

      • stevegrand says:

        I’m a bit late replying to this one Terren, sorry.

        > If there is no “attention cortex” then it remains to be seen (to me anyway) what, if anything, controls the attention, and how.

        It remains to be seen to me, too! I think it’s fair to say that much of the top-down control of attention occurs in the prefrontal cortex. If there’s a homunculus hiding anywhere, this is it. But of course it’s easy to wave my hands about this – I have to show HOW the PFC controls attention.

        It seems a reasonable hypothesis that the PFC is to the rest of cortex what the cortex is to the sensorimotor system. Perhaps the “sensory” data coming into the PFC is information about the state of cortex, and the “motor” outputs of the PFC modulate the rest of cortical activity. The PFC may contain a map of cortex, just as cortex contains a map of the retina. Certainly dysfunction of the PFC leads to the release of impulsive, reckless, not-always-conscious and yet often highly sophisticated behaviors.

        Perhaps top-down control of attention is one form (or even the only form) of such motor output? If the PFC supervenes over the various cortical impulses (which would otherwise act autonomously) by suppressing some actions and enhancing others, then perhaps it can supervene over the cortex’s perceptual systems as well?

        Of course the PFC is cortex too – it’s not like it’s a fundamentally distinct system – so maybe this executive control is really just the same thing that is happening throughout cortex, and only appears different because in the PFC it isn’t specialized for one modality (unless you call social behavior a modality). So perhaps MANY parts of cortex are able to modulate the activity in other cortical regions, and hence the cortex essentially controls its OWN attention?

        If our complex thoughts suggest that we should pay attention to something then we’ll “deliberately” shift our eyes or highlight some region of a cortical map (e.g. become attentive to what we know about recipes, because we’re trying to think of something to have for dinner). But if we catch a movement out of the corner of our eye then the thalamus will shift our attention by driving a saccade, long before we’re consciously aware of it. We may post-rationalize this as “I saw something move and decided to look to see what it was” but in truth we weren’t even consulted and the attention shift happened bottom-up. In-between these extremes there would be subconscious attentional shifts, for instance listening out for bird song because we saw something flutter and this will confirm its identity.

  3. bill topp says:

    mythical man month by frederick brooks

    • stevegrand says:

      Yeah, been there; seen that; got the missed deadline! I think it’s notable that computer game code has maybe only tripled in size since I started in the business, while development teams have expanded by a factor of 30 or more. Meanwhile, development schedules have actually got longer. If one man can dig a hole in one day, two men can dig it in a week.
