Mappa Psyche

I’m kind of feeling my way, here, trying to work out how to explain a lifetime of treading my own path, and the comments to yesterday’s post have shown me just how far apart we all wander in our conceptual journey through life. It’s difficult even to come to shared definitions of terms, let alone shared concepts. But such metaphors as ‘paths’ and ‘journeys’ are actually quite apt, so I thought I’d talk a little about the most important travel metaphor by far that underlies the work I’m doing: the idea of a map.

This is trivial stuff. It’s obvious. BUT, the art of philosophy is to state the blindingly obvious (or at least, after someone has actually stated it, everyone thinks “well that’s just blindingly obvious; I could have thought of that”), so don’t just assume that because it’s obvious it’s not profound!

So, imagine a map – not a road atlas but a topographical map, with contours. A map is a model of the world. It isn’t a copy of the world, because the contours don’t actually go up and down and the map isn’t made from soil and rock. It’s a representation of the world, and it’s a representation with some crucial and useful correspondences to the world.

To highlight this, think of a metro map instead, for a moment. I think the London Underground map was the first to do this. A metro map is a model of the rail network, but unlike a topographic map it corresponds to that network only in one way – stations that are connected by lines on the map are connected by rails underground. In every other respect the map is a lie. I’m not the only person to have found this out the hard way, by wanting to go from station A to station B and spending an hour travelling the Tube and changing lines, only to discover when I got back to the surface that station B was right across the street from station A! A metro map is an abstract representation of connectivity and serves its purpose very well, but it wouldn’t be much use for navigating above ground.

A topographical map corresponds to space in a much more direct way. If you walk east from where you are, you’ll end up at a point on the map that is to the right of the point representing where you started. Both kinds of map are maps, obviously, but they differ in how the world is mapped onto them. Different kinds of mapping have different uses, but the important point here is that both retain some useful information about how the world works. A map is not just a description of a place, it’s also a description of the laws of geometry (or in the case of metro maps, topology). In the physical world we know that it’s not possible to move from A to B without passing through the points in-between, and this fact is represented in topographical maps, too. Similarly, if a map’s contours suddenly become very close together, we know that in the real world we’ll find a cliff at this point, because the contours are expressing a fact about gradients.

So a map is a model of how the world actually functions, albeit at such a basic level that it might not even occur to you that you once had to learn these truths for yourself, by observation and trial-and-error. It’s not just a static representation of the world as it is, it also encodes vital truths about how one can or can’t get from one place to another.

And of course someone has to make it. Actually moving around on the earth and making observations of what you can see allows you to build a map of your experiences. “I walked around this corner and I saw a hill over there, so I shall record it on my map.” A map is a memory.

Many of the earliest maps we know of have big gaps where knowledge didn’t exist, or vague statements like “here be dragons”. And many of them are badly distorted, partly because people weren’t able to do accurate surveys, and partly because the utility of n:1 mapping hadn’t completely crystallized in people’s minds yet (in much the same way that early medieval drawings tend to show important people as larger than unimportant ones). So maps can be incomplete, inaccurate and misguided, just like memories, but they still have utility and can be further honed over time.

Okay, so a map is a description of the nature of the world. Now imagine a point or a marker on this map, representing where you are currently standing. This point represents a fact about the current state of the world. The geography is relatively fixed, but the point can move across it. Without the map, the point means nothing; without the point, the map is irrelevant. The two are deeply interrelated.

A map enables a point to represent a state. But it also describes how that state may change over time. If the point is just west of a high cliff face, you know you can’t walk east in real life. If you’re currently at the bottom-left of the map you know you aren’t going to suddenly find yourself at the top-right without having passed through a connected series of points in-between. Maps describe possible state transitions, although I’m cagey about using that term, because these are not digital state transitions, so if you’re a computery person, don’t allow your mind to leap straight to abstractions like state tables and Hidden Markov Models!

And now, here’s the blindingly obvious but really, really important fact: If a point can represent the current state of the world, then another point can represent a future state of the world; perhaps a goal state – a destination. The map then contains the information we need in order to get us from where we are to where we want to go.

Alternatively, remembering that we were once at point A and later found ourselves at point B enables us to draw the intervening map. If we wander around at random we can draw the map from our experiences, until we no longer have to wander at random; we know how to get from where we are to where we want to go. The map has learned.
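If you'll forgive a brief lapse into exactly the computery abstractions I warned against, the wander-then-navigate idea can be sketched in a few lines of code. Everything here – the toy world, the room names, the breadth-first route-finder – is my own illustrative scaffolding, not part of the theory:

```python
import random
from collections import deque

# A toy world: places connected by passages (the "territory").
# The wanderer doesn't know this structure; it only experiences moves.
WORLD = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def wander_and_map(steps=500, start="A", seed=0):
    """Wander at random, recording each experienced transition as map knowledge."""
    rng = random.Random(seed)
    learned_map = {}          # place -> set of neighbouring places we've seen
    here = start
    for _ in range(steps):
        there = rng.choice(WORLD[here])
        learned_map.setdefault(here, set()).add(there)
        here = there
    return learned_map

def route(learned_map, start, goal):
    """Breadth-first search over the *learned* map, not the world itself."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in learned_map.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # the map is still incomplete here

m = wander_and_map()
print(route(m, "A", "E"))  # e.g. ['A', 'B', 'D', 'E'] -- no more random wandering
```

Once enough wandering has been recorded, the creature never needs to wander again: the route falls out of the map, which is the whole point of having one.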

Not only do we know how to get from where we are to where we want to go, but we also know something about where we are likely to end up next – the map permits us to make predictions. Furthermore, we can contemplate a future point on the map and consider ways to get there, or look at the direction in which we are heading and decide whether we like the look of where we’re likely to end up. Or we can mark a hazard that we want to avoid – “Uh-oh, there be dragons!”. In each case, we are using points on the map to represent a) our current state, and b) states that could exist but aren’t currently true – in other words, imaginary states. These may be states to seek, to avoid or otherwise pay attention to, or they might just be speculative states, as in “thinking about where to go on vacation”, or “looking for interesting places”, or even simply “dropping a pin in the map, blindfold.” They can also represent temporarily useful past states, such as “where I left my car.” The map then tells us how the world works in relation to our current state, and therefore how this relates functionally to one of these imagined states.

By now I imagine you can see some important correspondences – some mappings – between my metaphor and the nature of intelligence. Before you start thinking “well that’s blindingly obvious, I want my money back”, there’s a lot more to my theories than this, and you shouldn’t take the metaphor too literally. To turn this idea into a functioning brain we have to think about multiple maps; patterns and surfaces rather than points; map-to-map transformations with direct biological significance; much more abstract coordinate spaces; functional and perceptual categorization; non-physical semantics for points, such as symbols; morphs and frame intersections; neural mechanisms by which routes can be found and maps can be assembled and optimized… Turning this metaphor into a real thinking being is harder than it looks – it certainly took me by surprise! But I just wanted to give you a basic analogy for what I’m building, so that you have something to place in your own imagination. By the way, I hesitate to mention this, but analogies are maps too!

I hope this helps. I’ll probably leave it to sink in for a while, at least as far as this blog is concerned, and start to fill in the details later, ready for my backers as promised. I really should be programming!

Introduction to an artificial mind

I don’t want to get technical right now, but I thought I’d write a little introduction to what I’m actually trying to do in my Grandroids project. Or perhaps what I’m not trying to do. For instance, a few people have asked me whether I’ll be using neural networks, and yes, I will be, but very probably not of the kind you’re expecting.

When I wrote Creatures I had to solve some fairly tricky problems that few people had thought much about before. Neural networks have been around for a long time, but they’re generally used in very stylized contexts, to recognize and classify patterns. Trying to create a creature that can interact with the world in real-time and in a natural way is a very different matter. For example, a number of researchers have used what are called randomly recurrent networks to evolve simple creatures that can live in specialized environments, but mine was a rather different problem. I wanted people to care about their norns and have some fun interacting with them. I didn’t expect people to sit around passively watching hundreds of successive generations of norns blundering around the landscape, in the hope that one would finally evolve the ability not to bump into things.

Norns had to learn during their own lifetimes, and they had to do so while they were actively living out their lives, not during a special training session. They also had to learn in a fairly realistic manner in a rich environment. They needed short- and long-term memories for this, and mechanisms to ensure that they didn’t waste neural real-estate on things that later would turn out not to be worth knowing. And they needed instincts to get them started, which was a bit of a problem because this instinct mechanism still had to work, even if the brains of later generations of norns had evolved beyond recognition. All of these were tricky challenges and it required a little ingenuity to make an artificial brain that was up to the task.

So at one level I was reasonably happy with what I’d developed, even though norns are not exactly the brightest sparks on the planet. At least it worked, and I hadn’t spent five years working for nothing. But at another level I was embarrassed and deeply frustrated. Norns learn, they generalize from their past to help them deal with novel situations, and they react intelligently to stimuli. BUT THEY DON’T THINK.

It may not be immediately obvious what the difference is between thinking and reacting, because we’re rarely aware of ourselves when we’re not thinking and yet at the same time we don’t necessarily pay much attention to our thoughts. In fact the idea that animals have thoughts at all (with the notable exception of us, of course, because we all know how special we are) became something of a taboo concept in psychology. Behaviorism started with the fairly defensible observation that we can’t directly study mental states, and so we should focus our attention solely on the inputs and outputs. We should think of the brain as a black box that somehow connects inputs (stimuli) with outputs (actions), and pay no attention to intention, because that was hidden from us. The problem was that this led to a kind of dogma that still exists to some extent today, especially in behavioral psychology. Just because we can’t see animals’ intentions and other mental states, this doesn’t mean they don’t have any, and yet many psychological and neurological models have been designed on this very assumption. Including the vast bulk of neural networks.

But that’s not what it’s like inside my head, and I’m sure you feel the same way about yours. I don’t sit here passively waiting for a stimulus to arrive, and then just react to it automatically, on the basis of a learned reflex. Sometimes I do, but not always by any means. Most of the time I have thoughts going through my mind. I’m watching what’s going on and trying to interpret it in the light of the present context. I’m worrying about things, wondering about things, making plans, exploring possibilities, hoping for things, fearing things, daydreaming, inventing artificial brains…

Thinking is not reacting. A thought is not a learned reflex. But nor is it some kind of algorithm or logical deduction. This is another common misapprehension, both within AI and among the general public. Sometimes, thinking equates to reasoning, but not most of the time. How often do you actually form and test logical propositions in your head? About as often as you perform formal mathematics, probably. And yet artificial intelligence was founded largely on the assumption that thinking is reasoning, and reasoning is the logical application of knowledge. Computers are logical machines, and they were invented by extrapolation from what people (or rather mathematicians, which explains a lot) thought the human mind was like. That’s why we talk about a computer’s memory, instructions, rules, etc. But in truth there is no algorithm for thought.

So a thought is not a simple learned reflex, and it’s not a logical algorithm. But what is it? How do the neurons in the brain actually implement an idea or a hope? What is the physical manifestation of an expectation or a worry? Where does it store dreams? Why do we have dreams? These are some of the questions I’ve been asking myself for the past 15 years or so. And that’s what I want to explore in this project. Not blindly, I should add – it’s not like I’m sitting here today thinking how cool it will be to start coming up with ideas. I already have ideas; quite specific ones. There are gaps yet, but I’m confident enough to stick my neck out and say that I have a fair idea what I’m doing.

How my theories work, and what that means for the design of neural networks that can think, will take some explaining. But for now I just wanted to let you know the key element of this project. My new creatures will certainly be capable of evolving, but evolution is not what makes them intelligent and it’s not the focus of the game. They’ll certainly have neural network brains, but nothing you may have learned about neural networks is likely to help you imagine what they’re going to be like; in fact it may put you at a disadvantage! The central idea I’m exploring is mental imagery in its broadest sense – the ability for a virtual creature to visualize a state of the world that doesn’t actually exist at that moment. I think there are several important reasons why such a mechanism evolved, and this gives us clues about how it might be implemented. Incidentally, consciousness is one of the consequences. I’m not saying my creatures will be conscious in any meaningful way, just that without imagery consciousness is not possible. In fact without imagery a lot of the things that AI has been searching for are not possible.

So, in short, this is a project to implement imagination using virtual neurons. It’s a rather different way of thinking about artificial intelligence, I think, and it’s going to be a struggle to describe it, but from a user perspective I think it makes for creatures that you can genuinely engage with. When they look at you, there will hopefully be someone behind their eyes in a way that wasn’t true for norns.

Is the human brain still in beta?

Or is it society that’s not yet fully debugged?

I’m supposed to be working hard at the moment, which is, of course, why I’m spending far too much time on Facebook. Anyway, yesterday and today a series of disparate Facebook threads seemed to come together as if to raise a single question, so I thought I’d ask for opinions.

1. There was this obscenely stupid video by Rick Barber, a Republican congressional candidate. The message of the video is that a) social welfare requires working people to pay taxes; b) being required to do something is tantamount to being enslaved; c) slavery is a bad thing; therefore d) social welfare is a bad thing and e) people (who look, in the video, remarkably like mindless zombies) should rise up like an army against it. Brilliant! The man is a syllogistic genius! My question is, what possible circumstances would conspire to make someone, who’s presumably at least capable of tying his own shoelaces unaided, think that this was a reasonable and defensible position on which to base a political campaign? Where was he and what was he doing at the moment when this pathetic, absurd and infantile idea actually started to seem like a good one? Did someone put him up to it or was the stupidity all his own? Did he fall foul of circumstances or was he pushed?

2. The British enquiry into the Iraq war has been told by a diplomat that he believes the government deliberately exaggerated claims about weapons of mass destruction. We kind of knew that already, after the famous “dossier” was released a few years ago. Understandably, some of my friends are thus calling for justice against Blair and Bush for deliberately starting a war. I’ve heard a number of explanations for why our leaders are supposed to have done this, generally focused around oil and international economics. In the abstract I can accept that the modern military/industrial complex might be what ‘caused’ the war in Iraq, but I find it very hard to believe that two intelligent (well, let me rephrase that: one intelligent), educated, family men, and their entire governments, would sit down one day and say to themselves “Hey, if only we declared war on Iraq we might get what we want.” Do reasonable people REALLY decide to cause the deaths of tens of thousands of innocents, just to further their own sinister aims, or even the legitimate aims of the country they represent? Politicians do seem to tend towards having psychopathic or at least narcissistic personalities, but are they really that dysfunctional? I doubt it. I’m sure Blair and even Bush felt they had little choice, under the circumstances. The problem is, they lied about the circumstances, so we can’t imagine where they were and what they were doing when this pathetic, absurd and infantile idea actually started to seem like a good one.

3. The oh so inappropriately named English Defence League is apparently on the march, stirring up racial hatred. Racial strife in a multicultural country is a genuine issue, but to what extent, on both sides, is this the result of deliberate decisions? In a largely Muslim neighborhood, people will, quite naturally, tend to behave like Muslims. I don’t suppose they do it to offend – they’re just responding to their context. Meanwhile, during a late night pub crawl, stupid white youths will, quite naturally, tend to behave like jerks. Under those circumstances of mutually-reinforcing opinion, it’s easy enough to see how anti-Muslim (or indeed anti-anything) rhetoric can escalate into the conviction that violence and abuse are somehow “good” responses. Did they do this of their own accord or were they “encouraged”? If the latter, by whom and why? And what in turn caused these shadowy figures to hold their views?

4. Oh, and I might as well include a couple of nice ladies who just knocked on my door and tried to tell me that they’ve based their entire emotional and intellectual (not to mention moral and ethical) lives on the belief that their book – the Book of Mormon – is the fount of all wisdom, because it was transcribed in 1830 by Joseph Smith Jr. from golden plates given him by an angel, incorporating the 3,000-year history of a tribe of Native Americans who were, as if any of this sounds even remotely plausible, followers of Jesus Christ. To be honest it would be easier and far more reasonable to found a religion on the works of J.R.R. Tolkien. They were sweet girls who didn’t really seem to know much about the details that underlay this belief. All they knew was that it was true and they should believe it, whatever the actual facts might be. In fact they reminded me of the Electric Monk from my favorite book, Dirk Gently’s Holistic Detective Agency. So in this instance I feel more comfortable drawing the conclusion that they believe what they believe, simply because they grew up in circumstances where, well, that’s what you believe, isn’t it? It’s not that they’re particularly dim, just victims of circumstance. And I don’t suppose they do much harm.

But my general question is this: how BAD are people, really? I honestly don’t know. My own faith in human nature has been shaken somewhat, these past few years. Not that I believe people are inherently bad, just that they don’t always act rationally. You knew that, of course, but I guess I didn’t really believe it. I’m so naive. But what actually happens to make a politician decide that looking after his fellow man is somehow a crime? What happens to make an educated, intelligent, socialist leader decide to ally with his political opposite and sentence thousands to death? What actual circumstances convince a bunch of louts that they’re crusading for a noble cause by throwing bricks at people in turbans? What, in turn, overcomes the masterminds that surely lie behind this (and behind Bush, etc.), such that they come to believe in their own cause? Or do they?

It’s easy to be glib, lean on the bar and simply say that politicians, etc. are greedy psychopaths, but surely the truth is that they either find themselves trapped in a position where they have no option, or they believe they’re trapped in a position where they have no option, because somehow things have conspired to distort their perspective? Is evil intent really a property of social systems, not individuals? Did Saddam genuinely believe he was good for his people, for instance? After all, he was holding an artificial and rebellious collection of tribes together in some sort of productive unity, albeit with an iron grip. Was it the construction of Iraq that created Saddam? Was it the military/industrial complex as an entity in its own right (as opposed to individual people within it) that forced Bush and Blair into a situation where war became inevitable? Bush and Blair were the hub of the situation: they alone had the power to start or stop the war, in theory, so they have to take much of the responsibility for it. But did they actually have the opportunity to prevent conflict? We just don’t know, because they lied about it so much that we can’t yet see the sequence of events which might have made them feel they were taking the right action. Perhaps they were just as hoodwinked by circumstances as the girls from the Church of Latter Day Saints, who I doubt would have believed a single word Joseph Smith said, if they’d ever been given a chance to look at the evidence without first being brainwashed by the environment in which they grew up.

Or are politicians really immoral, amoral or indeed mentally ill? Most people I’ve talked to are firmly of the opinion that politicians and businessmen are, in general, motivated purely and knowingly by greed. Certainly narcissism is a perfect qualification for anyone who wants to succeed in politics. Most people think Hitler was a psychopath, and the evidence is supportive. In fact most people seem to think most leaders are psychopaths, or at least greedy and narcissistic. And yet we still vote for them – is that because the only other candidates are just as bad?

Another thread I wanted to bring into this was a documentary I watched last night, about fetishes and sadomasochism. Apart from the two women, who had their own reasons, all the clients interviewed at this S&M brothel were bankers or CEOs. There were probably politicians, too, but they presumably had more sense than to go on camera stark naked, on all fours, wearing bondage gear. All of them had serious issues about control, stretching back into childhood. In general they seemed desperately to need severe doses of submissiveness in order somehow to balance the domination that they exert in their day jobs. They craved the chance to be slaves and paid good money to be humiliated. If their evening activities were any guide at all to their daytime ones then nothing they do should be regarded as rational or moderate, poor devils.

So is the truth a composite of my two hypotheses? Are people in power genuinely corrupt and self-serving, but only because the System itself conspires to make this happen? Have we got ourselves into a situation in which corruption is self-sustaining and successful? If so, perhaps we are doing the wrong thing by holding the individuals responsible. Perhaps that just distracts us from the real culprit and satisfies our innate need to embody something that’s really incorporeal. People who are three feet tall tend to end up in the movie business more often than in basketball. Similarly, some poor suckers are the victims of childhood abuse, domineering fathers or whatever, and end up as politicians and bankers, because that’s what their neuroses and psychoses best suit them for. They happen to be deranged in just the right way to make them ruthless and hence successful businessmen, or self-centered, corruptible politicians. And then we vote them in, or buy their products, or lend them our money, because we, too, feel we have no choice. I guess that makes us just as culpable as them, or them just as innocent as us.

Let me finish with one last Facebook post. This one was a link to a robotics project that is clearly funded by, and heavily tailored towards, the Military. The research team is developing robot helicopters that can fly through windows and latch onto a target. It doesn’t take much imagination to see what military applications this might have, and those applications are potentially very destabilizing, because they provide the opportunity to blow people up at zero risk to the person who chooses to do it. Warfare evolved under fairer circumstances than these and we really don’t know what will happen when wars can be fought from an armchair. Now, quite a large proportion of robotics research is actually funded by the Military – without that funding the field of intelligent robotics probably wouldn’t exist. Do the researchers have qualms about the intended applications of their work or where their money comes from? I sincerely hope and assume so. Are they going to stop? I doubt it. They have good motives, and this is the only way they feel they can make progress with them. They justify it to themselves. I’ve been there – I know how easy it is to turn a blind eye to your own misgivings, or assume it’s someone else’s problem. I don’t do that kind of work, but then I don’t have a job either. That may well be the price people would have to pay. And so it goes: Innocent, well-meaning people do things that could have terrible consequences, because, well, because if they don’t do it someone else is going to, aren’t they, and that will be worse. The system conspires to make swords instead of plowshares, and yet everyone’s just doing their best under the circumstances.

It’s a problem.

Brainstorm 7: How long is now?

I worry too much. I live too far into the future; always so acutely aware of the potential distant knock-on effects of my actions that I’m sometimes quite paralyzed. On the downside this can be a real handicap, but on the upside it means I’m intelligent, because seeing into the future is what intelligence is for. But how? And how do we differentiate between past, present and future? What do we really mean by “now”?

My main thesis for this project is that the brain is a prediction machine. In other words I think it takes so long for nerve signals to reach the brain and be analyzed by it (you may be surprised to know it takes about a tenth of a second merely for signals to reach the primary visual cortex from the retina, never mind be turned into understanding), that we’d be dead by now if it weren’t for our ability to create a simulation of the world and run it ahead of time, so that we are ready for what’s about to happen instead of always reacting to what has already happened. I’m suggesting that this simulation ability derives, at least in part, from a capacity to make small predictions based on experience, at ALL levels of the nervous system. These little fragments of “if this happens then I suspect that will happen next” are there to counter processing delays and reaction times, and give us the ability to anticipate. But they also (I suggest) provide the building blocks for other, more interesting things: a) our ability to create a contextual understanding of the world – a stable sense of what is happening; b) our ability to form plans, by assembling sequences of little predictions that will get us from a starting state to a goal state; and c) our capacity for imagination, by allowing us to link up sequences of cause and effect in an open-ended way. The capacity for imagination, in turn, is what allows us to be creative and provides the virtual world in which consciousness arises and free thought (unconstrained by external events) can occur.

I rather think some clever tricks are involved, most especially the ability to form analog models of reality, as opposed to simple chains of IF/THEN statements, and the ability to generalize from one set of experiences to similar ones that have never been experienced (even to the extent that we can use analogies and metaphors to help us reason about things we don’t directly understand). But I’d say that the root of the mechanism lies in simple statistical associations between what is happening now and what usually happens next.

So let’s look at a wiring diagram for a very simple predictive machine.

This is the simple touch-sensitive creature I talked about in Brainstorm 6. The blue neurons receive inputs, from touch-sensitive nerve endings, which occurred some milliseconds ago on its skin. The red neuron shows two touch inputs being compared (in this case the cell has become tuned to fire if the right input is present just before the left input). I think we can call the red neuron an abstraction: it takes two concrete “I am being touched” inputs and creates an abstract fact – “I am being stroked leftwards here”. This abstraction then becomes an input for higher-level abstractions and so on.
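For concreteness, here’s a toy version of that red abstraction cell in code. The sensor histories, the fixed one-tick lag and the function name are all my own simplifications of what, in the real design, would be a tuned neuron:

```python
# A toy version of the red "abstraction" cell: it fires when the right-hand
# touch sensor was active just before the left-hand one fires -- i.e. the
# skin is being stroked leftwards at this spot.

def stroked_leftwards(right_history, left_history, lag=1):
    """True if the right sensor fired `lag` ticks before the left sensor fires now."""
    if len(right_history) <= lag:
        return False
    return bool(left_history[-1] and right_history[-1 - lag])

# Simulate a leftward stroke: the touch moves from the right sensor to the left.
right = [1, 0]   # right sensor fired last tick
left  = [0, 1]   # left sensor is firing now
print(stroked_leftwards(right, left))  # True: two concrete touches become one abstract fact
```

The output of such a cell is itself just a signal, so it can feed higher-level abstractions in exactly the same way the raw touch inputs fed it.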

The green neuron is then a prediction cell. It is saying, “if I’m being stroked leftwards at this point, then I expect to be touched here next.” Other predictions may be more conditional, requiring two or more abstractions, but in this case one abstraction is enough. The strength of the cell’s response is a measure of how likely it is that this will happen. The more often the prediction cell is firing at the moment the leftmost touch sensor is triggered, the stronger the connection will become, and the more often this fails to happen, the weaker it will become (neurologically I’d hypothesize that this occurs due to LTP and LTD – long-term potentiation and long-term depression – in glutamate receptors, giving it an interesting nonlinear relationship to time).

So what do we DO with this prediction? I’m guessing that one consequence is surprise. If the touch sensor fires when the prediction wasn’t present, or the prediction occurs and nothing touches that sensor, then the creature needs a little jolt of surprise (purple neuron). Surprise should draw the creature’s attention to that spot, and alert it that something unexpected is happening. It may not be terribly surprising that a particular touch sensor fails to fire, but the cumulative effect of many unfulfilled predictions tells the creature that something needs to be worried about, at some level. On the other hand, if everything’s going according to expectations then no action need be taken and the creature can even remain oblivious.
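Here’s one way the green prediction cell and the purple surprise cell might be sketched together. The single scalar weight, the strengthen/weaken constants and the surprise formula are placeholder assumptions of mine, standing in for the LTP/LTD story rather than modelling it:

```python
# Green cell: predicts the next touch from an abstraction, and learns from
# whether the prediction came true. Purple cell: fires on any mismatch
# between expectation and sensation, in either direction.

class PredictionCell:
    def __init__(self, strengthen=0.1, weaken=0.05):
        self.weight = 0.5            # confidence that the predicted touch follows
        self.strengthen = strengthen
        self.weaken = weaken

    def predict(self, abstraction_active):
        # Output strength reflects how reliable the prediction has proven so far.
        return self.weight if abstraction_active else 0.0

    def learn(self, predicted, touch_arrived):
        if predicted:
            if touch_arrived:
                self.weight = min(1.0, self.weight + self.strengthen)
            else:
                self.weight = max(0.0, self.weight - self.weaken)

def surprise(prediction, touch):
    """Purple cell: a jolt whenever expectation and sensation disagree."""
    return abs((1.0 if touch else 0.0) - prediction)

cell = PredictionCell()
# A world where a leftward stroke is always followed by the next touch:
for _ in range(20):
    p = cell.predict(abstraction_active=True)
    cell.learn(predicted=p > 0, touch_arrived=True)
print(cell.weight)                   # confidence has climbed to its ceiling
print(surprise(cell.weight, False))  # a strong jolt if the touch then fails to arrive
```

A single unfulfilled prediction produces only a small jolt; it’s the accumulation of many such jolts across the network that would tell the creature something is genuinely wrong.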

But for the rest of my hypothesis to make sense, the prediction also needs to chain with other predictions. We need this to be possible so that top-down influences (not shown on the diagram) can assemble plans and daydreams, and see far into the future. But I believe there has to be an evolutionary imperative that predates this advanced capacity, and I’d guess that this is the need to see if a trend leads ultimately to pain or pleasure (or other changes in drives). Are we being stroked in such a way that it’s going to hurt when the stimulus reaches a tender spot? Or is the moving stimulus a hint that some food is on its way towards our mouth, which we need to start opening?

Now here comes my problem (or so I thought): In the diagram I’m assuming that the prediction gets mixed with the sensory signal (the green axon leading into the blue cell) so that predictions act like sensations. This way, the organism will react as if the prediction came true, leading to another prediction, and another. Eventually one of these predictions will predict pleasure or pain.
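
Treating each prediction as a pseudo-sensation makes the chain easy to caricature in code. In this sketch (states, rewards and the chain itself are all invented for illustration) one real touch propagates forward, prediction by prediction, until some predicted state carries a pain or pleasure value:

```python
# Each entry maps a state to (predicted next state, associated reward).
CHAIN = {
    "touch_at_1": ("touch_at_2", 0.0),
    "touch_at_2": ("touch_at_3", 0.0),
    "touch_at_3": ("tender_spot", -1.0),   # this one is going to hurt
}

def chase_prediction(state, chain, max_depth=10):
    """Follow predictions as if each one had come true; stop at the
    first nonzero predicted reward, or when the chain runs out."""
    for _ in range(max_depth):
        if state not in chain:
            return state, 0.0
        state, reward = chain[state]
        if reward != 0.0:
            return state, reward
    return state, 0.0

# A touch at sensor 1 lets the creature foresee pain three steps ahead.
final_state, reward = chase_prediction("touch_at_1", CHAIN)
```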

[Technical note: Connectionists wouldn’t think this way. They’d assume that pleasure/pain are back-propagated during learning, such that this first prediction neuron already “knows” how much pleasure or pain is likely to result further down the line, since this fact is stored in its synaptic weight(s). I’m not happy with this. For one thing, thinking is never going to arise in such a system, because it’s entirely reactive. Secondly (and this is perhaps why brains DO think), the reward value for this prediction is likely to be highly conditional upon other active predictions. This isn’t obvious in such a simple model, but in a complete organism the amount of pleasure/pain that ultimately results may depend very heavily on what else is going on. It may depend on the nature of the touch, or have its meaning changed radically by the context the creature is in (is it being threatened or is something having sex with it?). It’s therefore not possible to apportion a fixed estimate of reward by back-propagating it through the network. That sort of thing works up to a point in an abstract pattern-recognition network like a three-layer perceptron, but not in a real creature. In my humble opinion, anyway!]

Oh yes, my problem: So, if a prediction acts as if it were a sensation (and this is the only way it can make use of the subsequent (red) abstraction cells in order to make further predictions) then how does the organism know the difference between what is happening and what it merely suspects will happen??? If all these predictions are chained together, the creature will feel as if everything that might happen next already is happening.

This has bugged me for the past few days. But this morning I came to a somewhat counter-intuitive conclusion, which is that it really doesn’t matter.

What does “now” actually mean? We think of it as the infinitesimal boundary between past and future; between things that are as yet unknown and our memories. But now is not infinitesimal. I realized this in the shower. I was looking at the droplets of water spraying from the shower-head and realized that I can see them. This perhaps won’t surprise you, but it did me, because I’ve become so conditioned now to the view that the world I’m aware of is actually a predictive simulation of reality, not reality itself. This HAS to be true (although now is not the time to discuss it). And yet here I was, looking at actual reality. I wasn’t inventing these water droplets and I couldn’t predict their individual occurrence. Nor was the information merely being used to synchronize my model and keep my predictions in line with how things have actually turned out – I was consciously aware of each individual water droplet.

But I was looking at water that actually came out of my shower-head over a tenth of a second ago; maybe far longer. By the time the signals had caused retinal ganglion cells to fire, zoomed down my optic nerve, chuntered through my optic chiasm and lateral geniculate nucleus, and made their tortuous and mysterious way through my cortex, right up to the level of conscious awareness, those droplets were long gone. So I was aware of the past and only believed I was aware of the present. (In fact, just to make it more complex, I think I was aware of several pasts – the moment at which I “saw” the droplets was different from the moment that I knew that I’d seen the droplets.)

Yet at the same time, I was demonstrably aware of an anticipated present, based upon equally delayed but easier-to-extrapolate facts. I wasn’t simply responding to things that happened a large fraction of a second ago. If a fish had jumped out of the shower-head I’d certainly have been surprised and it would have taken me a while to get to grips with events, but for the most part I was “on top of the situation” and able to react to things as they were actually happening, even though I wouldn’t find out about them until a moment later. I was even starting bodily actions in anticipation of future events. If the soap had started to slip I’d have begun moving so that I could catch it where it was about to be, not where it was when I saw it fall. But for the most part my anticipations exactly canceled out my processing delays, so that, as far as I knew, I was living in the moment.

So I was simultaneously aware of events that happened a fraction of a second ago, as if they were happening now; events that I believed were happening now, even though I wouldn’t get confirmation of them for another fraction of a second; and events that hadn’t even happened yet (positioning my hands to catch the soap in a place it hadn’t even reached). ALL of these were happening at once, according to my brain; they all seemed like “now”.

Perhaps, therefore, these little predictive circuits really do act as if they are sensations. Perhaps the initial sensation is weak, and the predictions (if they are confident) build up to create a wave of activity whose peak is over a touch neuron that won’t actually get touched until some time in the future. Beyond a certain distance, the innate uncertainty or conditionality of each prediction would prevent the wave from extending indefinitely. Perhaps this blurred “sensation” is what we’re actually aware of. Perhaps for touch there’s an optimum distance and spread. In general, the peak of the wave should lie over the piece of skin that will probably get touched X milliseconds into the future, where X is the time it takes for an actual sensation to reach awareness or trigger an appropriate response. But it means the creature’s sense of “now” is smeared. Some information exists before the event; some reaches awareness at the very moment it is (probably) actually happening; the news that it DID actually happen arrives some time later. All of this is “now.”
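
Just the decay aspect of that wave can be pictured very simply. In this sketch (entirely my own assumption: a fixed per-step confidence that multiplies along the chain), activity falls off geometrically ahead of the stimulus, which is why the wave can’t extend indefinitely:

```python
def prediction_wave(start, n_sensors, confidence=0.8):
    """Activity over a row of touch sensors: at each sensor ahead of
    the stimulus, the product of the per-step prediction confidences
    between here and there. The innate uncertainty of each link makes
    the wave die away with distance."""
    activity = [0.0] * n_sensors
    level = 1.0
    for i in range(start, n_sensors):
        activity[i] = level
        level *= confidence   # each further step is less certain
    return activity

wave = prediction_wave(start=2, n_sensors=8)
# sensors behind the stimulus are silent; ahead of it, activity
# decays: 1.0, 0.8, 0.64, ...
```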

Or perhaps not. After all, if I imagine something happening in my mind, it happens more or less in real time, as a narrative. I don’t see the ghosts of past, present and future superimposed. This, though, may be due to the high-level selection process that is piecing together the narrative. Perhaps the building blocks can only see a certain distance into the future. Primitive building blocks, like primary sensations, only predict a few milliseconds. Highly abstract building blocks, like “we’re in a bar; someone is offering me a drink” predict much further into the future, but only in a vague way. To “act out” what actually happens, these abstractions need to assemble chains of more primitive predictions to fill in the details, and so the brain always has to wait and see what happens in its own story, before initiating the next step. I’m not at all sure about this, but I can’t see any other way to assemble a complex, arbitrarily detailed, visual and auditory narrative inside one’s head without utilizing memories of how one thing leads to another at a wide range of abstractions. These memories have to have uses beyond conscious, deliberate thought, and so must be wired into the very process of perception. And in order for them to be chained together, the predicted outcomes need to behave as if they were stimuli.

I’m going to muse on this some more yet. For instance I have a hunch that attention plays a part in how far a chain of predictions can proceed (while prediction in turn drives attention), and I haven’t even begun to think about precisely how these simulations can be taken offline for use as plans or speculations, or precisely how this set-up maps onto motor actions (in which I believe intentions are seen as a kind of prediction). But this general architecture of abstractions and predictions is beginning to look like it might form the basis for my artificial brain. Of course there’s an awful lot of twiddly bits to add, but this seems like it might be a rough starting point from which to start painting in some details, and I have to start somewhere. Preferably soon.

Brainstorm 6: All change

In my last Brainstorming session I was musing on associations and asked myself what is being associated with what, that enables a brain to make a prediction (and hence perform simulations). A present state is clearly being associated with the state that tends to follow it, but what does that mean? It’s obvious for some forms of information but a lot less obvious for others and for the general case. Learning that one ten-million-dimensional vector tends to follow another is neither practical nor intelligent – it doesn’t permit generalization, which is essential. Something more compact and meaningful is happening.

If the brain is to be able to imagine things, there must be a comprehensive simulation mechanism, capable of predicting the future state in any arbitrary scenario (as long as it’s sufficiently familiar). If I imagine a coffee cup in my hand and then tilt my imaginary hand, the cup falls. I can even get a fair simulation of how it will break when it hits the floor. If I imagine myself talking to someone, we can have a complete conversation that matches the kinds of thing this person might say in reality – I have a comprehensive simulation of their own mind inside mine. It’s comparatively easy to see how a brain might predict the future position of a moving stimulus on the retina, but a lot less obvious how this more general kind of simulation works. Coffee cups don’t have information about how they fall built into their properties, nor do they fall on a whim. Somehow it’s the entirety of the situation that matters – the interaction of cup and hand – and knowledge of falling objects in general (as well as the physical properties of pottery) somehow gets transferred automatically into the simulation as needed.

Pierre-Simon Laplace once said: “An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed … the future just like the past would be present before its eyes.” In other words, if you know the current state of the universe precisely then you can work out its state at any time in the future. He wasn’t entirely right, as it happens – if Laplace was himself that intellect, then he would also be part of the universe, and so the act of gathering the data would change some of the data he needed to gather. He could never have perfect knowledge. And we know now that the most infinitesimal inaccuracy will magnify very rapidly until the prediction is out of whack with reality. But even so, in practical terms determinism works. If our artificial brain knew everything it was capable of knowing about the state of its region of the universe (in other words, the value of a ten-million-dimensional vector) then it would have enough knowledge to make a fair stab at the value of this vector a short while later. If that weren’t true, intelligence wouldn’t be possible.

But Laplace had a very good point when he mentioned “all forces that set nature in motion.” It’s not just the state of the world that matters, but the rate and direction of change. It’s an interesting philosophical question, how an object can embody a rate of change at an instant in time (discuss!). It has a momentum, but that’s dodging the issue. Nevertheless, change is all-important, and real brains are far more interested in change than they are in static states. In fact they’re more-or-less blind to things that don’t change – quite literally. If you can hold your eyes perfectly still when focusing on a fixed point, you’ll go temporarily blind in a matter of seconds! Try it – it’s not easy but it can be done with practice and it’s quite startling.

Getting preoccupied with recognizing objects, etc. fails to help me with this question of prediction, and vision is misleading because it’s essentially a movement-detection system that has been heavily modified by evolution to make it possible to establish facts about things that aren’t moving. The static world is essentially transformed into a moving one (e.g. through microsaccades) before being analyzed in ways we don’t yet understand and perhaps never will, unless we first understand how change and prediction are handled more generally. So how about our tactile sense? Maybe that’s a good model to think about for a while?

Ok, I’ll start with a very simple creature – a straight line, with touch sensors along its surface. If I touch this creature with my finger one of the sensors will be triggered (because its input has changed), but will soon become silent again as the nerve ending habituates. At this point the creature can make a prediction, but not a very useful one: my finger might move left or it might move right. It can’t tell which at first, but if my finger starts to move left, it can immediately predict where it’s going to go next. It’s easy to imagine a neuron connected to a pair of adjacent sensors, which will fire when one sensor is triggered before the other.

Eureka! We have a prediction neuron – it knows that the third sensor in the line is likely to be triggered shortly. In fact we can imagine a whole host of these neurons, tuned to different delays and hence sensitive to speed. Each one can make a prediction about which other sensors are likely to be touched within a given period. We can imagine each neuron feeding some kind of information back down to the sensor that it is predicting will be touched. The neurons have a memory of the past, which they can compare to the present in order to establish future trends. The more abstract this memory, the more we can describe it as forming our present context. Context is all-important. If you’ve ever woken from a general anesthetic, you’ll know that it takes a while to re-establish a context – who you are, where you are, how you got there – and until you have this you can’t figure out what’s likely to happen next.
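
That direction-and-speed-tuned neuron can be caricatured as a coincidence detector over a pair of adjacent sensors with a preferred delay (all names and numbers here are mine, for illustration):

```python
def direction_cell(events, earlier, later, delay, tolerance=1):
    """Toy direction/speed-tuned neuron. `events` maps sensor index to
    the tick at which it fired (absent = never fired). The cell fires
    if its 'later' sensor fired about `delay` ticks after its
    'earlier' sensor - i.e. the stroke moved the right way, at roughly
    the right speed."""
    t0, t1 = events.get(earlier), events.get(later)
    if t0 is None or t1 is None:
        return False
    return abs((t1 - t0) - delay) <= tolerance

# A finger moving rightwards across sensors 0 and 1, one tick apart:
events = {0: 10, 1: 11}
moving_right = direction_cell(events, earlier=0, later=1, delay=1)
moving_left = direction_cell(events, earlier=1, later=0, delay=1)
```

A bank of such cells with different `delay` values gives the host of speed-sensitive predictors described above; each one implies which sensor should be touched next, and when.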

So far, so good. We have a reciprocal connection of the kind that seems to be universal in the brain. We can imagine a further layer of neurons that listen to these simpler neurons and develop a more general sense of the direction and speed of movement, which is less dependent on the actual location of the stimulus. By the time we get a few layers deep, we have cells that can tell us if the stroking of my finger is deviating from a straight line (well, we could if my simplified creature wasn’t one-dimensional!).

But what’s the point of feeding back this information to the sensory neurons themselves? The first layer of cells is telling specific sensory neurons to expect to be touched in a few milliseconds. Big deal – they’ll soon find out anyway. Nevertheless, two valuable pieces of information come out of this prediction:

Firstly, if a sensory neuron is told to expect a touch and it doesn’t arrive, we want our creature to be surprised. Things that just behave according to expectations can usually be safely ignored, and we only want to be alerted to things that don’t do what we were expecting. Surprise gives us a little shock – it causes a bunch of physiological responses. We may get a little burst of adrenaline, to prepare us in case we need to act, and our other sensory systems get alerted to pay more attention to the source of the unexpected change (this is called an “orienting response”). Neurons higher up in the system are thus primed and able to make decisions about what, if anything, to do about this unexpected turn of events. The shock will ripple up the system until something finally knows what to do about that sort of thing. Most of the time this will be an unconscious response (like when we flick an insect off our arm) but sometimes nothing will know how to deal with this, and consciousness needs to get in on the act.

Secondly, once we have a hunch about where the stimulus is going to show up next, we can start to look further ahead to where it is likely to be heading. The more often our low-level predictions are confirmed, the more confident we can be, and the more time we’ve had in which to make this ripple of predictive activity travel ahead of the stimulus, to figure out what might happen in a few moments’ time. Perhaps my finger is stroking along the creature towards a tender spot that will hurt it; perhaps it’s moving in the other direction, towards the creature’s mouth, where it has a hope of eating my finger. Pain or pleasure get predicted, and behavior results whenever one or the other seems likely.

We have to presume that all of this stuff wires itself up through experience – by association. The first layer of sensory neurons learns when the sensor it is associated with is about to be touched, by understanding statistical relationships between the states of neighboring sensors. These first-level neurons presumably cooperate and compete with each other to ensure that each one develops a unique tuning and all possible circumstances get represented (this is exactly homologous, IMHO, to what happens in primary visual cortex, with edge-orientation/motion-sensitive neurons). The higher layers, which make longer-term predictions, learn to associate certain patterns of movement with pain or pleasure. The most abstract layers are presumably capable of learning that certain responses maximize pleasure or minimize pain.

Leaving aside the question of how these responses get coordinated, we now have a complete behavioral mechanism. And it’s NOT a stimulus-response system. The behavior is being triggered by predictions of what is about to happen, not what has just happened (this is a moot point and you may object that the system is still responding to the past stimuli, but I think an essential threshold has been crossed here and it’s fair to call this an anticipatory mechanism).

It’s clear that somehow the prediction needs to be compared to reality, and surprise should be generated if they don’t match, and it’s clear that predictions need to be able to associate themselves with reward. Somehow predictions also need to take part in servo action – actions are goal-directed, and hence are themselves predictions of a future state. Comparing what your sensors predict is going to happen, to what you intend to happen, is what allows you to make anticipatory changes and bring reality into line with your intentions. I need to think about that a bit, though.

But what about the ability to use this predictive mechanism to imagine possible futures? We presumably now have the facility to imagine a high-level construct, such as “let’s suppose I’m feeling someone stroke my skin” and actually feel the stroke occurring, as these higher-level neurons pass down their predictions to lower levels at which individual touch sensors are told to expect/pretend they’ve been stimulated. Although obviously this time we shouldn’t be surprised when nothing happens! The surprise response needs to be suppressed, and somehow the predictions ought to stand in for the sensations. That has implications for the wiring and all sorts of questions remain unresolved here.

It’s much harder, though, to see how we can assemble an entire context in our heads – the hand and the coffee cup, say. Coffee cups only fall when hands drop them. Dropping something only occurs when a hand is placed at a certain set of angles. A motor action is associated with a visual change, but only in a particular class of contexts, and the actual visual change is also highly context-dependent: If a cup was in your hand, that’s what you’ll see fall. Remarkably, if you imagine holding a little gnome in your hand instead, what you’ll see is a falling gnome, not a falling cup, even if you’ve never actually dropped a minuscule fantasy creature before in your life! In fact your imaginary gnome may even surprise you by leaping to safety! Somehow the properties of objects are able to interact in a highly generalizable way, and these interactions can trigger mental imagery, which eventually trickles down to the actual sensory system as if they’d really occurred (there are several lines of evidence to suggest that when we imagine something we “see” it using the same parts of our visual system that would be active if we’d really seen it).

Somehow the brain encodes cause and effect, at many levels, in a generalizable way. Complex chains of inference occur when we mentally decide to rotate our hand and see what happens to the thing it was holding, and the ability to make these inferences must arise from statistical learning that is designed to predict future states from past ones.

And somehow I have to come up with just such a general scheme, but at a level of abstraction suitable for a game. My creatures are not going to be covered in touch sensors or see the world in terms of moving colored pixels. It’s a shame really, because I understand these things at the low level – it’s the high level that still eludes me…

P.S. This post got auto-linked to a post on the question of why we can’t tickle ourselves (I’m assuming you’re not schizophrenic here, or you won’t know what I’m talking about, because schizophrenics reportedly can tickle themselves!). We can’t tickle ourselves because our brain knows the difference between things we do and things that get done to us (self/non-self determination). If we try to tickle ourselves, we predict there will be a certain sensation, and this prediction is used to cancel out the actual sensation. It’s pretty important for an organism to differentiate between things it does to the world and things the world does to it (bumping into something feels the same as being bumped into, but the appropriate responses are different). So here’s another pathway that requires anticipation, and another example of the brain as a simulation engine.
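
That cancellation is one line of arithmetic: the predicted, self-caused component of the sensation is subtracted before anything reaches awareness. A minimal sketch (the function name and intensity values are invented):

```python
def felt_intensity(actual, predicted_self):
    """What survives after the brain's prediction of its own
    self-generated touch is cancelled out of the actual sensation."""
    return max(0.0, actual - predicted_self)

# Someone else tickles us: nothing was predicted, full sensation.
external = felt_intensity(actual=1.0, predicted_self=0.0)

# We tickle ourselves: the prediction cancels almost everything,
# leaving only a faint residue - hence no tickle.
self_tickle = felt_intensity(actual=1.0, predicted_self=0.9)
```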

Brainstorm 5: joining up the dots

I promised myself I’d blog about my thoughts, even if I don’t really have any and keep going round in circles. Partly I just want to document the creative process honestly – so this includes the inevitable days when things aren’t coming together – and partly it helps me if I try to explain things to people. So permit me to ramble incoherently for a while.

I’m trying to think about associations. In one sense the stuff I’ve already talked about is associative: a line segment is an association between a certain set of pixels. A cortical map that recognizes faces probably does so by associating facial features and their relative positions. I’m assuming that each of these things is then denoted by a specific point in space on the real estate of the brain – oriented lines in V1 and faces in the FFA. In both these cases there are several features at one level, which are associated and brought together at a higher level. A bunch of dots maketh one line. Two dark blobs and a line in the right arrangement maketh a face. A common assumption (which may not be true) is that neurons do this explicitly: the dendritic field of a visual neuron might synapse onto a particular pattern of LGN fibres carrying retinal pixel data. When this pattern of pixels becomes active, the neuron fires. That specific neuron – that point on the self-organizing map – therefore means “I can see a line at 45 degrees in this part of the visual field.”

But the brain also supports many other kinds of associative link. Seeing a fir tree makes me think of Christmas, for instance. So does smelling cooked turkey. Is there a neuron that represents Christmas, which synapses onto neurons representing fir trees and turkeys? Perhaps, perhaps not. There isn’t an obvious shift in levels of representation here.

Not only do turkeys make me think of Christmas, but Christmas makes me think of turkeys. That implies a bidirectional link. Such a thing may actually be a general feature, despite the unidirectional implication of the “line-detector neuron” hypothesis. If I imagine a line at 45 degrees, this isn’t just an abstract concept or symbol in my mind. I can actually see the line. I can trace it with my finger. If I imagine a fir tree I can see that too. So in all likelihood, the entire abstraction process is bidirectional and thus features can be reconstructed top-down, as well as percepts being constructed/recognized bottom-up.

But even so, loose associations like “red reminds me of danger” don’t sound like the same sort of association as “these dots form a line”. A line has a name – it’s a 45-degree line at position x,y – but what would you call the concept that red reminds me of danger? It’s just an association, not a thing. There’s no higher-level concept for which “red” and “danger” are its characteristic features. It’s just a nameless fact.

How about a melody? I know hundreds of tunes, and the interesting thing is, they’re all made from the same set of notes. The features aren’t what define a melody, it’s the temporal sequence of those features; how they’re associated through time. Certainly we can’t imagine there being a neuron that represents “Auld Lang Syne”, whose dendrites synapse onto our auditory cortex’s representations of the different pitches that are contained in the tune. The melody is a set of associations with a distinct sequence and a set of time intervals. If someone starts playing the tune and then stops in the middle I’ll be troubled, because I’m anticipating the next note and it fails to arrive. Come to that, there’s a piano piece by Rick Wakeman that ends in a glissando, and Wakeman doesn’t quite hit the last note. It drives me nuts, and yet how do I even know there should be another note? I’m inferring it from the structure. Interestingly, someone could play a phrase from the middle of “Auld Lang Syne” and I’d still be able to recognize it. Perhaps the tune is represented by many overlapping short pitch sequences? But if so, then this cluster of representations is collectively associated with its title and acts as a unified whole.
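
The “many overlapping short pitch sequences” idea is easy to sketch: store a tune as the set of its short interval n-grams, so that a phrase from the middle still matches, and using intervals rather than absolute pitches makes recognition key-invariant for free. The note numbers below are arbitrary illustrations, not real melodies:

```python
def intervals(notes):
    """Pitch steps between successive notes."""
    return [b - a for a, b in zip(notes, notes[1:])]

def ngrams(seq, n=3):
    """All overlapping length-n sub-sequences."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def matches(tune_notes, phrase_notes, n=3):
    """True if every short interval sub-sequence of the phrase occurs
    somewhere in the tune - a crude stand-in for recognizing a tune
    from a fragment played out of context."""
    tune = ngrams(intervals(tune_notes), n)
    phrase = ngrams(intervals(phrase_notes), n)
    return bool(phrase) and phrase <= tune

tune = [60, 62, 64, 65, 67, 69, 71, 72]   # an ascending major scale
middle_phrase = [64, 65, 67, 69, 71]      # lifted from the middle
transposed = [52, 53, 55, 57, 59]         # same phrase, another key
```

Note what this sketch deliberately leaves out: the time intervals between notes, which the post rightly says are part of what defines a melody.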

Thinking about anticipating the next note in a tune reminds me of my primary goal: a representation that’s capable of simulating the world by assembling predictions. State A usually leads to state B, so if I imagine state A, state B will come to mind next and I’ll have a sense of personal narrative. I’ll be able to plan, speculate, tell myself stories, relive a past event, relive it as if I’d said something wittier at the time, etc. Predictions are a kind of association too, but between what? A moving 45-degree line at one spot on the retina tends to lead to the sensation of a 45-degree line at another spot, shortly afterwards. That’s a predictive association and it’s easy to imagine how such a thing can become encoded in the brain. But turkeys don’t lead to Christmas. More general predictions arise out of situations, not objects. If you see a turkey and a butcher, and catch a glint in the butcher’s eye, then you can probably make a prediction, but what are the rules that are encoded here? What kind of representation are we dealing with?

“Going to the dentist hurts” is another kind of association. “I love that woman” is of a similar kind. These are affective associations and all the evidence shows that they’re very important, not only for the formation of memories (which form more quickly and thoroughly when there’s some emotional content), but also for the creation of goal-directed behavior. We tend to seek pleasure and avoid pain (and by the time we’re grown up, most of us can even withstand a little pain in the expectation of a future reward).

A plan is the predictive association of events and situations, leading from a known starting point to a desired goal, taking into account the reward and punishment (as defined by affective associations) along the route. So now we have two kinds of association that interact!

To some extent I can see that the meaning of an associative link is determined by what kind of thing it is linking. The links themselves may not be qualitatively different – it’s just the context. Affective associations link memories (often episodic ones) with the emotional centers of the brain (e.g. the amygdala). Objects can be linked to actions (a hammer is associated with a particular arm movement). Situations predict consequences. Cognitive maps link objects with their locations. Linguistic areas link objects, actions and emotions with nouns, verbs and adjectives/adverbs. But there do seem to be some questions about the nature of these links and to what extent they differ in terms of circuitry.

Then there’s the question of temporary associations. And deliberate associations. Remembering where I left my car keys is not the same as recording the fact that divorce is unpleasant. The latter is a semantic memory and the former is episodic, or at least declarative. Tomorrow I’ll put my car keys down somewhere else, and that will form a new association. The old one may still be there, in some vague sense, and I may one day develop a sense of where I usually leave my keys, but in general these associations are transient (and all too easily forgotten).

Binding is a form of temporary association. That ball is green; there’s a person to my right; the cup is on the table.

And attention is closely connected with the formation or heightening of associations. For instance, in Creatures I had a concept called “IT”. “IT” was the object currently being attended to, so if a norn shifted its attention, “IT” would change, and if the norn decided to “pick IT up”, the verb knew which noun to apply to. In a more sophisticated artificial brain, this idea has to be more comprehensive. We may need two or more ITs, to form the subject and object of an action. We need to remember where IT is, in various coordinate frames, so that we can reach out and grab IT or look towards IT or run away from IT. We need to know how big IT is, what color IT is, who IT belongs to, etc. These are all associations.
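
The Creatures-style “IT” binding can be sketched in a few lines: attention selects an object, and verbs then resolve against whatever is currently bound (class and method names here are my own shorthand, not the actual Creatures implementation):

```python
class Attention:
    """Toy version of the Creatures 'IT' mechanism: one attended
    object at a time, which verbs implicitly apply to."""

    def __init__(self):
        self.it = None            # the object currently attended to

    def attend(self, obj):
        # Shifting attention rebinds IT.
        self.it = obj

    def act(self, verb):
        # The verb knows which noun to apply to - it's always IT.
        if self.it is None:
            return None
        return f"{verb} {self.it}"

norn = Attention()
norn.attend("ball")
action = norn.act("pick up")
```

The post’s point that a richer brain needs two or more ITs (subject and object), plus IT’s location in several coordinate frames, would mean replacing the single `self.it` slot with a small set of such bindings.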

Perhaps there are large-scale functional associations, too. In other words, data from one space can be associated with another space temporarily to perform some function. What came to mind that made me think of this is the possibility that we have specialized cortical machinery for rotating images, perhaps developed for a specific purpose, and yet I can choose, any time I like, to rotate an image of a car, or a cat, or my apartment. If I imagine my apartment from above, I’m using some kind of machinery to manipulate a particular set of data points (after all, I’ve never seen my apartment from above, so this isn’t memory). Now I’m imagining my own body from above – I surely can’t have another machine for rotating bodies, so somehow I’m routing information about the layout of my apartment or the shape of my body through to a piece of machinery (which, incidentally, is likely to be cortical and hence will have self-organized using the same rules that created the representation of my apartment and the ability to type these words). Routing signals from one place to another is another kind of association.

Language is interesting (I realize that’s a bit of an understatement!). I don’t believe the Chomskyan idea that grammar is hard-wired into the brain. I think that’s missing the point. I prefer the perspective that the brain is wired to think, and grammar is a reflection of how the brain thinks. [noun][verb][noun] seems to be a fundamental component of thought. “Janet likes John.” “John is a boy.” “John pokes Janet with a stick.” Objects are associated with each other via actions, and both the objects and actions can be modulated (linguistically, adverbs modulate actions; adjectives modify or specify objects). At some level all thought has this structure, and language just reflects that (and allows us to transfer thoughts from one brain to another). But the level at which this happens can be very far removed from that of discrete symbols and simple associations. Many predictions can be couched in linguistic terms: IF [he] [is threatening] [me] AND [I] [run away from] [him] THEN [I] [will be] [safe]. IF [I] [am approaching] [an obstacle] AND NOT ([I] [turn]) THEN [I] [hurt]. But other predictions are much more fluid and continuous: In my head I’m imagining water flowing over a waterfall, turning a waterwheel, which turns a shaft, which grinds flour between two millstones. I can see this happening – it’s not just a symbolic statement. I can feel the forces; I can hear the sound; I can imagine what will happen if the water flow gets too strong and the shaft snaps. Symbolic representations and simple linear associations won’t cut it to encode such predictive power. I have a real model of the laws of physics in my head, and can apply it to objects I’ve never even seen before, then imagine consequences that are accurate, visual and dynamic. So at one level, grammar is a good model for many kinds of association, including predictive associations, but at another it’s not. Are these the same processes – the same basic mechanism – just operating at different levels of abstraction, or are they different mechanisms?

These predictions are conditional. In the linguistic examples above, there’s always an IF and a set of conditionals. In the more fluid example of the imaginary waterfall, there are mathematical functions being expressed, and since a function has dependent variables, this is a conditional concept too. High-level motor actions are also conditional: walking consists of a sequence of associations between primitive actions, modulated by feedback and linked by conditional constructs such as “do until” or “do while”.
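Just to make the conditional idea concrete, one of the linguistic-style predictions above could be sketched as a rule with a set of conditions and a predicted consequence. Everything here (the function names, the situation keys) is purely illustrative scaffolding, not a proposed design:

```python
# A prediction as a conditional association: IF <conditions> THEN <consequence>.
# All names here are hypothetical illustrations, not an actual design.

def make_rule(conditions, consequence):
    """Bundle a set of condition predicates with a predicted outcome."""
    return {"conditions": conditions, "consequence": consequence}

def predict(rule, situation):
    """Fire the rule only if every condition holds in the current situation."""
    if all(cond(situation) for cond in rule["conditions"]):
        return rule["consequence"]
    return None

# IF [I] [am approaching] [an obstacle] AND NOT ([I] [turn]) THEN [I] [hurt]
rule = make_rule(
    conditions=[
        lambda s: s["approaching_obstacle"],
        lambda s: not s["turning"],
    ],
    consequence="hurt",
)

print(predict(rule, {"approaching_obstacle": True, "turning": False}))  # hurt
print(predict(rule, {"approaching_obstacle": True, "turning": True}))   # None
```

The fluid, waterwheel kind of prediction is exactly what this sort of discrete structure fails to capture, which is the point being made above.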

So, associations can be formed and broken, switched on and off, made dependent on other associations, apply specifically or broadly, embody sequence and timing and probability, form categories and hierarchies or link things without implying a unifying concept. They can implement rules and laws as well as facts. They may or may not be commutative. They can be manipulated top-down or formed bottom-up… SOMEHOW all this needs to be incorporated into a coherent scheme. I don’t need to understand how the entire human brain works – I’m just trying to create a highly simplified animal-like brain for a computer game. But brains do some impressive things (nine-tenths of which most AI researchers and philosophers forget about when they’re coming up with new theories). I need to find a representation and a set of mechanisms for defining associations that have many of these properties, so that my creatures can imagine possible futures, plan their day, get from A to B and generalize from past experiences. So far I don’t have any great ideas for a coherent and elegant scheme, but at least I have a list of requirements, now.
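As a way of pinning down that requirements list, here is what a single association record might need to carry: strength that can fade or be reinforced, probability, timing, commutativity, an on/off switch, and dependence on other associations. All the names and slots are mine, purely a way of enumerating the requirements in code:

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical record for one association, with slots for the properties
# listed above. This is a checklist in code form, not a proposed mechanism.
@dataclass
class Association:
    source: str                      # what this association is from
    target: str                      # what it points to
    strength: float = 1.0            # can be reinforced or allowed to fade
    probability: float = 1.0         # how reliably the link holds
    delay: float = 0.0               # sequence/timing: target follows source
    commutative: bool = False        # does target -> source hold as well?
    active: bool = True              # associations can be switched on and off
    depends_on: Optional["Association"] = None  # conditional on another link

    def applies(self) -> bool:
        """An association applies only if it is switched on and any
        association it depends on applies too."""
        if not self.active:
            return False
        if self.depends_on is not None:
            return self.depends_on.applies()
        return True

wet = Association("rain", "ground is wet")
slippery = Association("ground is wet", "path is slippery", depends_on=wet)
print(slippery.applies())  # True
wet.active = False         # switch the underlying association off...
print(slippery.applies())  # False - the dependent one stops applying too
```

Even this toy version shows how quickly the bookkeeping multiplies once associations can depend on other associations.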

I think the next thing to do is think more about the kinds of representation I need – how best to represent and compute things like where the creature is in space, what kind of situation it is in, what the properties of objects are, how actions are performed. Even though I’d like most of this to emerge spontaneously, I should at least second-guess it to see what we might be dealing with. If I lay out a map of the perceptual and motor world, maybe the links between points on this map (representing the various kinds of associations) will start to make sense.

Or I could go for a run. Yes, I like that thought better.

Brainstorm 4 – squishing hyperspace

Ok, back to work. I wanted to expand on what I was saying about the cortex as a map of the state of the world, before I get onto the topic of associations.

Imagine the brain as a ten-million-dimensional hypercube. Got that?

Hmm, maybe I should backtrack a bit. Let’s suppose that the brain has a total of ten million sensory inputs and motor outputs (each one being a nerve fiber coming in from the skin, the retina, the ear, etc., or going out to a muscle or gland). For the sake of argument (and I appreciate the dangers in this over-simplification), imagine that each nerve signal can have one of 16 amplitudes. Every single possible experience that a human being is capable of having is therefore representable as a point in a ten-million-dimensional graph, and since we have only 16 points per axis we need only 16 raised to the power of ten million points to represent everything that can happen to us (including all the things we could possibly do to the world, although we probably need to factor in another few quadrillion points to account for our internal thoughts and feelings).

(If you’re not used to this concept of phase space, imagine that the brain has only two inputs and one output. A three-dimensional graph would therefore be enough to represent every possible combination of those values: the value of input 1 is a distance along the X-axis, input 2 is along the Y-axis and the output value is along the Z-axis. Where these three lines meet is the point that represents this unique state. A change of state is represented by an arrow connecting two points. Everything that can happen to that simplified brain – every experience and thought and reaction it is capable of – can be described by points, lines and surfaces within that space. It’s a powerful way to think about many kinds of system, not just brains. OK, so now just expand that model and imagine it in 10,000,000-dimensional space and you’re in business!)
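For the curious, that toy three-dimensional version is trivial to play with in code, and it also shows why the full-sized version is hopeless to enumerate. The numbers below assume the 16-amplitude simplification from above:

```python
from itertools import product
import math

# Toy phase space: a "brain" with two inputs and one output, each taking one
# of 16 amplitudes. Every possible state is a point in a 16 x 16 x 16 space.
LEVELS = 16
all_states = list(product(range(LEVELS), repeat=3))
print(len(all_states))  # 4096 = 16**3, small enough to enumerate

# A change of state is an arrow connecting two points:
before = (3, 7, 0)
after = (3, 8, 1)
arrow = (before, after)

# The ten-million-dimensional version has 16**10_000_000 points. Just the
# number of decimal digits in that count:
digits = int(10_000_000 * math.log10(16)) + 1
print(digits)  # about 12 million digits
```

Enumerating the toy space takes microseconds; merely writing down the size of the real one takes a twelve-million-digit number.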

Er, so that’s quite a big number. If each point were represented by an atom, the entire universe would get completely lost in some small dark corner of this space and never be seen again. Luckily for us, no single human being ever actually experiences more than an infinitesimal fraction of it. When did you last stand on one foot, scratching your left ear, looking at a big red stripe surrounded by green sparkles, whistling the first bar of the Hallelujah Chorus? Not lately, I’m guessing. So we only need to represent those states we actually experience, and then only if they turn out to be useful in some way. Of course we don’t immediately know whether they’re going to turn out useful, so we need a way to represent them as soon as we experience them and then forget them again if they turn out to be irrelevant.

Thus far, this is the line of thinking that I used when I designed the Creatures brains. Inside norns, neurons wire themselves up to represent short permutations of input patterns as they’re experienced, and then connect to other neurons representing possible output patterns. Pairs of neurons equate to points in the n-dimensional space of a norn’s brain, but only a small fraction of that possible space needs to be represented in one creature’s lifetime. These representations fade out unless they get reinforced by punishment or reward chemicals, and the neural network learns to associate certain input patterns with the most appropriate output signal. All these experiences compete with each other for the right to be represented, such that only the most relevant remain and old memories are wiped out if more space is needed. There’s also an implicit hierarchy in the representations (due to the existence of simpler permutations) that allows the norns to generalize – they have a hunch about how to react to new situations, based on previous similar ones.
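The store-only-what-you-experience idea can be sketched in a few lines: visited points go into a sparse store, everything decays a little each experience, reinforcement strengthens, and the weakest memory is evicted when space runs out. This is a hypothetical toy, not the actual Creatures implementation:

```python
# Sketch of sparse experience storage: only states actually visited are
# represented; they fade unless reinforced and are evicted under pressure.
# All numbers (decay rate, capacity) are arbitrary illustrations.
class SparseMemory:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.strength = {}   # state (tuple) -> strength

    def experience(self, state, reward=0.0):
        # Decay everything a little, then reinforce the current state.
        for s in self.strength:
            self.strength[s] *= 0.9
        self.strength[state] = self.strength.get(state, 0.0) + 1.0 + reward
        # Evict the weakest memory if over capacity.
        if len(self.strength) > self.capacity:
            weakest = min(self.strength, key=self.strength.get)
            del self.strength[weakest]

mem = SparseMemory(capacity=2)
mem.experience((1, 2, 3), reward=1.0)   # rewarded, so it persists
mem.experience((4, 5, 6))
mem.experience((7, 8, 9))               # over capacity: weakest is evicted
print(sorted(mem.strength))             # the rewarded state survives
```

The competition for representation falls out of the eviction rule: only the most reinforced or most recent states keep their place.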

There’s a great deal more complexity to the norns’ brains than this and I managed to solve some quite interesting problems. I’m not sure that anyone else has designed such a comprehensive artificial brain and actually made it work, either before or in the 18 years since. Nevertheless, basically this design was a pile of crap. For one thing, there was no order to this space. Point 1,2,3 wasn’t close to point 1,2,4 in the phase space – the points were just in a list, essentially, and there was no geometry to the space. The creatures’ brains were capable of limited generalization because of the hierarchy (too long a story for now) but I really wanted generalization to fall out of the spatial relationships: if you don’t know what to do in response to situation x,y,z, try stimulating the neighboring points, because they represent qualitatively similar situations and you may already have learned how best to react to them. The sum of these “recommendations” is a good bet for how to react to this novel situation. Sometimes this won’t be true, in fact, and that requires the brain to draw boundaries between things that are similar and yet require different responses (a toy alligator is very similar to a real one, and yet…). This is called categorization (and comes in two flavors: perceptual and functional – my son Chris did his PhD on functional categorization). Anyway, basically, we need the n-dimensional phase space to be collapsed down (or projected) into two dimensions (assuming the neural network is a flat sheet), such that representations of similar situations end up lying near to each other.
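To make the neighbor-stimulation idea concrete: if the state space has real geometry, a novel situation can poll nearby learned points and sum their recommendations. A toy sketch (the radius, weighting and state tuples are all arbitrary choices of mine):

```python
# Generalization via geometry: if a novel situation has no learned response,
# consult nearby points that do, weighting closer neighbors more heavily.
# This only works at all because the space has a meaningful distance metric.
def recommend(learned, novel, radius=1.5):
    """learned: dict mapping state tuples to a response label."""
    votes = {}
    for state, response in learned.items():
        dist = sum((a - b) ** 2 for a, b in zip(state, novel)) ** 0.5
        if dist <= radius:
            votes[response] = votes.get(response, 0.0) + 1.0 / (1.0 + dist)
    return max(votes, key=votes.get) if votes else None

learned = {(1, 1, 2): "flee", (1, 1, 3): "flee", (5, 5, 5): "approach"}
print(recommend(learned, (1, 1, 2.5)))  # flee: both near neighbors agree
print(recommend(learned, (9, 9, 9)))    # None: nothing nearby to consult
```

The toy-alligator problem shows up here too: sometimes the nearest neighbors give exactly the wrong recommendation, which is where category boundaries have to come in.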

(At this point, some of you may be astute enough to ask: why collapse n dimensions down to two at all? The human cortex is a flat sheet, so biology has little choice, but we can represent any number of dimensions in a computer with as much ease as two. This is true, but only in principle. In practice, computers are nowhere near big enough to hold a massively multi-dimensional array of 16 elements per dimension (say we only need a mere one hundred dimensions – that’s already 2×10^111 gigabytes!), so we have to find some scheme for collapsing the space while retaining some useful spatial relationships. It could be a list, but why not a 2D surface, since that’s roughly what the brain uses and hence we can look for hints from biology?)

There is no way to do this by simple math alone, because to represent even three dimensions on a two-dimensional surface, the third dimension needs to be broken up into patches and some contiguity will be lost. For instance, imagine a square made from 16×16 smaller squares, each of which is made from 16 stripes. This flattens a 16×16×16 cube into two dimensions. But although point 1,1,2 is close to point 1,1,3 (they’re on neighboring stripes), it’s not close to point 1,2,2, because other stripes get in the way. You can bring these closer together by dividing the space up in a different way, but that just pushes other close neighbors apart instead. Which is the best arrangement as far as categorization and generalization are concerned? One arrangement might work best in some circumstances but not others. When you try to project a 16×16×16×16×16×16×16-point hypercube into two dimensions this becomes a nightmare.
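The stripe arrangement is easy to check in code. Assuming the 16 stripes for each (x, y) cell are stacked in the y direction (one possible layout of the scheme described above), distances on the flattened sheet come out exactly as claimed:

```python
# Flatten a 16x16x16 cube onto a 2D sheet: each (x, y) cell becomes a band
# of 16 stripes, one stripe per z value. One arbitrary choice of layout.
def flatten(x, y, z):
    return (x, y * 16 + z)   # (column, row) on the 2D sheet

def sheet_distance(p, q):
    (c1, r1), (c2, r2) = flatten(*p), flatten(*q)
    return ((c1 - c2) ** 2 + (r1 - r2) ** 2) ** 0.5

# Neighboring stripes stay adjacent on the sheet...
print(sheet_distance((1, 1, 2), (1, 1, 3)))  # 1.0
# ...but a single step in y has to jump over 16 stripes:
print(sheet_distance((1, 1, 2), (1, 2, 2)))  # 16.0
```

Swap which axis gets striped and the problem just moves: some other pair of true neighbors ends up 16 stripes apart instead.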

The real brain clearly tries its best to deal with this problem by self-organizing how it squishes 10,000,000 dimensions into two. You can see this in primary visual cortex, where the 2D cortical map is roughly divided up retinotopically (i.e. matching the two-dimensional structure of the retina, and hence the visual scene). But within this representation there are whorls (not stripes, although stripes are found elsewhere) in which a third and fourth dimension (edge-orientation and direction of motion) are represented. Orientation is itself a collapsing down of two spatial dimensions – simply recording the angle of a line instead of the set of points that make it up (that’s partly what a neuron does – it describes a spatial pattern of inputs by a single point). Here we see one of the many clever tricks that the brain uses: The visual world (at least as far as the change-sensitive nature of neurons is concerned) is made up of line segments. Statistically, these are more common than other arbitrary patterns of dots. So visual cortex becomes tuned to recognize only these patterns and ignore all the others (at least in this region – it probably represents textures, etc. elsewhere). The brain is thus trying its best, not only to learn the statistical properties and salience of those relatively few points its owner actually visits in the ten-million-dimensional world of experience, but also to represent them in a spatial arrangement that best categorizes and associates them. It does this largely so that we don’t have to learn something all over again, just because the situation is slightly different from last time.

So, finding the best mechanism for projecting n-dimensional space into two or three dimensions, based on the statistics and salience of stimuli, is part of the challenge of designing an artificial brain. That much I think I can do, up to a point, although I won’t trouble you with how, right now.
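For what it’s worth, one well-known mechanism that does this kind of statistics-driven projection is Kohonen’s self-organizing map: a 2D sheet of units that arranges itself so similar high-dimensional inputs land on nearby units. A minimal sketch, with toy parameters of my own choosing, and not necessarily the mechanism I’ll actually use:

```python
import random

# Minimal Kohonen-style self-organizing map: a small 2D grid of units learns
# to arrange high-dimensional inputs so similar inputs map to nearby units.
# Grid size, learning rate and radius are arbitrary toy values.
random.seed(0)
GRID, DIM = 6, 5                       # 6x6 sheet, 5-dimensional inputs
weights = {(i, j): [random.random() for _ in range(DIM)]
           for i in range(GRID) for j in range(GRID)}

def best_match(x):
    """Find the unit whose weight vector is closest to input x."""
    return min(weights, key=lambda u: sum((w - v) ** 2
                                          for w, v in zip(weights[u], x)))

def train(samples, steps=2000, lr=0.3, radius=2.0):
    for t in range(steps):
        x = random.choice(samples)
        bi, bj = best_match(x)
        # Pull the winner and its sheet-neighbors toward the input.
        for (i, j), w in weights.items():
            if (i - bi) ** 2 + (j - bj) ** 2 <= radius ** 2:
                influence = lr * (1 - t / steps)   # decaying learning rate
                for k in range(DIM):
                    w[k] += influence * (x[k] - w[k])

# Two clusters of similar inputs should end up in different neighborhoods.
a = [[1, 0, 0, 0, 0], [0.9, 0.1, 0, 0, 0]]
b = [[0, 0, 0, 0, 1], [0, 0, 0, 0.1, 0.9]]
train(a + b)
print(best_match(a[0]), best_match(b[0]))
```

The appeal is that the arrangement emerges from the statistics of the inputs themselves, which is roughly what the retinotopic maps and orientation whorls in visual cortex seem to be doing.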

I will just mention in passing that there’s a dangerous assumption that we should be aware of. The state space of the brain is discrete, because information arrives and leaves via a discrete number of nerve fibers. The medium for representing this state space is also discrete – a hundred billion neurons. HOWEVER, this doesn’t mean the representation itself is discrete. I suspect the real brain is so densely wired that it approximates a continuous medium, and this is important for a whole host of things. It’s probably very wrong to implicitly equate one neuron with one point in the space or one input pattern. Probably the information in the brain is stored holistically, and each neuron makes a contribution to multiple representations, while each representation is smeared across many (maybe very many) neurons. How much I need to, or can afford to, take account of this for such a pragmatic design remains to be seen. It may be an interesting distraction or it may be critical.
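To illustrate what “smeared across many neurons” could mean, here is a toy distributed memory: each pattern is a long random ±1 vector, and the whole memory is just the elementwise sum of everything stored. This superposition trick is a standard demonstration of distributed storage, not a proposal for the actual design:

```python
import random

# Distributed storage sketch: no single unit holds any one memory; every
# unit contributes a little to all of them, yet stored patterns are still
# recoverable by correlation. Sizes here are arbitrary.
random.seed(1)
N = 2000                                # number of "neurons"

def random_pattern():
    return [random.choice((-1, 1)) for _ in range(N)]

patterns = [random_pattern() for _ in range(5)]
memory = [sum(col) for col in zip(*patterns)]   # superpose all five

def familiarity(p):
    """Correlation between a probe pattern and the whole memory trace."""
    return sum(m * v for m, v in zip(memory, p)) / N

stored = familiarity(patterns[0])       # near 1: part of the trace
novel = familiarity(random_pattern())   # near 0: never stored
print(stored > 0.7, abs(novel) < 0.3)   # True True
```

Delete any single “neuron” and every memory degrades slightly but none is lost outright, which is exactly the property a discrete one-point-per-neuron scheme lacks.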

Anyway, besides this business of how best to represent the state space of experience, there are other major requirements I need to think about. In Creatures, the norns were reactive – they learned how best to respond to a variety of situations, and when those situations arose in future, this alone would trigger a response. They were thus stimulus-response systems. Yeuch! Nasssty, nassty behaviourist claptrap! Insects might (and only might) work like that, but humans certainly don’t (except in the more ancient parts of our brains). Probably no mammals do, nor birds. We THINK. We have internal states that change over time, even in the absence of external changes. Our thoughts are capable of linking things up in real-time, to create routes and plans and other goal-directed processes. Our “reactions” are really pre-actions – we don’t respond to what’s just happened but to what we believe is about to happen. We can disengage from the world and speculate, hope, fear, create, invent. How the hell do we do this?

Well, the next step up, after self-organizing our representations, is to form associations between them. After that comes dynamics – using these associations to build plans and speculations and to simulate the world around us inside the virtual world of our minds. This post has merely been a prelude to thinking about how we might form associations, how these relate to the underlying representations, what these associations need to be used for, and how we might get some kind of dynamical system out of this, instead of just a reactive one. I just wanted to introduce the notion of state space for those who aren’t used to it, and talk a little about collapsing n-dimensional space into fewer dimensions whilst maximizing utility. Up until now I’ve just been bringing you up to speed. From my next post onward I’ll be feeling my own way forward. Or maybe just clutching at straws…