Blowing my own trumpet

Okay, try not to cringe, but I really need your help. In the interests of full disclosure, that means money. Or if not money then influence. Please nicely.

I’ll just hit you with the funding pitch right off the bat. There’s a fancy widget I’m supposed to be able to embed in my blog but it doesn’t work in this theme, so here instead is a good old-fashioned hyperlink. Click on the image to go to Kickstarter.

This is the first chance I’ve had to blog about it, because it’s taken off a lot more quickly than I expected and I’ve had a lot of people to thank and queries to field! It’s only the end of Day Two as I write this and the total is already over $11,000, much to my amazement and thanks especially to some extremely generous donors. I think there’s a real chance we can make this happen, with your help. Which is just as well, because I’ve almost completely used up my own resources after all these years of self-funded research and this is the only way I can continue with my work.

If you’ve already pledged then thank you SO MUCH! I really, really appreciate it. If you haven’t and you’d like to then that’s fantastic. My Creatures game inspired quite a lot of people to think differently about life, and even caused a number of them to take up scientific careers. I’m pretty sure this game will do the same, so it’s in a good cause as well as hopefully being fun. If you aren’t in a position to pledge then I quite understand – I’m not either! – but if you can help spread the word by tweeting, blogging, facebooking or pinning notices to telegraph poles then I really appreciate that too. The wider the news spreads, the more chance I have. Thank you.

Oh, and I see 600 people visited my blog today, which is a fair bit higher than usual, so if you came here via Kickstarter then I’m delighted to see you. I hope you’ll come back! :-)

Incidentally, earlier posts about the design of the artificial brain for this project can be found here, here, here, here, here, here and here. After that I went a bit quiet because I got stuck on a problem that was too complex even to tell you about. But I think I have the answer to that now. After months of banging my head against the wall it just came to me – poof! – while I was driving through the desert thinking about something else. Don’t you just love it when that happens?

[Edit: I fixed the links – whoops.]

So how’s it going?

Just a short post to say that I’m going to tweet my programming journal in real-time, as I work on my new game, so if any of you are fellow Twits, feel free to follow @enchantedloom. I don’t really understand Twitter yet, and 140 characters is just not ‘me’ somehow, but it seems like a good way to keep my nose to the grindstone (or avoid any actual work, possibly) and at the same time let you guys know how things are going. I’d appreciate the company, so see you in Twit-land maybe!

Brainstorm 5: joining up the dots

I promised myself I’d blog about my thoughts, even if I don’t really have any and keep going round in circles. Partly I just want to document the creative process honestly – so this includes the inevitable days when things aren’t coming together – and partly it helps me if I try to explain things to people. So permit me to ramble incoherently for a while.

I’m trying to think about associations. In one sense the stuff I’ve already talked about is associative: a line segment is an association between a certain set of pixels. A cortical map that recognizes faces probably does so by associating facial features and their relative positions. I’m assuming that each of these things is then denoted by a specific point in space on the real estate of the brain – oriented lines in V1 and faces in the FFA. In both these cases there are several features at one level, which are associated and brought together at a higher level. A bunch of dots maketh one line. Two dark blobs and a line in the right arrangement maketh a face. A common assumption (which may not be true) is that neurons do this explicitly: the dendritic field of a visual neuron might synapse onto a particular pattern of LGN fibres carrying retinal pixel data. When this pattern of pixels becomes active, the neuron fires. That specific neuron – that point on the self-organizing map – therefore means “I can see a line at 45 degrees in this part of the visual field.”
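To pin the “line-detector neuron” idea down, here’s a toy sketch in Python. Everything in it – the grid size, the weights, the threshold – is invented for illustration; it’s not a claim about real neurons (or about how my game will do it):

```python
import numpy as np

def make_line_weights(size=5):
    """Weights favouring the main diagonal (a 45-degree line of pixels)."""
    w = -0.1 * np.ones((size, size))   # mild inhibition everywhere else
    np.fill_diagonal(w, 1.0)           # excitation along the diagonal
    return w.ravel()

def neuron_fires(pixels, weights, threshold=3.0):
    """Fire iff the weighted sum of active pixels exceeds the threshold."""
    return float(pixels.ravel() @ weights) > threshold

line = np.eye(5)                 # pixels forming a 45-degree line
blob = np.zeros((5, 5))
blob[2, 1:4] = 1.0               # a horizontal stroke instead

print(neuron_fires(line, make_line_weights()))   # True
print(neuron_fires(blob, make_line_weights()))   # False
```

A real dendritic field would be learned rather than hand-wired, and much messier, but the principle is the one described above: the unit fires when, and only when, its particular permutation of inputs is active.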

But the brain also supports many other kinds of associative link. Seeing a fir tree makes me think of Christmas, for instance. So does smelling cooked turkey. Is there a neuron that represents Christmas, which synapses onto neurons representing fir trees and turkeys? Perhaps, perhaps not. There isn’t an obvious shift in levels of representation here.

Not only do turkeys make me think of Christmas, but Christmas makes me think of turkeys. That implies a bidirectional link. Such a thing may actually be a general feature, despite the unidirectional implication of the “line-detector neuron” hypothesis. If I imagine a line at 45 degrees, this isn’t just an abstract concept or symbol in my mind. I can actually see the line. I can trace it with my finger. If I imagine a fir tree I can see that too. So in all likelihood, the entire abstraction process is bidirectional and thus features can be reconstructed top-down, as well as percepts being constructed/recognized bottom-up.
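Bidirectional recall of this kind is what the classic associative-memory models capture: if a link is stored symmetrically, either end can cue the other. A toy sketch, with the concepts and strengths entirely made up:

```python
import numpy as np

# Symmetric Hebbian links: either member of a pair can recall the other.
concepts = ["fir_tree", "turkey", "christmas", "danger", "red"]
idx = {c: i for i, c in enumerate(concepts)}
n = len(concepts)
W = np.zeros((n, n))

def associate(a, b, strength=1.0):
    """Hebbian link; the symmetry is what makes it bidirectional."""
    W[idx[a], idx[b]] += strength
    W[idx[b], idx[a]] += strength

def recall(cue):
    """Activate the cue and return the most strongly linked concept."""
    return concepts[int(np.argmax(W[idx[cue]]))]

associate("fir_tree", "christmas")
associate("turkey", "christmas")
associate("red", "danger")

print(recall("fir_tree"))    # christmas
print(recall("christmas"))   # fir_tree (first of its two equal links)
```

Note that “christmas” here isn’t a higher-level percept built from trees and turkeys – it’s just a node with symmetric links, which matches the feeling that there’s no obvious shift in representational level.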

But even so, loose associations like “red reminds me of danger” don’t sound like the same sort of association as “these dots form a line”. A line has a name – it’s a 45-degree line at position x,y – but what would you call the concept that red reminds me of danger? It’s just an association, not a thing. There’s no higher-level concept for which “red” and “danger” are its characteristic features. It’s just a nameless fact.

How about a melody? I know hundreds of tunes, and the interesting thing is, they’re all made from the same set of notes. The features aren’t what define a melody, it’s the temporal sequence of those features; how they’re associated through time. Certainly we can’t imagine there being a neuron that represents “Auld Lang Syne”, whose dendrites synapse onto our auditory cortex’s representations of the different pitches that are contained in the tune. The melody is a set of associations with a distinct sequence and a set of time intervals. If someone starts playing the tune and then stops in the middle I’ll be troubled, because I’m anticipating the next note and it fails to arrive. Come to that, there’s a piano piece by Rick Wakeman that ends in a glissando, and Wakeman doesn’t quite hit the last note. It drives me nuts, and yet how do I even know there should be another note? I’m inferring it from the structure. Interestingly, someone could play a phrase from the middle of “Auld Lang Syne” and I’d still be able to recognize it. Perhaps the tune is represented by many overlapping short pitch sequences? But if so, then this cluster of representations is collectively associated with its title and acts as a unified whole.
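The “overlapping short pitch sequences” idea can be sketched very simply: index every tune by its pitch trigrams, then let a phrase vote for the tunes whose trigrams it contains. The note lists below are invented placeholders, not the actual melodies:

```python
from collections import defaultdict, Counter

# Hypothetical tunes as note-name sequences (not the real melodies).
tunes = {
    "auld_lang_syne": ["G", "C", "B", "C", "E", "D", "C", "D"],
    "ode_to_joy":     ["E", "E", "F", "G", "G", "F", "E", "D"],
}

index = defaultdict(set)
for name, notes in tunes.items():
    for i in range(len(notes) - 2):
        index[tuple(notes[i:i + 3])].add(name)   # overlapping trigrams

def recognize(phrase):
    """Vote over the trigrams the phrase contains."""
    votes = Counter()
    for i in range(len(phrase) - 2):
        for name in index[tuple(phrase[i:i + 3])]:
            votes[name] += 1
    return votes.most_common(1)[0][0] if votes else None

# A phrase lifted from the middle of a tune still identifies it:
print(recognize(["C", "E", "D", "C"]))   # auld_lang_syne
```

The cluster of fragments acts as a unified whole – any handful of them is enough to light up the tune’s “title” – which is at least consistent with being able to recognize a melody from a phrase in the middle.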

Thinking about anticipating the next note in a tune reminds me of my primary goal: a representation that’s capable of simulating the world by assembling predictions. State A usually leads to state B, so if I imagine state A, state B will come to mind next and I’ll have a sense of personal narrative. I’ll be able to plan, speculate, tell myself stories, relive a past event, relive it as if I’d said something wittier at the time, etc. Predictions are a kind of association too, but between what? A moving 45-degree line at one spot on the retina tends to lead to the sensation of a 45-degree line at another spot, shortly afterwards. That’s a predictive association and it’s easy to imagine how such a thing can become encoded in the brain. But turkeys don’t lead to Christmas. More general predictions arise out of situations, not objects. If you see a turkey and a butcher, and catch a glint in the butcher’s eye, then you can probably make a prediction, but what are the rules that are encoded here? What kind of representation are we dealing with?

“Going to the dentist hurts” is another kind of association. “I love that woman” is of a similar kind. These are affective associations and all the evidence shows that they’re very important, not only for the formation of memories (which form more quickly and thoroughly when there’s some emotional content), but also for the creation of goal-directed behavior. We tend to seek pleasure and avoid pain (and by the time we’re grown up, most of us can even withstand a little pain in the expectation of a future reward).

A plan is the predictive association of events and situations, leading from a known starting point to a desired goal, taking into account the reward and punishment (as defined by affective associations) along the route. So now we have two kinds of association that interact!
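As a crude sketch, a plan of this kind is a search over predictive links, scoring routes by the affect encountered along them. Everything below – the states, the affect scores, the search itself – is illustrative only, and this greedy best-first search is only safe on tiny graphs like this one:

```python
import heapq

# Predictive associations: state -> [(consequence, affect)], where negative
# affect is pain and positive is reward. All names and numbers are invented.
transitions = {
    "hungry":        [("phone_pizza", +1), ("go_to_dentist", -5)],
    "phone_pizza":   [("eat", +5)],
    "go_to_dentist": [("eat", +2)],
    "eat":           [],
}

def plan(start, goal):
    """Best-first search maximizing summed affect along the route."""
    frontier = [(0.0, start, [start])]     # heap keyed on negated affect
    best = {}
    while frontier:
        neg, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, -neg
        if best.get(state, float("-inf")) >= -neg:
            continue
        best[state] = -neg
        for nxt, affect in transitions[state]:
            heapq.heappush(frontier, (neg - affect, nxt, path + [nxt]))
    return None

print(plan("hungry", "eat"))   # (['hungry', 'phone_pizza', 'eat'], 6)
```

The two kinds of association are visible in the data structure itself: the graph edges are predictive links, and the numbers riding on them are affective ones.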

To some extent I can see that the meaning of an associative link is determined by what kind of thing it is linking. The links themselves may not be qualitatively different – it’s just the context. Affective associations link memories (often episodic ones) with the emotional centers of the brain (e.g. the amygdala). Objects can be linked to actions (a hammer is associated with a particular arm movement). Situations predict consequences. Cognitive maps link objects with their locations. Linguistic areas link objects, actions and emotions with nouns, verbs and adjectives/adverbs. But there do seem to be some questions about the nature of these links and to what extent they differ in terms of circuitry.

Then there’s the question of temporary associations. And deliberate associations. Remembering where I left my car keys is not the same as recording the fact that divorce is unpleasant. The latter is a semantic memory and the former is episodic, or at least declarative. Tomorrow I’ll put my car keys down somewhere else, and that will form a new association. The old one may still be there, in some vague sense, and I may one day develop a sense of where I usually leave my keys, but in general these associations are transient (and all too easily forgotten).

Binding is a form of temporary association. That ball is green; there’s a person to my right; the cup is on the table.

And attention is closely connected with the formation or heightening of associations. For instance, in Creatures I had a concept called “IT”. “IT” was the object currently being attended to, so if a norn shifted its attention, “IT” would change, and if the norn decided to “pick IT up”, the verb knew which noun to apply to. In a more sophisticated artificial brain, this idea has to be more comprehensive. We may need two or more ITs, to form the subject and object of an action. We need to remember where IT is, in various coordinate frames, so that we can reach out and grab IT or look towards IT or run away from IT. We need to know how big IT is, what color IT is, who IT belongs to, etc. These are all associations.
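Here’s a sketch of what a richer “IT” might look like as a data structure: attention binds an attended object into a slot carrying its associations – position in more than one coordinate frame, size, colour, owner – and a verb finds its noun through the slot. All of the fields and names are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AttendedObject:
    name: str
    world_pos: Tuple[float, float]     # for navigating towards/away from IT
    retinal_pos: Tuple[float, float]   # for looking at or reaching for IT
    size: float
    colour: str
    owner: Optional[str] = None

@dataclass
class Attention:
    subject: Optional[AttendedObject] = None   # who is doing it
    obj: Optional[AttendedObject] = None       # the Creatures-style "IT"

    def pick_up(self):
        # The verb knows which noun to apply to via the attended slot.
        return f"picking up the {self.obj.colour} {self.obj.name}"

ball = AttendedObject("ball", (3.0, 4.0), (0.2, -0.1), 0.3, "green")
att = Attention(obj=ball)
print(att.pick_up())   # picking up the green ball
```

Having explicit subject and object slots is one way of letting a single action schema apply to whatever is currently bound, rather than wiring verbs to particular nouns.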

Perhaps there are large-scale functional associations, too. In other words, data from one space can be associated with another space temporarily to perform some function. What came to mind that made me think of this is the possibility that we have specialized cortical machinery for rotating images, perhaps developed for a specific purpose, and yet I can choose, any time I like, to rotate an image of a car, or a cat, or my apartment. If I imagine my apartment from above, I’m using some kind of machinery to manipulate a particular set of data points (after all, I’ve never seen my apartment from above, so this isn’t memory). Now I’m imagining my own body from above – I surely can’t have another machine for rotating bodies, so somehow I’m routing information about the layout of my apartment or the shape of my body through to a piece of machinery (which, incidentally, is likely to be cortical and hence will have self-organized using the same rules that created the representation of my apartment and the ability to type these words). Routing signals from one place to another is another kind of association.

Language is interesting (I realize that’s a bit of an understatement!). I don’t believe the Chomskyan idea that grammar is hard-wired into the brain. I think that’s missing the point. I prefer the perspective that the brain is wired to think, and grammar is a reflection of how the brain thinks. [noun][verb][noun] seems to be a fundamental component of thought. “Janet likes John.” “John is a boy.” “John pokes Janet with a stick.” Objects are associated with each other via actions, and both the objects and actions can be modulated (linguistically, adverbs modulate actions; adjectives modify or specify objects). At some level all thought has this structure, and language just reflects that (and allows us to transfer thoughts from one brain to another). But the level at which this happens can be very far removed from that of discrete symbols and simple associations. Many predictions can be couched in linguistic terms: IF [he] [is threatening] [me] AND [I] [run away from] [him] THEN [I] [will be] [safe]. IF [I] [am approaching] [an obstacle] AND NOT ([I] [turn]) THEN [I] [hurt]. But other predictions are much more fluid and continuous: in my head I’m imagining water flowing over a waterfall, turning a waterwheel, which turns a shaft, which grinds flour between two millstones. I can see this happening – it’s not just a symbolic statement. I can feel the forces; I can hear the sound; I can imagine what will happen if the water flow gets too strong and the shaft snaps. Symbolic representations and simple linear associations won’t cut it to encode such predictive power. I have a real model of the laws of physics in my head, and can apply it to objects I’ve never even seen before, then imagine consequences that are accurate, visual and dynamic. So at one level, grammar is a good model for many kinds of association, including predictive associations, but at another it’s not.
Are these the same processes – the same basic mechanism – just operating at different levels of abstraction, or are they different mechanisms?

These predictions are conditional. In the linguistic examples above, there’s always an IF and a set of conditionals. In the more fluid example of the imaginary waterfall, there are mathematical functions being expressed, and since a function has dependent variables, this is a conditional concept too. High-level motor actions are also conditional: walking consists of a sequence of associations between primitive actions, modulated by feedback and linked by conditional constructs such as “do until” or “do while”.
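The “do while” flavor of motor control can be caricatured in a few lines: a primitive action repeated under a conditional, with feedback from the world closing the loop. The numbers are arbitrary:

```python
def walk_to(target, position, step=1.0):
    """Repeat a primitive action until a condition is met ("do while")."""
    trace = []
    while abs(target - position) > step / 2:   # conditional construct
        direction = 1 if target > position else -1
        position += direction * step           # primitive action
        trace.append(position)                 # feedback: where am I now?
    return position, trace

pos, steps = walk_to(target=4.0, position=0.0)
print(pos, steps)   # 4.0 [1.0, 2.0, 3.0, 4.0]
```

The interesting part is that the sequence of primitive actions isn’t stored anywhere – it emerges from the conditional plus the feedback, which is presumably closer to how walking actually works than a fixed motor tape would be.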

So, associations can be formed and broken, switched on and off, made dependent on other associations, apply specifically or broadly, embody sequence and timing and probability, form categories and hierarchies or link things without implying a unifying concept. They can implement rules and laws as well as facts. They may or may not be commutative. They can be manipulated top-down or formed bottom-up… SOMEHOW all this needs to be incorporated into a coherent scheme. I don’t need to understand how the entire human brain works – I’m just trying to create a highly simplified animal-like brain for a computer game. But brains do some impressive things (nine-tenths of which most AI researchers and philosophers forget about when they’re coming up with new theories). I need to find a representation and a set of mechanisms for defining associations that have many of these properties, so that my creatures can imagine possible futures, plan their day, get from A to B and generalize from past experiences. So far I don’t have any great ideas for a coherent and elegant scheme, but at least I have a list of requirements, now.

I think the next thing to do is think more about the kinds of representation I need – how best to represent and compute things like where the creature is in space, what kind of situation it is in, what the properties of objects are, how actions are performed. Even though I’d like most of this to emerge spontaneously, I should at least second-guess it to see what we might be dealing with. If I lay out a map of the perceptual and motor world, maybe the links between points on this map (representing the various kinds of associations) will start to make sense.

Or I could go for a run. Yes, I like that thought better.

Brainstorm #2

Ye Gods! I’d better get in quickly with a second installment – I’ve already written more words in replies to comments than there were in my first post. Thanks so much to all of you who have contributed comments already – I only posted it yesterday! I really appreciate it and I hope you’ll continue to add thoughts and observations.

Opening up my thought processes like this is a risky and sometimes painful thing to do, and I know from past experience that certain things tend to happen, so I’d like to make a few general observations to forestall any misunderstandings.

Firstly, I know a lot of you have your own ambitions, theories and hopes in this area, and I’ll do what I can to accommodate them or read your papers or whatever. But bear in mind that I can’t please everybody – I have to follow my own path. So if I don’t go in a direction you’d like me to go, I apologize. I’ll try to explain my reasoning but inevitably I’m going to have to make my own choices.

Secondly, I do this kind of work because I believe I have some worthwhile insights already. I’m not desperately looking for ideas or existing theories – the people who invented these ideas are perfectly welcome to write their own games. This is a tricky area, because I like it when someone says “have you thought of doing XXX?” but I’m not so interested in “have you seen YYY theory or ZZZ’s work?” I just don’t work that way – I prefer to think things through from first principles – and I’m writing this game largely to develop my own ideas, rather than with the pragmatic aim of writing a commercial application by bolting together other people’s.

Lastly, I invariably develop software alone. Nobody has offered to help or asked for this to be open source yet, but I know it’s coming. I don’t do collaborations. Collaborations have driven me crazy (and almost bankrupt) in the past. I know there are loads of people who would love to be part of a project like this, but all I can suggest is that you go off together and write one, because it’s not for me. I’m opening it up because I know people find it interesting and I wanted to share the design process, but I’m not interested in working on the actual code with others. It’s just not my thing.

Oh, and I do realize this is ambitious. I know it may not work. But I’m not as naive as I look, either. I’ve written four commercial games and at least a dozen commercial titles in other fields, so I’m pretty competent in terms of software development and product design. And I’ve been working in AI since the late 1970s. Although it’s only my hobby, strictly speaking, I’m pretty well connected with the academic community and conversant with the state of the art. And I have an existence proof in Creatures, as long as you make allowances for the fact that I started writing it almost two decades ago. So don’t worry that I’m unwittingly being foolish and naive – I already know exactly how foolish I am!

Forgive me for saying these things up front – I really welcome and appreciate everybody’s support, thoughts, criticisms and general conversation. I just wanted to state a few ground rules, because it’s quite emotionally taxing to open up your innermost thought processes for inspection, and the provisional nature of everything can sometimes make it look like I’m floundering when really I’m just trucking along steadily.

Ok, so where to next? The features I mentioned yesterday were all aspects I’d like to see emerging from a common architecture. Jason admonished me to make sure I design a hierarchical brain, in which lower levels (equivalent to the thalamus and the brainstem) are fully functioning systems in their own right, and could be the complete brains of simpler animals as well as the evolutionary foundation for higher brain functions. I think this is important and a good point. The reptilian thalamus/limbic system probably works by manipulating more primitive reflexes in the brainstem. The cortex then unquestionably supervenes over the thalamus (for instance, if we deliberately wish to look in a particular direction, we quite probably do this by sending signals from the cortex (the frontal eye fields) to the superior colliculi in the midbrain, AS IF they were visual stimuli, thus causing the SC to carry out their normal unconscious duty of orienting the eyes towards a sudden movement). And finally, the prefrontal lobes of the cortex seem to supervene over an already functional set of subconscious impulses, motor and perceptual circuits in the rest of cortex, adding planning, the ability to defer reward, empathy and possibly subjective consciousness to the repertoire. So there are good reasons to follow this scheme myself.

But for now I’d like to think mostly about the cortical layer of the system. This is (perhaps) where memory plays the greatest role; where classification, categorization and generalization occur; and where prediction and the ability to generate simulations arises. I can assume that beneath this there are a bunch of reflexes and servoing subsystems that provide the outputs – I’ll worry about how to implement these later. But somehow I need to develop a coherent scheme for recognizing and classifying inputs and associating these with each other, both freely (as in “X reminds me of Y”) and causally (as in “if this is the trajectory that events have been taking, this is what I think will happen next”). Somehow these predictions need to iterate over time, so that the system can see into the future and ask “what if?” questions.
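At its very crudest, the iterated-prediction part is a transition table rolled forward: record which situation tends to follow which, then chain the predictions to imagine a possible future. A toy sketch, with the situations invented:

```python
from collections import defaultdict, Counter

# A stream of experienced situations (invented placeholders).
experience = ["wake", "hungry", "eat", "play", "tired", "sleep",
              "wake", "hungry", "eat", "tired", "sleep"]

counts = defaultdict(Counter)
for a, b in zip(experience, experience[1:]):
    counts[a][b] += 1          # "state A usually leads to state B"

def predict(state):
    """Most probable next state, by experience."""
    return counts[state].most_common(1)[0][0]

def imagine(state, horizon=4):
    """Iterate the prediction to simulate a possible future."""
    future = []
    for _ in range(horizon):
        state = predict(state)
        future.append(state)
    return future

print(imagine("wake"))   # ['hungry', 'eat', ...]
```

A first-order table like this obviously can’t capture situations, context or “what if?” branching – it’s here only to show the iteration step, where the output of one prediction becomes the input of the next.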

Let’s think about classification first. The ability to classify the world is crucial. It’s insufficient for intelligence, despite the huge number of neural nets, etc. that are nothing but classifier systems, but it’s necessary.

Here’s an assertion: let’s assume that the cortical surface is a map, such that, for any given permutation of sensory inputs, there will be a set of points on the surface that come to best represent that permutation.

It’s a set of points – a pattern – because I’m assuming this is a hierarchical system. If you hear a particular voice, a set of points of activity will light up in primary auditory cortex and elsewhere, representing the frequency spectrum of the voice, the time signature, the location, etc. Some other parts of auditory cortex will contain the best point to represent whose voice it is, based on those earlier points, or which word they just said. Other association areas deeper in the system will contain the points that best represent the combination of that person’s voice with their face, etc. Perhaps way off in the front there will be a point that best represents the entire current context – what’s going on. Other points in motor cortex represent things you might do about it, and they in turn will activate points lower down representing the muscle dispositions needed to carry out this action. So the brain will have a complex pattern of activation, but it’s reasonable to assert (I think) that EACH POINT ON THE CORTICAL SURFACE MAY BEST REPRESENT SOME GIVEN PERMUTATION OF INPUTS (INCLUDING CORTICAL ACTIVITY ELSEWHERE).

The cortex would therefore be a map of the state of the world. This is a neat assumption to work with, because it has several corollaries. For one thing, if the present state of the world is mapped out as such a pattern, then the future state, or the totally imagined state, or the intended state of the world can simultaneously be mapped out on the same real estate (perhaps using different cells in the same cortical columns). Having such a map allows the brain to specify world state in a variety of ways for a variety of reasons: sensation, perception, anticipation, intention, imagination and attention. Each is a kind of layer on the map, and they can be presumed to interact. So, for instance, the present state and recent past states give rise to the anticipated future state, via memories of probability derived from experience. Or attention can be guided by the sensory map and used to filter the perceptual or motor maps.

A second corollary might be that SIMILAR PERMUTATIONS TEND TO BE BEST REPRESENTED BY CLOSE NEIGHBORS. If this is true, then the system can generalize, simply by having some fuzziness in the neural activity pattern. If we experience a novel situation, it will give rise to activity centered over a unique point, but this point is close to other points representing similar, perhaps previously experienced situations. If we know how to react to them, we can guess that this is the best response to the novel situation too, and we can make use of this knowledge simply by stimulating all the points around the novel one.
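This is essentially what a Kohonen-style self-organizing map does, and a minimal sketch shows both corollaries at once: an input activates its best-matching point, and neighborhood learning makes nearby points come to represent similar inputs, which is where the generalization comes from. The map size, learning rate and radius below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.random((8, 8, 3))   # 8x8 map; each point holds a 3-feature vector

def best_matching_point(grid, x):
    """The point whose weight vector best represents this input."""
    d = np.linalg.norm(grid - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def train(grid, x, epochs=200, lr=0.2, radius=2.0):
    """Pull the winner AND its neighbors toward the input."""
    rows, cols = np.indices(grid.shape[:2])
    for _ in range(epochs):
        r, c = best_matching_point(grid, x)
        dist2 = (rows - r) ** 2 + (cols - c) ** 2
        h = np.exp(-dist2 / (2 * radius ** 2))   # neighborhood "dome"
        grid += lr * h[..., None] * (x - grid)

train(grid, np.array([0.9, 0.1, 0.1]))   # experience one kind of situation
# A similar, never-seen input lands in the same neighborhood of the map:
print(best_matching_point(grid, np.array([0.8, 0.2, 0.1])))
```

Because the trained neighborhood now best-represents anything resembling the experienced input, a response learned for the familiar case is automatically available for the novel one – generalization for free, just as argued above.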

When I say these are points on the cortical surface, I mean there will be an optimum point for each permutation, but the actual activity will be much more broad. I have a strong feeling that the brain works in a very convolved way – any given input pattern will activate huge swathes of neurons, but some more than others, such that the “center of gravity” of the activity is over the appropriate optimum point. I showed with Lucy that such large domes of activity can be used for both servoing and coordinate transforms (e.g. to orient the eyes and head towards a stimulus depending on where it is in the retinal field – a transform from retinal to head-centered coordinates). Smearing out the activity in this way also permits generalization, as above. But it’s a bummer to think about, because everything’s blurry and holographic!
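Reading out such a dome by its center of gravity is easy to sketch: a broad bump of activity over an array of neurons with graded preferences still decodes to roughly the right value, even with noise on top. The parameters are arbitrary:

```python
import numpy as np

positions = np.linspace(-1.0, 1.0, 101)   # each neuron's preferred value

def dome(stimulus, width=0.3):
    """Broad Gaussian bump of activity centered on the stimulus."""
    return np.exp(-((positions - stimulus) ** 2) / (2 * width ** 2))

def center_of_gravity(activity):
    """Decode the encoded value as the activity's center of gravity."""
    return float((positions * activity).sum() / activity.sum())

# A noisy, blurry dome still decodes to roughly the stimulus value:
a = dome(0.25) + 0.02 * np.random.default_rng(1).random(101)
print(center_of_gravity(a))   # recovers roughly 0.25
```

The blurriness that makes this scheme a bummer to think about is also what makes it robust: no single neuron has to be right, because the answer lives in the population.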

I have some nagging issues about all this but for now I’ll run with it. It’s a neat mechanism, and if biology doesn’t work this way then it damn well ought! It’s a good starting point, anyway. Lots of things fall out of it.

And I already have a mechanism that works for the self-organization of primary visual cortex and may be more generally applicable to this “classification by mapping” scheme. But that, and some questions and observations about categories and the collapse of phase space, can wait for next time!

EDIT: Just a little footnote on veracity: I like to be inspired by biology but this doesn’t mean I follow it slavishly. So if I assert that perhaps the cortex acts like a series of overlaid maps, I’ll have done so because it’s plausible and there’s some supportive evidence. But please remember that this is an engineering project – I’m not saying the cortex DOES work like this; only that it’s reasonably consistent with the facts and provides a useful hunch for designing an artificial brain. It’s a way of inventing, not discovering. So sometimes I say cortex and mean the real thing, and sometimes I’m talking about my hypothetical engineered one. I ought to use inverted commas really, but I hope you’ll infer the distinction.

Brainstorm #1

Ok, here goes…

Life has been rather complicated and exhausting lately. Not all of it bad by any means; some of it really good, but still rather all-consuming. Nevertheless, it really is time that I devoted some effort to my work again. So I’ve started work on a new game (hooray! I hear you say ;-)). I have no idea what the game will consist of yet – just as with Creatures I’m going to create life and then let the life-forms tell me what their story is.

I wasted a lot of time writing Sim-biosis and then abandoning it, but I did learn a lot about 3D in the process. This time I’ve decided to swallow my pride and use a commercial 3D engine – Unity. (By the way, I’m writing for desktop environments – I need too much computer power for iPhone, etc.) Unity is the first 3D engine I’ve come across that supports C#.NET (well, Mono) scripting AND is actually finished and working, not to mention has documentation that gives developers some actual clue about the contents of the API. I have to jury-rig it a bit because most games have only trivial scripts and I need to write very complex neural networks and biochemistries, for which a simple script editor is a bit limiting, but the next version has debug support and hopefully will integrate even better with Visual Studio, allowing me to develop complex algorithms without regressing to the technology of the late 1970s in order to debug them. So far I’m very impressed with Unity and it seems to be capable of at least most of the weird things that a complex Alife sim needs, as compared to running around shooting things, which is what game engines are designed for.

So, I need a new brain. Not me, you understand – I’ll have to muddle along with the one I was born with. I mean I need to invent a new artificial brain architecture (and eventually a biochemistry and genetics). Nothing else out there even begins to do what I want, and anyway, what’s the point of me going to all this effort if I don’t get to invent new things and do some science? It’s bad enough that I’m leaving the 3D front end to someone else.

I’ve decided to stick my neck out and blog about the process of inventing this new architecture. I’ve barely even thought about it yet – I have many useful observations and hypotheses from my work on the Lucy robots but nothing concrete that would guide me to a complete, practical, intelligent brain for a virtual creature. Mostly I just have a lot more understanding of what not to do, and what is wrong with AI in general. So I’m going to start my thoughts almost from scratch and I’m going to do it in public so that you can all laugh at my silly errors, lack of knowledge and embarrassing back-tracking. On the other hand, maybe you’ll enjoy coming along for the ride and I’m sure many of you will have thoughts, observations and arguments to contribute. I’ll try to blog every few days. None of it will be beautifully thought through and edited – I’m going to try to record my stream of consciousness, although obviously I’m talking to you, not to myself, so it will come out a bit more didactic than it is in my head.

So, where do I start? Maybe a good starting point is to ask what a brain is FOR and what it DOES. Surprisingly few researchers ever bother with those questions and it’s a real handicap, even though skipping it is often a convenient way to avoid staring at a blank sheet of paper in rapidly spiraling anguish.

The first thing to say, perhaps, is that brains are for flexing muscles. They also exude chemicals but predominantly they cause muscles to contract. It may seem silly to mention this but it’s surprisingly easy to forget. Muscles are analog, dynamical devices whose properties depend on the physics of the body. In a simulation, practicality overrules authenticity, so if I want my creatures to speak, for example, they’ll have to do so by sending ASCII strings to a speech synthesizer, not by flexing their vocal cords, adjusting their tongues and compressing their lungs. But it’s still important to keep in mind that the currency of brains, as far as their output is concerned, is muscle contraction. It’s the language that brains speak. Any hints I can derive from nature need to be seen in this light.

One consequence of this is that most “decisions” a creature makes are analog; questions of how much to do something, rather than what to do. Even high-level decisions of the kind, “today I will conscientiously avoid doing my laundry”, are more fuzzy and fluid than, say, the literature on action selection networks would have us believe. Where the brain does select actions it seems to do so according to mutual exclusion: I can rub my stomach and pat my head at the same time but I can’t walk in two different directions at once. This doesn’t mean that the rest of my brain is of one mind about things, just that my basal ganglia know not to permit all permutations of desire. An artificial lifeform will have to support multiple goals, simultaneous actions and contingent changes of mind, and my model needs to allow for that. Winner-takes-all networks won’t really cut it.
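A sketch of selection by mutual exclusion rather than a single global winner-takes-all: desires compete only within a resource group (each arm, the legs), so several non-conflicting actions can run at once. The groups, actions and strengths are all invented:

```python
# Desires mapped to the resource they need and how strongly they're wanted.
desires = {
    "walk_north":  ("legs", 0.7),
    "walk_south":  ("legs", 0.4),       # conflicts with walk_north
    "rub_stomach": ("left_arm", 0.6),
    "pat_head":    ("right_arm", 0.5),
}

def select_actions(desires):
    """Winner-takes-all per resource group, not over the whole brain."""
    winners = {}
    for action, (group, strength) in desires.items():
        if strength > winners.get(group, ("", 0.0))[1]:
            winners[group] = (action, strength)
    return sorted(w[0] for w in winners.values())

print(select_actions(desires))  # ['pat_head', 'rub_stomach', 'walk_north']
```

You can’t walk in two directions at once, but stomach-rubbing and head-patting sail through together – the rest of the “brain” doesn’t have to be of one mind about anything.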

Muscles tend to be servo-driven. That is, something inputs a desired state of tension or length and then a small reflex arc or more complex circuit tries to minimize the difference between the muscle’s current state and this desired state. This is a two-way process – if the desire changes, the system will adapt to bring the muscle into line; if the world changes (e.g. the cat jumps out of your hands unexpectedly) then the system will still respond to bring things back into line with the unchanged goal. Many of our muscles control posture, and movement is caused by making adjustments to these already dynamic, homeostatic, feedback loops. Since I want my creatures to look and behave realistically, I think I should try to incorporate this dynamism into their own musculature, where possible, as opposed to simply moving joints to a given angle.
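The two-way servo idea above can be sketched as a simple proportional feedback loop. This is just a toy illustration, not anything from the actual creature design; the gain and the numbers are arbitrary:

```python
# Minimal proportional servo: drives a muscle's state toward a desired
# setpoint. The same loop responds whether the goal changes or the
# world changes, which is the bidirectionality described above.
def servo_step(current, desired, gain=0.5):
    """Return the new state after one feedback update."""
    error = desired - current
    return current + gain * error

# The goal changes: the muscle follows it.
length = 0.0
for _ in range(20):
    length = servo_step(length, desired=1.0)

# The world changes (a disturbance): the unchanged goal pulls it back.
length += 0.4            # e.g. the cat jumps out of your hands
for _ in range(20):
    length = servo_step(length, desired=1.0)
```

Either way the loop only ever minimizes an error signal; it doesn’t care whether the error came from a new intention or an uncooperative cat.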

But this notion of servoing extends further into the brain, as I tried to explain in my Lucy book. Just about ALL behavior can be thought of as servo action – trying to minimize the differential between a desired state and a present state. “I’m hungry, therefore I’ll phone out for pizza, which will bring my hunger back down to its desired state of zero” is just the topmost level in a consequent flurry of feedback, as phoning out for pizza itself demands controlled arm movements to bring the phone to a desired position, or lift one’s body off the couch, or move a tip towards the delivery man. It’s not only motor actions that can be viewed in this light, either. Where the motor system tries to minimize the difference between an intended state and the present state by causing actions in the world, the sensory system tries to minimize the difference between the present state and the anticipated state, by causing actions in the brain. The brain seems to run a simulation of reality that enables it to predict future states (in a fuzzy and fluid way), and this simulation needs to be kept in step with reality at several contextual levels. It, too, is reminiscent of a battery of linked servomotors, and there’s that bidirectionality again. With my Lucy project I kept seeing parallels here, and I’d like to incorporate some of these ideas into my new creatures.
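The pizza example can be caricatured as two nested servo loops, where the top-level drive (hunger) sets the goal for a lower-level motor servo. A toy sketch only, with invented gains, thresholds and "pizza arrives" dynamics:

```python
# Cascaded servos: hunger is the top-level error signal; a nonzero
# hunger sets the goal for the motor loop (move hand to phone at 1.0).
# When the hand reaches the phone, hunger itself gets servoed to zero.
def servo(current, desired, gain=0.5):
    return current + gain * (desired - current)

hunger, hand = 1.0, 0.0
for _ in range(30):
    # Top level: any appreciable hunger demands "hand at phone".
    hand_goal = 1.0 if hunger > 0.05 else 0.0
    hand = servo(hand, hand_goal)
    # Bottom level done its job: pizza ordered, hunger decays to goal 0.
    if abs(hand - 1.0) < 0.05:
        hunger = servo(hunger, 0.0)
```

Each loop only minimizes its own local error, yet the cascade produces what looks from outside like goal-directed behavior: the hand goes out, the hunger comes down, and the hand retires.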

This brings up the subject of thinking. When I created my Norns I used a stimulus-response approach: they sensed a change in their environment and reacted to it. The vast bulk of connectionist AI takes this approach, but it’s not really very satisfying as a description of animal behavior beyond the sea-slug level. Brains are there to PREDICT THE FUTURE. It takes too long for a heavy animal with long nerve pathways to respond to what’s just happened (“Ooh, maybe I shouldn’t have walked off this cliff”), so we seem to run a simulation of what’s likely to happen next (where “next” implies several timescales at different levels of abstraction). At primitive levels this seems pretty hard-wired and inflexible, but at more abstract levels we seem to predict further into the future when we have the luxury, and make earlier but riskier decisions when time is of the essence, so that means the system is capable of iterating. This is interesting and challenging.

Thinking often (if not always) implies running a simulation of the world forwards in time to see what will happen if… When we make plans we’re extrapolating from some known future towards a more distant and uncertain one in pursuit of a goal. When we’re being inventive we’re simulating potential futures, sometimes involving analogies rather than literal facts, to see what will happen. When we reflect on our past, we run a simulation of what happened, and how it might have been different if we’d made other choices. We have an internal narrative that tracks our present context and tries to stay a little ahead of the game. In the absence of demands, this narrative can flow unhindered and we daydream or become creative. As far as I can see, this ability to construct a narrative and to let it freewheel in the absence of sensory input is a crucial element of consciousness. Without the ability to think, we are not conscious. Whether this ability is enough to constitute conscious awareness all by itself is a sticky problem that I may come back to, but I’d like my new creatures actively to think, not just react.
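One toy way to picture "running a simulation of the world forwards in time" is a learned transition table that can be rolled ahead without acting. The states, actions and table here are all invented for the example; the real thing would of course be fuzzy and statistical rather than a lookup:

```python
# Toy forward model: transitions learned from experience let the
# creature "think" by predicting where a plan leads, offline.
transitions = {
    ("at_couch", "stand_up"): "standing",
    ("standing", "walk_to_phone"): "at_phone",
    ("at_phone", "dial"): "pizza_ordered",
}

def rollout(state, plan):
    """Predict the final state of a sequence of actions, without acting."""
    for action in plan:
        state = transitions.get((state, action), state)
    return state

predicted = rollout("at_couch", ["stand_up", "walk_to_phone", "dial"])
# predicted == "pizza_ordered": the plan reaches the goal in simulation
```

The important property is that the same machinery can iterate further ahead when there's time to spare, or cut the rollout short and act on a riskier, earlier prediction when there isn't.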

And talking about analogies brings up categorization and generalization. We classify our world, and we do it in quite sophisticated ways. As a baby we start out with very few categories – perhaps things to cry about and things to grab/suck. And then we learn to divide this space up into finer and finer, more and more conditional categories, each of which provokes finer and finer responses. That metaphor of “dividing up” may be very apposite, because spatial maps of categories would be one way to permit generalization. If we cluster our neural representation of patterns, such that similar patterns lie close to each other, then once we know how to react to (or what to make of) one of those patterns, we can make a statistically reasonable hunch about how to react to a novel but similar pattern, simply by stimulating its neighbors. There are hints that such a process occurs in the brain at several levels, and generalization, like the ability to predict future consequences, is a hallmark of intelligence.
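The "similar patterns lie close to each other" idea can be illustrated with a nearest-prototype sketch: a novel pattern inherits the response of its closest stored neighbor in pattern space. The patterns and responses below are made up, and real spatial maps would be learned rather than hand-written:

```python
# Generalization by proximity: react to a novel pattern the way we
# learned to react to its nearest known neighbor in pattern space.
prototypes = {
    (1.0, 0.2): "grab",   # e.g. small graspable thing
    (0.1, 0.9): "cry",    # e.g. loud looming thing
}

def respond(pattern):
    """Pick the response attached to the closest stored prototype."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, pattern))
    nearest = min(prototypes, key=dist)
    return prototypes[nearest]

# A novel pattern near the "grab" cluster inherits the "grab" response.
reaction = respond((0.9, 0.3))
```

In a genuinely spatial map the hunch would be graded, with activity spreading over a neighborhood of similar patterns rather than a single winner, but the statistical bet is the same.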

So there we go. It’s a start. I want to build a creature that can think, by forming a simulation of the world in its head, which it can iterate as far as the current situation permits, and disengage from reality when nothing urgent is going on. I’d like this predictive power to emerge from shorter chains of association, which themselves are mapped upon self-organized categories. I’d like this system to be fuzzy, so that it can generalize from similar experiences and perhaps even form analogies and metaphors that allow it to be inventive, and so that it can see into the future in a statistical way – the most likely future state being the most active, but less likely scenarios being represented too, so that contingencies can be catered for and the Frame Problem goes away (see my discussion of this in the comments section of an article by Peter Hankins). And I’d like to incorporate the notion of multi-level servomechanisms into this, such that the ultimate goals of the creature are fixed (zero hunger, zero fear, perfect temperature, etc.) and the brain is constantly responding homeostatically (and yet predictively and ballistically) in order to reduce the difference between the present state and this desired state (through sequences of actions and other adjustments that are themselves servoing).

Oh, and then there’s a bunch of questions about perception. In my Lucy project I was very interested in, but failed miserably to conquer, the question of sensory invariance (e.g. the ability to recognize a banana from any angle, distance and position, or at least a wide variety of them). Invariance may be bound up with categorization. This is a big but important challenge. However, I may not have to worry about it, because I doubt my creatures are going to see or feel or hear in the natural sense. The available computer power will almost certainly preclude this and I’ll have to cheat with perception, just to make it feasible at all. That’s an issue for another day – how to make virtual sensory information work in a way that is computationally feasible but doesn’t severely limit or artificially aid the creatures.

Oh yes, and it’s got to learn. All this structure has to self-organize in response to experience. The learning must be unsupervised (nothing can tell it what the “right answer” was, for it to compare its progress) and realtime (no separate training sessions, just non-stop experience of and interaction with the world).
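Unsupervised, realtime learning in this sense might look like an online competitive-learning update: every incoming experience nudges the nearest stored category toward itself, with no separate training phase and no "right answer" supplied. A sketch with invented data, standing in for whatever the real self-organizing mechanism turns out to be:

```python
# Online unsupervised update: each pattern in the experience stream
# pulls its nearest prototype a little toward itself, so categories
# self-organize continuously, with no teacher and no training session.
prototypes = [[0.0, 0.0], [1.0, 1.0]]

def experience(pattern, rate=0.2):
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, pattern))
    winner = min(prototypes, key=dist)
    for i, x in enumerate(pattern):
        winner[i] += rate * (x - winner[i])

# A stream of experiences clustered around (0.1, 0.1) and (0.9, 0.8):
for p in [(0.1, 0.1), (0.9, 0.8), (0.12, 0.08), (0.88, 0.82)] * 10:
    experience(p)
# the two prototypes have drifted toward the two clusters
```

The crucial point is that the loop never stops: there's no distinction between a learning phase and a living phase, just one continuous stream of experience reshaping the categories.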

Oh man, and I’d like for there to be the ability for simple culture and cooperation to emerge, which implies language and thus the transfer of thoughts, experience and intentions from one creature to another. And what about learning by example? Empathy and theory of mind? The ability to manipulate the environment by building things? OK, STOP! That’s enough to be going on with!

A shopping list is easy. Figuring out how to actually do it is going to be a little trickier. Figuring out how to do it in realtime, when the virtual world contains dozens of creatures and the graphics engine is taking up most of the CPU cycles, is not all that much of a picnic either. But heck, computers are a thousand times faster than they were when I invented the Norns. There’s hope!

Ok, so, about this game thing…

If you look up into the night sky, just to the right of the bit that looks like a giant shopping cart, you’ll see a small blue star, called Sulis. Around it floats a stormy orange gas giant, and around that in turn swims a small moon, called Selene (until I come up with a nicer name).

Selene is gravitationally challenged by all that whirling mass and hence is warm, comparatively wet and volcanic. It’s a craggy, canyon-filled landscape, by sheer coincidence remarkably similar to northern Arizona. The thin atmosphere contains oxygen, but sadly also much SO2 and H2S, making it hostile to earthly life without a spacesuit. But life it does contain! Spectroscopic analysis and photography from two orbiters have confirmed this (never mind how the orbiters got there – work with me, guys!).

There are hints of many species, some sessile, some motile. And just a little circumstantial evidence that one of these species may be moderately intelligent and perhaps even has a social structure. Your mission, should you wish to pay me a few dollars for the privilege, is to mount an expedition to Selene and study its biology and ecosystems. If at all possible I’d also like you to attempt contact with this shadowy sentient life-form.

Nothing is known (well, ok, I know it because I’m God, but I’m not telling you) about Selene’s ecosystems, geology, climate or, in particular, its biology. What is the food web? How do these creatures behave? What’s their anatomy? What niches do they occupy? How does their biochemistry work? How do they reproduce? Do they have something similar to DNA or does a different principle hold sway? What’s the likely evolutionary history? For the more intelligent creatures, what can be learned of their psychology, neurology and social behavior? Do they have language? Can we communicate with them? Are they dangerous? How smart are they? Do they have a culture? Do they have myths; religion? What does it all tell us?

You need to work together to build an encyclopedia – like Wikipedia – containing the results of your experiments, your observations and conclusions, stories, tips for exploration and research, maps, drawings, photos and all the rest. It will be a massive (I hope!), collaborative, Open Science experiment in exobiology…

So that’s the gist of what I’m working on. I was going to open a pet store and sell imported aliens but I decided it would be much more fun to build a virtual world you can actually step into, instead of watching through the bars of a cage. I’ll try to develop a whole new, self-consistent but non-earthlike biology, building on some of the things I learned from Creatures and my Lucy robot. I’ll discuss some of the technical issues on this blog but I’ll try not to give the game away – the point of the exercise is to challenge people to do real science on these creatures and deduce/infer this stuff for themselves. They/you did it admirably for Creatures but in those days I couldn’t give you anything as complex and comprehensive as I can now, and this time I don’t have marketing people breathing down my neck telling me that nobody’s interested in science.

I have no idea what the actual features will be, or to what extent it’ll be networked, etc. I’m just starting work on the terrain system and I have an awfully long way to go. Because I’m working unfunded and have only a limited amount of money to live on, I’m going to work the other way round from most people, so instead of working to a spec I’ll squeeze in as many features as I can before the cash runs out. I know it’s absurd to hope to do all this in the space of a year to 18 months – after all, how many programmers and artists worked on Spore? Something like a hundred? But I think I’m as well equipped for the job as anyone, I work far more efficiently on my own, and it’s worth the attempt.

Whaddaya think?

Blast from the past

Well I never! I was chatting to Andrew Hugill, director of the Institute of Creative Technologies (where I used to have a research fellowship), about their interesting Virtual Romans project when suddenly I remembered that I’d created some virtual Romans myself once. Sort of.

Long before the Flood I wrote a game called Rome AD92, published by Maxis in the US (I forget who released it in the UK). It didn’t sell very well but underneath it were some interesting “AI” techniques, for that time anyway. So I had a quick Google to see if anyone had kept a review of it or anything and came across a whole series of YouTube videos by Necroscope86, playing the game from start to finish. Wow! Thanks, Necroscope! Isn’t the Information Age wonderful? The thing is, I’d almost completely forgotten the game myself, even though I spent a year of my life writing it, but there it is, preserved for posterity. It brought back memories. Do you think anyone on YouTube can remember where I left my car keys?
