Blowing my own trumpet

Okay, try not to cringe, but I really need your help. In the interests of full disclosure, that means money. Or if not money then influence. Please nicely.

I’ll just hit you with the funding pitch right off the bat. There’s a fancy widget I’m supposed to be able to embed in my blog but it doesn’t work in this theme, so here instead is a good old-fashioned hyperlink. Click on the image and it will take you to Kickstarter.

This is the first chance I’ve had to blog about it, because it’s taken off a lot more quickly than I expected and I’ve had a lot of people to thank and queries to field! It’s only the end of Day Two as I write this and the total is already over $11,000, much to my amazement and thanks especially to some extremely generous donors. I think there’s a real chance we can make this happen, with your help. Which is just as well, because I’ve almost completely used up my own resources after all these years of self-funded research and this is the only way I can continue with my work.

If you’ve already pledged then thank you SO MUCH! I really, really appreciate it. If you haven’t and you’d like to then that’s fantastic. My Creatures game inspired quite a lot of people to think differently about life, and even caused a number of them to take up scientific careers. I’m pretty sure this game will do the same, so it’s in a good cause as well as hopefully being fun. If you aren’t in a position to pledge then I quite understand – I’m not either! – but if you can help spread the word by tweeting, blogging, facebooking or pinning notices to telegraph poles then I really appreciate that too. The wider the news spreads, the more chance I have. Thank you.

Oh, and I see 600 people visited my blog today, which is a fair bit higher than usual, so if you came here via Kickstarter then I’m delighted to see you. I hope you’ll come back! 🙂

Incidentally, earlier posts about the design of the artificial brain for this project can be found here, here, here, here, here, here and here. After that I went a bit quiet because I got stuck on a problem that was too complex even to tell you about. But I think I have the answer to that now. After months of banging my head against the wall it just came to me – poof! – while I was driving through the desert thinking about something else. Don’t you just love it when that happens?

[Edit: I fixed the links – whoops.]

Brainstorm #1

Ok, here goes…

Life has been rather complicated and exhausting lately. Not all of it bad by any means; some of it really good, but still rather all-consuming. Nevertheless, it really is time that I devoted some effort to my work again. So I’ve started work on a new game (hooray! I hear you say ;-)). I have no idea what the game will consist of yet – just as with Creatures I’m going to create life and then let the life-forms tell me what their story is.

I wasted a lot of time writing Sim-biosis and then abandoning it, but I did learn a lot about 3D in the process. This time I’ve decided to swallow my pride and use a commercial 3D engine – Unity. (By the way, I’m writing for desktop environments – I need too much computer power for iPhone, etc.) Unity is the first 3D engine I’ve come across that supports C#.NET (well, Mono) scripting AND is actually finished and working, not to mention has documentation that gives developers some actual clue about the contents of the API. I have to jury-rig it a bit, because most games have only trivial scripts and I need to write very complex neural networks and biochemistries, for which a simple script editor is a bit limiting. But the next version has debug support and will hopefully integrate even better with Visual Studio, allowing me to develop complex algorithms without regressing to the technology of the late 1970s in order to debug them. So far I’m very impressed with Unity, and it seems to be capable of at least most of the weird things that a complex Alife sim needs, as compared to running around shooting things, which is what game engines are designed for.

So, I need a new brain. Not me, you understand – I’ll have to muddle along with the one I was born with. I mean I need to invent a new artificial brain architecture (and eventually a biochemistry and genetics). Nothing else out there even begins to do what I want, and anyway, what’s the point of me going to all this effort if I don’t get to invent new things and do some science? It’s bad enough that I’m leaving the 3D front end to someone else.

I’ve decided to stick my neck out and blog about the process of inventing this new architecture. I’ve barely even thought about it yet – I have many useful observations and hypotheses from my work on the Lucy robots but nothing concrete that would guide me to a complete, practical, intelligent brain for a virtual creature. Mostly I just have a lot more understanding of what not to do, and what is wrong with AI in general. So I’m going to start my thoughts almost from scratch and I’m going to do it in public so that you can all laugh at my silly errors, lack of knowledge and embarrassing back-tracking. On the other hand, maybe you’ll enjoy coming along for the ride and I’m sure many of you will have thoughts, observations and arguments to contribute. I’ll try to blog every few days. None of it will be beautifully thought through and edited – I’m going to try to record my stream of consciousness, although obviously I’m talking to you, not to myself, so it will come out a bit more didactic than it is in my head.

So, where do I start? Maybe a good starting point is to ask what a brain is FOR and what it DOES. Surprisingly few researchers ever bother with those questions, and it’s a real handicap, even though skipping them is often a convenient way to avoid staring at a blank sheet of paper in rapidly spiraling anguish.

The first thing to say, perhaps, is that brains are for flexing muscles. They also exude chemicals but predominantly they cause muscles to contract. It may seem silly to mention this but it’s surprisingly easy to forget. Muscles are analog, dynamical devices whose properties depend on the physics of the body. In a simulation, practicality overrules authenticity, so if I want my creatures to speak, for example, they’ll have to do so by sending ASCII strings to a speech synthesizer, not by flexing their vocal cords, adjusting their tongue and compressing their lungs. But it’s still important to keep in mind that the currency of brains, as far as their output is concerned, is muscle contraction. It’s the language that brains speak. Any hints I can derive from nature need to be seen in this light.

One consequence of this is that most “decisions” a creature makes are analog; questions of how much to do something, rather than what to do. Even high-level decisions of the kind, “today I will conscientiously avoid doing my laundry”, are more fuzzy and fluid than, say, the literature on action selection networks would have us believe. Where the brain does select actions it seems to do so according to mutual exclusion: I can rub my stomach and pat my head at the same time but I can’t walk in two different directions at once. This doesn’t mean that the rest of my brain is of one mind about things, just that my basal ganglia know not to permit all permutations of desire. An artificial lifeform will have to support multiple goals, simultaneous actions and contingent changes of mind, and my model needs to allow for that. Winner-takes-all networks won’t really cut it.
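Just to make that concrete, here’s a toy sketch – in the C# I’ll be scripting Unity with – of the kind of arbitration I have in mind: actions compete only with others in the same exclusion group (you can’t walk in two directions at once), while unrelated actions run in parallel at whatever analog intensity the rest of the brain asks for. Every name and number here is a placeholder, not a design decision.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: each action has an analog "desire" and may belong to a
// mutual-exclusion group. Competition only happens within a group, so multiple
// goals and simultaneous actions survive, and a suppressed action can still
// take over next tick if desires shift (a contingent change of mind).
class ActionChannel
{
    public string Name;
    public string ExclusionGroup;   // null = freely combinable with anything
    public double Desire;           // analog drive in 0..1, set elsewhere in the brain
}

static class SoftArbiter
{
    public static Dictionary<string, double> Arbitrate(IEnumerable<ActionChannel> actions)
    {
        var output = new Dictionary<string, double>();
        foreach (var group in actions.GroupBy(a => a.ExclusionGroup ?? a.Name))
        {
            // The strongest desire claims the channel, but it is expressed at its
            // own analog level rather than being forced to "all", and the losers
            // are merely held at zero for this tick, not discarded.
            var winner = group.OrderByDescending(a => a.Desire).First();
            foreach (var a in group)
                output[a.Name] = (a == winner) ? a.Desire : 0.0;
        }
        return output;
    }
}
```

That’s closer to what I mean by mutual exclusion than a global winner-takes-all net, although the real thing will obviously be messier and more neural than a tidy dictionary.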

Muscles tend to be servo-driven. That is, something inputs a desired state of tension or length and then a small reflex arc or more complex circuit tries to minimize the difference between the muscle’s current state and this desired state. This is a two-way process – if the desire changes, the system will adapt to bring the muscle into line; if the world changes (e.g. the cat jumps out of your hands unexpectedly) then the system will still respond to bring things back into line with the unchanged goal. Many of our muscles control posture, and movement is caused by making adjustments to these already dynamic, homeostatic, feedback loops. Since I want my creatures to look and behave realistically, I think I should try to incorporate this dynamism into their own musculature, where possible, as opposed to simply moving joints to a given angle.
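For what it’s worth, here’s the shape of the loop I mean, as a toy C# sketch rather than anything physiological – the gain is made up and real muscle dynamics are far richer:

```csharp
// Toy servo for one muscle: something upstream sets a desired length, and the
// loop forever nudges the actual length towards it. Whether the goal changes
// or the world changes (the cat jumps out of your hands), the same correction
// brings things back into line.
class MuscleServo
{
    public double Desired;          // set by higher levels of the brain
    public double Actual;           // set by the physics of the body
    readonly double gain;           // how hard to correct per tick (arbitrary)

    public MuscleServo(double gain) { this.gain = gain; }

    // Called every simulation tick; returns the correction actually applied.
    public double Update(double externalDisturbance = 0.0)
    {
        Actual += externalDisturbance;      // the world intervening
        double error = Desired - Actual;    // the difference to be minimized
        double correction = gain * error;   // simple proportional response
        Actual += correction;
        return correction;
    }
}
```

Posture then becomes the steady state of lots of these loops, and movement becomes deliberate perturbation of their set-points.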

But this notion of servoing extends further into the brain, as I tried to explain in my Lucy book. Just about ALL behavior can be thought of as servo action – trying to minimize the differential between a desired state and a present state. “I’m hungry, therefore I’ll phone out for pizza, which will bring my hunger back down to its desired state of zero” is just the topmost level in a consequent flurry of feedback, as phoning out for pizza itself demands controlled arm movements to bring the phone to a desired position, or lift one’s body off the couch, or move a tip towards the delivery man. It’s not only motor actions that can be viewed in this light, either. Where the motor system tries to minimize the difference between an intended state and the present state by causing actions in the world, the sensory system tries to minimize the difference between the present state and the anticipated state, by causing actions in the brain. The brain seems to run a simulation of reality that enables it to predict future states (in a fuzzy and fluid way), and this simulation needs to be kept in train with reality at several contextual levels. It, too, is reminiscent of a battery of linked servomotors, and there’s that bidirectionality again. With my Lucy project I kept seeing parallels here, and I’d like to incorporate some of these ideas into my new creatures.
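Again purely as an illustration of the shape of the idea – the “hunger drives reaching drives eating” chain and all the numbers are invented for the example – here’s what I mean by levels of servoing feeding each other:

```csharp
using System;

// Sketch of a "battery of linked servomotors": the hunger loop doesn't move
// anything itself - it recruits a lower-level reaching loop by handing it a
// goal, and only when that loop has closed its own gap does eating happen,
// which finally closes the top loop. All quantities are toy values.
class LinkedServos
{
    static void Main()
    {
        double hunger = 0.8;                    // top level: desired value is zero
        double handPos = 0.0, foodPos = 1.0;    // bottom level: reach for the food

        for (int tick = 0; tick < 100 && hunger > 0.01; tick++)
        {
            double urgency = hunger - 0.0;      // top-level error (actual minus desired)

            double reachError = foodPos - handPos;
            handPos += 0.3 * urgency * reachError;    // lower servo, driven by the upper one

            if (Math.Abs(reachError) < 0.05)
                hunger = Math.Max(0.0, hunger - 0.1); // arriving at the food reduces hunger
        }

        Console.WriteLine($"hunger after the pizza run: {hunger:F2}");
    }
}
```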

This brings up the subject of thinking. When I created my Norns I used a stimulus-response approach: they sensed a change in their environment and reacted to it. The vast bulk of connectionist AI takes this approach, but it’s not really very satisfying as a description of animal behavior beyond the sea-slug level. Brains are there to PREDICT THE FUTURE. It takes too long for a heavy animal with long nerve pathways to respond to what’s just happened (“Ooh, maybe I shouldn’t have walked off this cliff”), so we seem to run a simulation of what’s likely to happen next (where “next” implies several timescales at different levels of abstraction). At primitive levels this seems pretty hard-wired and inflexible, but at more abstract levels we seem to predict further into the future when we have the luxury, and make earlier but riskier decisions when time is of the essence, so that means the system is capable of iterating. This is interesting and challenging.
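A crude way to picture that “iterate as far as time allows” property – and this is only a stand-in transition rule, not a proposal for the actual forward model – is an anytime predictor that simply stops rolling the simulation forward when its budget runs out:

```csharp
using System;

// Toy anytime predictor: roll a (stand-in) forward model onward step by step
// and stop when the time budget expires. A relaxed creature gets a deep,
// considered look-ahead; a startled one acts on a shallow, riskier prediction.
static class AnytimePredictor
{
    // Placeholder for a learned transition rule - one made-up state variable.
    static double StepModel(double state) => state * 0.9 + 0.05;

    public static double Predict(double currentState, TimeSpan budget, int maxSteps = 1000)
    {
        var deadline = DateTime.UtcNow + budget;
        double predicted = currentState;

        for (int step = 0; step < maxSteps && DateTime.UtcNow < deadline; step++)
            predicted = StepModel(predicted);   // one more step into the future

        return predicted;
    }
}
```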

Thinking often (if not always) implies running a simulation of the world forwards in time to see what will happen if… When we make plans we’re extrapolating from some known future towards a more distant and uncertain one in pursuit of a goal. When we’re being inventive we’re simulating potential futures, sometimes involving analogies rather than literal facts, to see what will happen. When we reflect on our past, we run a simulation of what happened, and how it might have been different if we’d made other choices. We have an internal narrative that tracks our present context and tries to stay a little ahead of the game. In the absence of demands, this narrative can flow unhindered and we daydream or become creative. As far as I can see, this ability to construct a narrative and to let it freewheel in the absence of sensory input is a crucial element of consciousness. Without the ability to think, we are not conscious. Whether this ability is enough to constitute conscious awareness all by itself is a sticky problem that I may come back to, but I’d like my new creatures actively to think, not just react.

And talking about analogies brings up categorization and generalization. We classify our world, and we do it in quite sophisticated ways. As babies we start out with very few categories – perhaps things to cry about and things to grab/suck. And then we learn to divide this space up into finer and finer, more and more conditional categories, each of which provokes finer and finer responses. That metaphor of “dividing up” may be very apposite, because spatial maps of categories would be one way to permit generalization. If we cluster our neural representation of patterns, such that similar patterns lie close to each other, then once we know how to react to (or what to make of) one of those patterns, we can make a statistically reasonable hunch about how to react to a novel but similar pattern, simply by stimulating its neighbors. There are hints that such a process occurs in the brain at several levels, and generalization and the ability to predict future consequences are hallmarks of intelligence.
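To show what I mean by a spatial map that generalizes – and this is just a crude sketch in the spirit of a self-organizing map, with arbitrary sizes and learning rates, not the architecture I’ll actually use – the key property is that nudging the best-matching cell also nudges its neighbors, so similar patterns end up close together and a response learned for one can be borrowed by another:

```csharp
using System;
using System.Linq;

// Crude self-organizing-map-style sketch of "dividing the space up": patterns
// pull their best-matching cell (and, more weakly, its neighbors) towards
// themselves, so similar patterns cluster. A novel pattern then inherits the
// response of its nearest neighbor - a statistical hunch, not a certainty.
class CategoryMap
{
    const int Cells = 16;                 // a 1-D map, for brevity
    readonly double[][] prototype;        // the pattern each cell has come to represent
    public readonly double[] Response;    // the reaction associated with each cell

    public CategoryMap(int patternSize)
    {
        var rnd = new Random(1);
        prototype = Enumerable.Range(0, Cells)
            .Select(_ => Enumerable.Range(0, patternSize)
                                   .Select(__ => rnd.NextDouble()).ToArray())
            .ToArray();
        Response = new double[Cells];
    }

    int BestMatch(double[] pattern) =>
        Enumerable.Range(0, Cells)
            .OrderBy(c => prototype[c].Zip(pattern, (p, x) => (p - x) * (p - x)).Sum())
            .First();

    // Unsupervised and online: every experience nudges the winner and its
    // neighborhood a little closer to the pattern just seen.
    public void Learn(double[] pattern, double rate = 0.2)
    {
        int winner = BestMatch(pattern);
        for (int c = 0; c < Cells; c++)
        {
            double neighborhood = Math.Exp(-Math.Pow(c - winner, 2) / 4.0);
            for (int i = 0; i < pattern.Length; i++)
                prototype[c][i] += rate * neighborhood * (pattern[i] - prototype[c][i]);
        }
    }

    // Generalization: react to a novel pattern with whatever its neighbors learned.
    public double React(double[] pattern) => Response[BestMatch(pattern)];
}
```

Kohonen-style maps are only one way to get this kind of clustering, but they capture the neighborly generalization I’m after, and the continual nudging is also roughly the sort of unsupervised, realtime learning I come back to below.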

So there we go. It’s a start. I want to build a creature that can think, by forming a simulation of the world in its head, which it can iterate as far as the current situation permits, and disengage from reality when nothing urgent is going on. I’d like this predictive power to emerge from shorter chains of association, which themselves are mapped upon self-organized categories. I’d like this system to be fuzzy, so that it can generalize from similar experiences and perhaps even form analogies and metaphors that allow it to be inventive, and so that it can see into the future in a statistical way – the most likely future state being the most active, but less likely scenarios being represented too, so that contingencies can be catered for and the Frame Problem goes away (see my discussion of this in the comments section of an article by Peter Hankins). And I’d like to incorporate the notion of multi-level servomechanisms into this, such that the ultimate goals of the creature are fixed (zero hunger, zero fear, perfect temperature, etc.) and the brain is constantly responding homeostatically (and yet predictively and ballistically) in order to reduce the difference between the present state and this desired state (through sequences of actions and other adjustments that are themselves servoing).

Oh, and then there’s a bunch of questions about perception. In my Lucy project I was very interested in, but failed miserably to conquer, the question of sensory invariance (e.g. the ability to recognize a banana from any angle, distance and position, or at least a wide variety of them). Invariance may be bound up with categorization. This is a big and important challenge. However, I may not have to worry about it, because I doubt my creatures are going to see or feel or hear in the natural sense. The available computer power will almost certainly preclude this and I’ll have to cheat with perception, just to make it feasible at all. That’s an issue for another day – how to make virtual sensory information work in a way that is computationally feasible but doesn’t severely limit or artificially aid the creatures.

Oh yes, and it’s got to learn. All this structure has to self-organize in response to experience. The learning must be unsupervised (nothing can tell it what the “right answer” was for it to measure its progress against) and realtime (no separate training sessions, just non-stop experience of and interaction with the world).

Oh man, and I’d like simple culture and cooperation to be able to emerge, which implies language and thus the transfer of thoughts, experiences and intentions from one creature to another. And what about learning by example? Empathy and theory of mind? The ability to manipulate the environment by building things? OK, STOP! That’s enough to be going on with!

A shopping list is easy. Figuring out how to actually do it is going to be a little trickier. Figuring out how to do it in realtime, when the virtual world contains dozens of creatures and the graphics engine is taking up most of the CPU cycles, is not all that much of a picnic either. But heck, computers are a thousand times faster than they were when I invented the Norns. There’s hope!

Ok, so, about this game thing…

If you look up into the night sky, just to the right of the bit that looks like a giant shopping cart, you’ll see a small blue star, called Sulis. Around it floats a stormy orange gas giant, and around that in turn swims a small moon, called Selene (until I come up with a nicer name).

Selene is gravitationally challenged by all that whirling mass and hence is warm, comparatively wet and volcanic. It’s a craggy, canyon-filled landscape, by sheer coincidence remarkably similar to northern Arizona. The thin atmosphere contains oxygen, but sadly also much SO2 and H2S, making it hostile to earthly life without a spacesuit. But life it does contain! Spectroscopic analysis and photography from two orbiters have confirmed this (never mind how the orbiters got there – work with me, guys!).

There are hints of many species, some sessile, some motile. And just a little circumstantial evidence that one of these species may be moderately intelligent and perhaps even has a social structure. Your mission, should you wish to pay me a few dollars for the privilege, is to mount an expedition to Selene and study its biology and ecosystems. If at all possible I’d also like you to attempt contact with this shadowy sentient life-form.

Nothing is known (well, ok, I know it because I’m God, but I’m not telling you) about Selene’s ecosystems, geology, climate or, in particular, its biology. What is the food web? How do these creatures behave? What’s their anatomy? What niches do they occupy? How does their biochemistry work? How do they reproduce? Do they have something similar to DNA or does a different principle hold sway? What’s the likely evolutionary history? For the more intelligent creatures, what can be learned of their psychology, neurology and social behavior? Do they have language? Can we communicate with them? Are they dangerous? How smart are they? Do they have a culture? Do they have myths; religion? What does it all tell us?

You need to work together to build an encyclopedia – like Wikipedia – containing the results of your experiments, your observations and conclusions, stories, tips for exploration and research, maps, drawings, photos and all the rest. It will be a massive (I hope!), collaborative, Open Science experiment in exobiology…

So that’s the gist of what I’m working on. I was going to open a pet store and sell imported aliens but I decided it would be much more fun to build a virtual world you can actually step into, instead of watching through the bars of a cage. I’ll try to develop a whole new, self-consistent but non-earthlike biology, building on some of the things I learned from Creatures and my Lucy robot. I’ll discuss some of the technical issues on this blog but I’ll try not to give the game away – the point of the exercise is to challenge people to do real science on these creatures and deduce/infer this stuff for themselves. They/you did it admirably for Creatures but in those days I couldn’t give you anything as complex and comprehensive as I can now, and this time I don’t have marketing people breathing down my neck telling me that nobody’s interested in science.

I have no idea what the actual features will be, or to what extent it’ll be networked, etc. I’m just starting work on the terrain system and I have an awful long way to go. Because I’m working unfunded and have only a limited amount of money to live on, I’m going to work the other way round to most people, so instead of working to a spec I’ll squeeze in as many features as I can before the cash runs out. I know it’s absurd to hope to do all this in the space of a year to 18 months – after all, how many programmers and artists worked on Spore? Something like a hundred? But I think I’m as well equipped for the job as anyone, I work far more efficiently on my own, and it’s worth the attempt.

Whaddaya think?