So how’s it going?

Just a short post to say that I’m going to tweet my programming journal in real-time, as I work on my new game, so if any of you are fellow Twits, feel free to follow @enchantedloom. I don’t really understand Twitter yet, and 140 characters is just not ‘me’ somehow, but it seems like a good way to keep my nose to the grindstone (or avoid any actual work, possibly) and at the same time let you guys know how things are going. I’d appreciate the company, so see you in Twit-land maybe!

Brainstorm #1

Ok, here goes…

Life has been rather complicated and exhausting lately. Not all of it bad by any means; some of it really good, but still rather all-consuming. Nevertheless, it really is time that I devoted some effort to my work again. So I’ve started work on a new game (hooray! I hear you say ;-)). I have no idea what the game will consist of yet – just as with Creatures I’m going to create life and then let the life-forms tell me what their story is.

I wasted a lot of time writing Sim-biosis and then abandoning it, but I did learn a lot about 3D in the process. This time I’ve decided to swallow my pride and use a commercial 3D engine – Unity. (By the way, I’m writing for desktop environments – I need too much computer power for iPhone, etc.) Unity is the first 3D engine I’ve come across that supports C#.NET (well, Mono) scripting AND is actually finished and working, not to mention has documentation that gives developers some actual clue about the contents of the API. I have to jury-rig it a bit, because most games have only trivial scripts and I need to write very complex neural networks and biochemistries, for which a simple script editor is a bit limiting. But the next version has debug support and hopefully will integrate even better with Visual Studio, allowing me to develop complex algorithms without regressing to the technology of the late 1970s in order to debug them. So far I’m very impressed with Unity, and it seems to be capable of at least most of the weird things that a complex Alife sim needs, as opposed to running around shooting things, which is what game engines are designed for.

So, I need a new brain. Not me, you understand – I’ll have to muddle along with the one I was born with. I mean I need to invent a new artificial brain architecture (and eventually a biochemistry and genetics). Nothing else out there even begins to do what I want, and anyway, what’s the point of me going to all this effort if I don’t get to invent new things and do some science? It’s bad enough that I’m leaving the 3D front end to someone else.

I’ve decided to stick my neck out and blog about the process of inventing this new architecture. I’ve barely even thought about it yet – I have many useful observations and hypotheses from my work on the Lucy robots but nothing concrete that would guide me to a complete, practical, intelligent brain for a virtual creature. Mostly I just have a lot more understanding of what not to do, and what is wrong with AI in general. So I’m going to start my thoughts almost from scratch and I’m going to do it in public so that you can all laugh at my silly errors, lack of knowledge and embarrassing back-tracking. On the other hand, maybe you’ll enjoy coming along for the ride and I’m sure many of you will have thoughts, observations and arguments to contribute. I’ll try to blog every few days. None of it will be beautifully thought through and edited – I’m going to try to record my stream of consciousness, although obviously I’m talking to you, not to myself, so it will come out a bit more didactic than it is in my head.

So, where do I start? Maybe a good starting point is to ask what a brain is FOR and what it DOES. Surprisingly few researchers ever bother with those questions and it’s a real handicap, even though skipping it is often a convenient way to avoid staring at a blank sheet of paper in rapidly spiraling anguish.

The first thing to say, perhaps, is that brains are for flexing muscles. They also exude chemicals but predominantly they cause muscles to contract. It may seem silly to mention this but it’s surprisingly easy to forget. Muscles are analog, dynamical devices whose properties depend on the physics of the body. In a simulation, practicality overrules authenticity, so if I want my creatures to speak, for example, they’ll have to do so by sending ASCII strings to a speech synthesizer, not by flexing their vocal cords, adjusting their tongue and compressing their lungs. But it’s still important to keep in mind that the currency of brains, as far as their output is concerned, is muscle contraction. It’s the language that brains speak. Any hints I can derive from nature need to be seen in this light.

One consequence of this is that most “decisions” a creature makes are analog; questions of how much to do something, rather than what to do. Even high-level decisions of the kind, “today I will conscientiously avoid doing my laundry”, are more fuzzy and fluid than, say, the literature on action selection networks would have us believe. Where the brain does select actions it seems to do so according to mutual exclusion: I can rub my stomach and pat my head at the same time but I can’t walk in two different directions at once. This doesn’t mean that the rest of my brain is of one mind about things, just that my basal ganglia know not to permit all permutations of desire. An artificial lifeform will have to support multiple goals, simultaneous actions and contingent changes of mind, and my model needs to allow for that. Winner-takes-all networks won’t really cut it.
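Just to pin the idea down for myself, here’s a throwaway sketch (in Python rather than the C# I’ll actually be scripting Unity with, and with entirely made-up action and group names) of selection by mutual exclusion: actions keep their analog intensities, and they only compete with other actions in the same exclusion group, so the creature can do several compatible things at once rather than one global winner taking all.

```python
from collections import defaultdict

def select_actions(candidates):
    """Pick the strongest action within each mutual-exclusion group.

    candidates: list of (name, group, intensity) tuples. Actions in
    different groups can run concurrently (pat head + rub stomach);
    actions sharing a group compete (walking in two directions at once).
    Returns {name: intensity} for every action allowed to run.
    """
    groups = defaultdict(list)
    for name, group, intensity in candidates:
        groups[group].append((name, intensity))
    chosen = {}
    for group, actions in groups.items():
        name, intensity = max(actions, key=lambda a: a[1])
        chosen[name] = intensity   # analog intensity survives selection
    return chosen

desires = [
    ("walk_north", "locomotion", 0.7),
    ("walk_south", "locomotion", 0.4),   # loses: can't walk both ways
    ("pat_head",   "left_arm",   0.3),
    ("rub_belly",  "right_arm",  0.6),
]
print(select_actions(desires))
# → {'walk_north': 0.7, 'pat_head': 0.3, 'rub_belly': 0.6}
```

The real thing will be messier – the exclusion groups themselves will presumably have to be learned or wired in genetically – but the point stands: selection happens per group, not globally.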

Muscles tend to be servo-driven. That is, something inputs a desired state of tension or length and then a small reflex arc or more complex circuit tries to minimize the difference between the muscle’s current state and this desired state. This is a two-way process – if the desire changes, the system will adapt to bring the muscle into line; if the world changes (e.g. the cat jumps out of your hands unexpectedly) then the system will still respond to bring things back into line with the unchanged goal. Many of our muscles control posture, and movement is caused by making adjustments to these already dynamic, homeostatic, feedback loops. Since I want my creatures to look and behave realistically, I think I should try to incorporate this dynamism into their own musculature, where possible, as opposed to simply moving joints to a given angle.
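The two-way nature of that loop is easiest to see in a toy sketch (again Python, and the proportional gain is just a number I picked): the same correction rule handles a change of goal and an unexpected change in the world, because all it ever does is shrink the difference between the two.

```python
def servo_step(current, desired, gain=0.3):
    """One tick of a simple proportional servo: move the muscle's
    state a fraction of the way toward the desired state."""
    return current + gain * (desired - current)

# Settle toward a goal...
length, goal = 1.0, 0.2
for _ in range(20):
    length = servo_step(length, goal)

# ...then perturb the world (the cat jumps out of your hands);
# the unchanged loop pulls things back into line with the unchanged goal.
length += 0.5
for _ in range(20):
    length = servo_step(length, goal)

print(abs(length - goal) < 0.01)  # True: back on target
```

A real muscle model would need velocity, damping and load, but even this bare loop gives the springy, self-correcting quality that simply setting joint angles never does.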

But this notion of servoing extends further into the brain, as I tried to explain in my Lucy book. Just about ALL behavior can be thought of as servo action – trying to minimize the differential between a desired state and a present state. “I’m hungry, therefore I’ll phone out for pizza, which will bring my hunger back down to its desired state of zero” is just the topmost level in a consequent flurry of feedback, as phoning out for pizza itself demands controlled arm movements to bring the phone to a desired position, or lift one’s body off the couch, or move a tip towards the delivery man. It’s not only motor actions that can be viewed in this light, either. Where the motor system tries to minimize the difference between an intended state and the present state by causing actions in the world, the sensory system tries to minimize the difference between the present state and the anticipated state, by causing actions in the brain. The brain seems to run a simulation of reality that enables it to predict future states (in a fuzzy and fluid way), and this simulation needs to be kept in step with reality at several contextual levels. It, too, is reminiscent of a battery of linked servomotors, and there’s that bidirectionality again. With my Lucy project I kept seeing parallels here, and I’d like to incorporate some of these ideas into my new creatures.

This brings up the subject of thinking. When I created my Norns I used a stimulus-response approach: they sensed a change in their environment and reacted to it. The vast bulk of connectionist AI takes this approach, but it’s not really very satisfying as a description of animal behavior beyond the sea-slug level. Brains are there to PREDICT THE FUTURE. It takes too long for a heavy animal with long nerve pathways to respond to what’s just happened (“Ooh, maybe I shouldn’t have walked off this cliff”), so we seem to run a simulation of what’s likely to happen next (where “next” implies several timescales at different levels of abstraction). At primitive levels this seems pretty hard-wired and inflexible, but at more abstract levels we seem to predict further into the future when we have the luxury, and make earlier but riskier decisions when time is of the essence, so that means the system is capable of iterating. This is interesting and challenging.
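The “iterating under time pressure” part can be caricatured in a few lines (Python again, with an invented time budget and a deliberately trivial forward model): roll the world-model forward as many steps as the available time allows, so the creature looks deep into the future at leisure and makes earlier, riskier calls when rushed.

```python
def rollout(state, model, steps):
    """Iterate a forward model to predict a future state."""
    for _ in range(steps):
        state = model(state)
    return state

def decide(state, model, time_budget_ms, ms_per_step=10):
    """Look as far ahead as the time budget allows before acting:
    deeper, safer lookahead at leisure; shallow, riskier
    prediction when time is of the essence."""
    depth = max(1, time_budget_ms // ms_per_step)
    return rollout(state, model, depth)

# Toy forward model: a falling body gathers speed each 0.1s step.
model = lambda v: v + 9.8 * 0.1

deep    = decide(0.0, model, time_budget_ms=100)  # 10 predicted steps
rushed  = decide(0.0, model, time_budget_ms=10)   # only 1 step ahead
print(deep, rushed)
```

The interesting (and hard) part, of course, is that the real model isn’t a single equation but learned chains of association, and “next” means different things at different levels of abstraction. But the shape of the machinery – an iterable predictor throttled by circumstance – is what I’m after.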

Thinking often (if not always) implies running a simulation of the world forwards in time to see what will happen if… When we make plans we’re extrapolating from some known future towards a more distant and uncertain one in pursuit of a goal. When we’re being inventive we’re simulating potential futures, sometimes involving analogies rather than literal facts, to see what will happen. When we reflect on our past, we run a simulation of what happened, and how it might have been different if we’d made other choices. We have an internal narrative that tracks our present context and tries to stay a little ahead of the game. In the absence of demands, this narrative can flow unhindered and we daydream or become creative. As far as I can see, this ability to construct a narrative and to let it freewheel in the absence of sensory input is a crucial element of consciousness. Without the ability to think, we are not conscious. Whether this ability is enough to constitute conscious awareness all by itself is a sticky problem that I may come back to, but I’d like my new creatures actively to think, not just react.

And talking about analogies brings up categorization and generalization. We classify our world, and we do it in quite sophisticated ways. As a baby we start out with very few categories – perhaps things to cry about and things to grab/suck. And then we learn to divide this space up into finer and finer, more and more conditional categories, each of which provokes finer and finer responses. That metaphor of “dividing up” may be very apposite, because spatial maps of categories would be one way to permit generalization. If we cluster our neural representation of patterns, such that similar patterns lie close to each other, then once we know how to react to (or what to make of) one of those patterns, we can make a statistically reasonable hunch about how to react to a novel but similar pattern, simply by stimulating its neighbors. There are hints that such a process occurs in the brain at several levels, and generalization, along with the ability to predict future consequences, are hallmarks of intelligence.
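The “stimulating its neighbors” trick is worth a sketch too. Here’s a deliberately tiny, one-dimensional caricature of a category map (Python; the prototypes, the spread factor and the single reward number are all inventions for illustration): experience credited to the winning unit bleeds into nearby units, so a novel but similar pattern inherits a statistically reasonable response.

```python
# A toy "category map": stimuli land on the closest of a row of
# prototype units, and a response learned at one unit bleeds into its
# neighbors, so similar-but-novel patterns inherit a sensible reaction.

prototypes = [0.0, 0.25, 0.5, 0.75, 1.0]   # one prototype per unit
responses  = [0.0] * len(prototypes)       # learned response strengths

def best_unit(pattern):
    """Index of the prototype closest to the incoming pattern."""
    return min(range(len(prototypes)),
               key=lambda i: abs(prototypes[i] - pattern))

def learn(pattern, reward, spread=0.5):
    """Credit the winning unit fully and its neighbors fractionally."""
    win = best_unit(pattern)
    for i in range(len(responses)):
        responses[i] += reward * spread ** abs(i - win)

learn(0.5, reward=1.0)            # experience with one pattern...
print(responses[best_unit(0.65)]) # → 0.5: a novel nearby pattern
                                  #   already gets half the response
```

In a proper self-organizing map the prototypes themselves would drift toward the patterns they win, so the layout of categories emerges from experience rather than being handed down; this sketch only shows the generalization half of the story.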

So there we go. It’s a start. I want to build a creature that can think, by forming a simulation of the world in its head, which it can iterate as far as the current situation permits, and disengage from reality when nothing urgent is going on. I’d like this predictive power to emerge from shorter chains of association, which themselves are mapped upon self-organized categories. I’d like this system to be fuzzy, so that it can generalize from similar experiences and perhaps even form analogies and metaphors that allow it to be inventive, and so that it can see into the future in a statistical way – the most likely future state being the most active, but less likely scenarios being represented too, so that contingencies can be catered for and the Frame Problem goes away (see my discussion of this in the comments section of an article by Peter Hankins). And I’d like to incorporate the notion of multi-level servomechanisms into this, such that the ultimate goals of the creature are fixed (zero hunger, zero fear, perfect temperature, etc.) and the brain is constantly responding homeostatically (and yet predictively and ballistically) in order to reduce the difference between the present state and this desired state (through sequences of actions and other adjustments that are themselves servoing).

Oh, and then there’s a bunch of questions about perception. In my Lucy project I was very interested in, but failed miserably to conquer, the question of sensory invariance (e.g. the ability to recognize a banana from any angle, distance and position, or at least a wide variety of them). Invariance may be bound up with categorization. This is a big but important challenge. However, I may not have to worry about it, because I doubt my creatures are going to see or feel or hear in the natural sense. The available computer power will almost certainly preclude this and I’ll have to cheat with perception, just to make it feasible at all. That’s an issue for another day – how to make virtual sensory information work in a way that is computationally feasible but doesn’t severely limit or artificially aid the creatures.

Oh yes, and it’s got to learn. All this structure has to self-organize in response to experience. The learning must be unsupervised (nothing can tell it what the “right answer” was, for it to compare its progress) and realtime (no separate training sessions, just non-stop experience of and interaction with the world).

Oh man, and I’d like for there to be the ability for simple culture and cooperation to emerge, which implies language and thus the transfer of thoughts, experience and intentions from one creature to another. And what about learning by example? Empathy and theory of mind? The ability to manipulate the environment by building things? OK, STOP! That’s enough to be going on with!

A shopping list is easy. Figuring out how to actually do it is going to be a little trickier. Figuring out how to do it in realtime, when the virtual world contains dozens of creatures and the graphics engine is taking up most of the CPU cycles is not all that much of a picnic either. But heck, computers are a thousand times faster than they were when I invented the Norns. There’s hope!

Ok, so, about this game thing…

If you look up into the night sky, just to the right of the bit that looks like a giant shopping cart, you’ll see a small blue star, called Sulis. Around it floats a stormy orange gas giant, and around that in turn swims a small moon, called Selene (until I come up with a nicer name).

Selene is gravitationally challenged by all that whirling mass and hence is warm, comparatively wet and volcanic. It’s a craggy, canyon-filled landscape, by sheer coincidence remarkably similar to northern Arizona. The thin atmosphere contains oxygen, but sadly also much SO2 and H2S, making it hostile to earthly life without a spacesuit. But life it does contain! Spectroscopic analysis and photography from two orbiters have confirmed this (never mind how the orbiters got there – work with me, guys!)

There are hints of many species, some sessile, some motile. And just a little circumstantial evidence that one of these species may be moderately intelligent and perhaps even has a social structure. Your mission, should you wish to pay me a few dollars for the privilege, is to mount an expedition to Selene and study its biology and ecosystems. If at all possible I’d also like you to attempt contact with this shadowy sentient life-form.

Nothing is known (well, ok, I know it because I’m God, but I’m not telling you) about Selene’s ecosystems, geology, climate or, in particular, its biology. What is the food web? How do these creatures behave? What’s their anatomy? What niches do they occupy? How does their biochemistry work? How do they reproduce? Do they have something similar to DNA or does a different principle hold sway? What’s the likely evolutionary history? For the more intelligent creatures, what can be learned of their psychology, neurology and social behavior? Do they have language? Can we communicate with them? Are they dangerous? How smart are they? Do they have a culture? Do they have myths; religion? What does it all tell us?

You need to work together to build an encyclopedia – like Wikipedia – containing the results of your experiments, your observations and conclusions, stories, tips for exploration and research, maps, drawings, photos and all the rest. It will be a massive (I hope!), collaborative, Open Science experiment in exobiology…

So that’s the gist of what I’m working on. I was going to open a pet store and sell imported aliens but I decided it would be much more fun to build a virtual world you can actually step into, instead of watching through the bars of a cage. I’ll try to develop a whole new, self-consistent but non-earthlike biology, building on some of the things I learned from Creatures and my Lucy robot. I’ll discuss some of the technical issues on this blog but I’ll try not to give the game away – the point of the exercise is to challenge people to do real science on these creatures and deduce/infer this stuff for themselves. They/you did it admirably for Creatures but in those days I couldn’t give you anything as complex and comprehensive as I can now, and this time I don’t have marketing people breathing down my neck telling me that nobody’s interested in science.

I have no idea what the actual features will be, or to what extent it’ll be networked, etc. I’m just starting work on the terrain system and I have an awful long way to go. Because I’m working unfunded and have only a limited amount of money to live on, I’m going to work the other way round to most people, so instead of working to a spec I’ll squeeze in as many features as I can before the cash runs out. I know it’s absurd to hope to do all this in the space of a year to 18 months – after all, how many programmers and artists worked on Spore? Something like a hundred? But I think I’m as well equipped for the job as anyone, I work far more efficiently on my own, and it’s worth the attempt.

Whaddaya think?

I refute it thus… Ouch!

Following on from the John Searle interview, Paul Almond and I have been having quite a lengthy discussion about what reality means. Usually I’m the extreme one because of my argument that some things that exist only in computers are just as real as those that exist in the so-called physical world, but this time I seem to be the moderate (or reactionary?) one because Paul believes everything is real and his office chair is Albert Einstein (well, sort of, anyway). If you’re interested, the conversation is over at Machines Like Us.

Mystic Pizza

Norm Nason and Paul Almond, over at Machines Like Us, have managed to pull quite a coup and conduct a long and fascinating interview with the philosopher John Searle, on his Chinese Room argument and others.

As anyone who’s read my books may have surmised, I don’t agree with all of Searle’s arguments and I don’t share his disbelief in the possibility of Strong AI (even though I doubt very much that a digital computer is a practical medium for such a thing, long-term). But rather than discuss it here I’ve posted a long comment on the original site. It’s too big a subject to tackle in a blog post really, let alone a comment to one, so maybe I’ll have to write another book. I can’t make up my mind whether I next want to write a book called “Machines like us” (Norm borrowed the title for his site from one of my talks), about mechanism and the human condition, or whether to write one about “Un-physics” – a more general elucidation of a process-oriented view of nature, the behavior of complex feedback systems and self-organization. Does anyone care either way? I don’t suppose so.

Anyway, Paul’s excellent interview with John Searle can be found here, and my somewhat inept and hurried attempts to put forward an alternative view are here. Enjoy.

Bumpy landings for nerds everywhere

I just heard that Microsoft has shut down its Flight Simulator dev team, Aces Studio. It’s not clear whether this is really the end of Flightsim or just a reorganization, but hell! I’d much rather they’d kept Flightsim and shut down Windows.

I don’t do that sort of thing now – at the age of 51 I’ve sort of grown out of it. Kinda. Mostly. But I’ve had every version of FS from V1 to X and loved them all. There are two things about computers that I deeply adore: artificial intelligence and virtual worlds (and even my approach to AI involves virtual worlds in at least three fundamental ways). I love the way we can program a computer to contain a space – a place with its own history and reality for us to explore. And nothing exemplifies that better than Flightsim.

The Flightsim world is just there. There’s no Yerhafters (as in “first yerhafter shoot the troll, then yerhafter say the magic charm”). There’s just a world, scenery to look at, airports to crash into and planes to learn to fly. And it did teach me to fly – after a few years’ practice with FS1 and FS2 it was pretty easy to get my pilot’s licence. I love the way you can make your own challenges, or just freewheel in the clouds. It’s like the way Lego used to be before the human attention span dropped to zero and Lego had to start selling little specialized packs with instructions, because nobody had an imagination any more.

I have such fond memories of the first Flightsim: taking off from Chicago Meigs, or passing over exotic sounding places like Snohomish and Everett. I swear it got colder as I flew north. Of course, less imaginative people could only see a few wireframe boxes in magenta and cyan, surrounding a couple of converging lines. But I knew I was on final approach into Sea-tac on minimums and if I screwed it up it was seriously going to hurt.

And then as the scenery got better and the aircraft more sophisticated, I used to love to step out of real life and go visit somewhere new. I saw India long before I went there for real. I’ve travelled up the Nile, buzzed the Hong Kong skyline and visited a hundred places I’ve not yet been to in this world but will one day. Thank heavens Google Earth is there to fill that gap. It even has a flightsim mode, but nothing to compare with spooling up a couple of Rolls-Royce engines and setting the navs for a night flight to Rome. 

I’m sure there will be bigger, better flightsims to come, maybe even from Microsoft, but it certainly seems like the end of an era.