If anyone’s wondering why they can’t get to the Grandroids pages on Kickstarter, it’s because they’re having server upgrade problems. Should be back online soonish.

[Edit: Ok, they’re back up again now.]

Of camels and committees

I did a Biota podcast last night and Tom understandably asked me a little about my views on open source and collaborative development. I didn’t give a very good answer, but the subject keeps coming up lately, so I thought I’d write a post about it to try to explain my position. People want to know why I don’t plan to develop my game as open source. Why don’t I collaborate with others (often specifically the person asking the question) and hence do a far better job than I can possibly do on my own? Why am I so opposed to teamwork? Why am I so stuck up and antisocial? (Alright, nobody actually asks that, but sometimes I suspect that’s what they’re thinking.)

I’m really not opposed to collaboration. Not at all. Nor open source. It just doesn’t work well for me personally, and in particular for this application. Collaboration is the norm, so it’s not like I’m discriminating against a minority here. It’s practically compulsory in many areas. Just try getting a European Commission science grant without including at least three different countries in the team. If it weren’t for Kickstarter and you lovely generous people I’d have little hope of getting my work funded at all, and for over a decade I’ve had to fund most of it myself. But that doesn’t mean collaboration is necessarily always the best way to go about things.

In the case of my Grandroids project, writing a computer game isn’t the objective, it’s the intended outcome. These are actually very different things. For instance, the intended outcome for the Kon-Tiki expedition was to arrive at the Tuamotu Islands, but it wasn’t the objective. If it had been the objective then Thor Heyerdahl could simply have got on a plane. Any decent pan-European research collaboration could have told him that. At least after a few committee meetings to thrash out the reporting requirements.

If the game I’m writing was merely the objective then a bunch of us could sit down and discuss how we were going to achieve it. But for me it’s very much the other way round. I already have a theory that I’m trying to develop, and the game is intended to be an entertaining and useful expression of that theory. But the theory is in my head; it isn’t fully developed yet, and so I can’t delegate parts of it or even explain it properly to people. It therefore has to be a conversation between me and a computer.

And it’s not like I can even farm out the peripheral stuff. Not yet, anyway. The graphics and physics engines could be farmed out if it weren’t for the fact that they’re already written and I’ve bought the licence (in any case, without them I couldn’t do my part, so they had to come first). Even the 3D creature design is a biological issue, not predominantly an artistic one, because I’m using the physics engine and virtual muscles to control it, rather than conventional animation, so the weight distribution and anatomy have to work hand-in-hand with the muscle control system, which in turn is very co-dependent on how the brain is developed. If someone designs a beautiful creature but when I plug it into my code it keeps falling over, it’s not going to be held up by Art alone. Whereas if I develop the 3D art as well as designing the low-level postural control in the brain, my left hand can learn from my right and vice versa. These iterations occur on a minute-by-minute basis and I get a direct, personal insight into both the art and neuroscience problems that I would never have been able to take advantage of if someone else had done the graphics.

This is why I’ve been building robots by myself, too. It was developing the electronics and signal processing that gave me insights and ideas into how the human brain might work, and it was neuroscience and biology that gave me new ideas about how to design the electronics and mechanics. Those intimate connections between apparently disparate ideas are the fuel for creativity. The creative act is primarily an act of analogy.

And all that has to happen inside a single brain, because in the brain ideas can connect up in myriad ways that aren’t confined to language and drawings. I don’t have any translation problems in my head; I don’t send memos to myself and then misread them; I understand every single word I say, which is rarely the case when I’m discussing things with other people. If I was a painter, this would be far more self-evident. It’s not like Michelangelo could have restricted himself to painting the faces on the Sistine Chapel ceiling while other team members chose the layout, focus-grouped the storyline, painted the arms, etc. It had to be a single creative act. Although now that I think about it, perhaps that explains the Venus de Milo…

In computing terms it’s somewhat similar to Linux. Zillions of people can maintain Linux and add to it now, but the core of it had to come out of Linus Torvalds’s head. Yet, even then, people already knew what an operating system was and roughly how to go about designing one. That’s far from the case in AI. We know hundreds of ways not to do it, but how to actually achieve it is still an open question. There are plenty of other, often well-funded attempts to sit round a table and figure out how to create AGI collaboratively, so if that’s the best way to go about it we’ll soon find out. But sometimes a better way to search an area is for everyone to spread out and follow their own nose. I have a specific route that I want to follow, I can’t explain it to anyone else in a way that would enable them to see exactly what I have in my mind, so it’s best for me if I just stay in my hermitage and write code. Sometimes code is the best way to explain an idea.

So, I really have nothing against collaboration or open source software per se, although if you’d asked me that yesterday morning, while I was up to my neck in CentOS, I might well have given a different answer.

Mappa Psyche

I’m kind of feeling my way, here, trying to work out how to explain a lifetime of treading my own path, and the comments to yesterday’s post have shown me just how far apart we all wander in our conceptual journey through life. It’s difficult even to come to shared definitions of terms, let alone shared concepts. But such metaphors as ‘paths’ and ‘journeys’ are actually quite apt, so I thought I’d talk a little about the most important travel metaphor by far that underlies the work I’m doing: the idea of a map.

This is trivial stuff. It’s obvious. BUT, the art of philosophy is to state the blindingly obvious (or at least, after someone has actually stated it, everyone thinks “well that’s just blindingly obvious; I could have thought of that”), so don’t just assume that because it’s obvious it’s not profound!

So, imagine a map – not a road atlas but a topographical map, with contours. A map is a model of the world. It isn’t a copy of the world, because the contours don’t actually go up and down and the map isn’t made from soil and rock. It’s a representation of the world, and it’s a representation with some crucial and useful correspondences to the world.

To highlight this, think of a metro map instead, for a moment. I think the London Underground map was the first to do this. A metro map is a model of the rail network, but unlike a topographic map it corresponds to that network only in one way – stations that are connected by lines on the map are connected by rails underground. In every other respect the map is a lie. I’m not the only person to have found this out the hard way, by wanting to go from station A to station B and spending an hour travelling the Tube and changing lines, only to discover when I got back to the surface that station B was right across the street from station A! A metro map is an abstract representation of connectivity and serves its purpose very well, but it wouldn’t be much use for navigating above ground.
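In fact a metro map can be boiled down to almost nothing in code. The sketch below (my own toy illustration, with made-up station names) shows just how little such a map asserts: an adjacency relation and nothing else. Distance, direction and street-level geography simply aren’t in the data structure, which is exactly why stations A and B can be across the street from each other without the map ever telling you so.

```python
# A metro map reduced to the one thing it actually asserts: connectivity.
# Station names are invented; distances and directions are deliberately absent.
METRO = {
    "A": {"C", "D"},
    "B": {"D"},
    "C": {"A"},
    "D": {"A", "B"},
}

def connected(a, b):
    """True if a rail directly links the two stations.

    This is the ONLY question the metro map can answer; asking it
    'how far?' or 'which way?' is a type error in disguise.
    """
    return b in METRO[a]
```

A topographical map, by contrast, would have to carry coordinates, so that spatial questions (east of? near?) also had answers.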

A topographical map corresponds to space in a much more direct way. If you walk east from where you are, you’ll end up at a point on the map that is to the right of the point representing where you started. Both kinds of map are maps, obviously, but they differ in how the world is mapped onto them. Different kinds of mapping have different uses, but the important point here is that both retain some useful information about how the world works. A map is not just a description of a place, it’s also a description of the laws of geometry (or in the case of metro maps, topology). In the physical world we know that it’s not possible to move from A to B without passing through the points in-between, and this fact is represented in topographical maps, too. Similarly, if a map’s contours suddenly become very close together, we know that in the real world we’ll find a cliff at this point, because the contours are expressing a fact about gradients.

So a map is a model of how the world actually functions, albeit at such a basic level that it might not even occur to you that you once had to learn these truths for yourself, by observation and trial-and-error. It’s not just a static representation of the world as it is, it also encodes vital truths about how one can or can’t get from one place to another.

And of course someone has to make it. Actually moving around on the earth and making observations of what you can see allows you to build a map of your experiences. “I walked around this corner and I saw a hill over there, so I shall record it on my map.” A map is a memory.

Many of the earliest maps we know of have big gaps where knowledge didn’t exist, or vague statements like “here be dragons”. And many of them are badly distorted, partly because people weren’t able to do accurate surveys, and partly because the utility of 1:1 mapping hadn’t completely crystallized in people’s minds yet (in much the same way that early medieval drawings tend to show important people as larger than unimportant ones). So maps can be incomplete, inaccurate and misguided, just like memories, but they still have utility and can be further honed over time.

Okay, so a map is a description of the nature of the world. Now imagine a point or a marker on this map, representing where you are currently standing. This point represents a fact about the current state of the world. The geography is relatively fixed, but the point can move across it. Without the map, the point means nothing; without the point, the map is irrelevant. The two are deeply interrelated.

A map enables a point to represent a state. But it also describes how that state may change over time. If the point is just west of a high cliff face, you know you can’t walk east in real life. If you’re currently at the bottom-left of the map you know you aren’t going to suddenly find yourself at the top-right without having passed through a connected series of points in-between. Maps describe possible state transitions, although I’m cagey about using that term, because these are not digital state transitions, so if you’re a computery person, don’t allow your mind to leap straight to abstractions like state tables and Hidden Markov Models!
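To make that concrete without sliding into state tables, here’s a minimal sketch (my own toy example, not anything from the Grandroids engine): a grid world where `#` marks a cliff face, a point `(row, col)` is the current state, and the map itself dictates which changes of state are physically possible.

```python
# A toy topographical "map": a grid in which '#' marks a cliff face.
# A point (row, col) is a state; the map constrains how that state may change.
WORLD = [
    "....#..",
    "....#..",
    ".......",
]

def legal_moves(state):
    """Return the adjacent states reachable in one step.

    You can't step onto a cliff, and you can't teleport: every change
    of state must pass through a connected neighbour, exactly as the
    contours and geometry of a real map imply.
    """
    r, c = state
    moves = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(WORLD) and 0 <= nc < len(WORLD[0]) and WORLD[nr][nc] != "#":
            moves.append((nr, nc))
    return moves
```

Standing just west of the cliff at `(0, 3)`, the only legal moves are west or south; east is simply not among the possible next states. The transitions are continuous in spirit, even though a grid discretizes them for the sake of a short example.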

And now, here’s the blindingly obvious but really, really important fact: If a point can represent the current state of the world, then another point can represent a future state of the world; perhaps a goal state – a destination. The map then contains the information we need in order to get us from where we are to where we want to go.

Alternatively, remembering that we were once at point A and then later found ourselves at point B, enables us to draw the intervening map. If we wander around at random we can draw the map from our experiences, until we no longer have to wander at random; we know how to get from where we are to where we want to go. The map has learned.
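The wander-then-navigate idea can be sketched directly (again a toy of my own, not the neural mechanism the game actually uses): wander at random recording which states turned out to be adjacent, and that record *is* the learned map; afterwards a simple breadth-first search over it replaces wandering with purposeful travel.

```python
import random
from collections import deque

WORLD = ["....", ".##.", "...."]  # '#' is impassable terrain

def legal_moves(state):
    """Neighbouring states the world actually permits."""
    r, c = state
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(WORLD) and 0 <= nc < len(WORLD[0]) and WORLD[nr][nc] != "#":
            out.append((nr, nc))
    return out

def wander(start, steps=2000, seed=0):
    """Wander at random, recording which states proved to be adjacent.

    The adjacency dict returned is the learned map: pure experience,
    no surveying, complete with gaps for places never visited.
    """
    rng = random.Random(seed)
    learned, state = {}, start
    for _ in range(steps):
        nxt = rng.choice(legal_moves(state))
        learned.setdefault(state, set()).add(nxt)
        learned.setdefault(nxt, set()).add(state)
        state = nxt
    return learned

def route(learned, start, goal):
    """Breadth-first search over the learned map: no more wandering."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in learned.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal never experienced, or genuinely unreachable
```

Once `wander` has run, `route(learned, (0, 0), (2, 3))` yields a step-by-step path without touching the world again. The map has learned, in exactly the sense above, though in a brain the “map” is neural tissue rather than a dictionary.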

Not only do we know how to get from where we are to where we want to go, but we also know something about where we are likely to end up next – the map permits us to make predictions. Furthermore, we can contemplate a future point on the map and consider ways to get there, or look at the direction in which we are heading and decide whether we like the look of where we’re likely to end up. Or we can mark a hazard that we want to avoid – “Uh-oh, there be dragons!”. In each case, we are using points on the map to represent a) our current state, and b) states that could exist but aren’t currently true – in other words, imaginary states. These may be states to seek, to avoid or otherwise pay attention to, or they might just be speculative states, as in “thinking about where to go on vacation”, or “looking for interesting places”, or even simply “dropping a pin in the map, blindfold.” They can also represent temporarily useful past states, such as “where I left my car.” The map then tells us how the world works in relation to our current state, and therefore how this relates functionally to one of these imagined states.

By now I imagine you can see some important correspondences – some mappings – between my metaphor and the nature of intelligence. Before you start thinking “well that’s blindingly obvious, I want my money back”, there’s a lot more to my theories than this, and you shouldn’t take the metaphor too literally. To turn this idea into a functioning brain we have to think about multiple maps; patterns and surfaces rather than points; map-to-map transformations with direct biological significance; much more abstract coordinate spaces; functional and perceptual categorization; non-physical semantics for points, such as symbols; morphs and frame intersections; neural mechanisms by which routes can be found and maps can be assembled and optimized… Turning this metaphor into a real thinking being is harder than it looks – it certainly took me by surprise! But I just wanted to give you a basic analogy for what I’m building, so that you have something to place in your own imagination. By the way, I hesitate to mention this, but analogies are maps too!

I hope this helps. I’ll probably leave it to sink in for a while, at least as far as this blog is concerned, and start to fill in the details later, ready for my backers as promised. I really should be programming!

Introduction to an artificial mind

I don’t want to get technical right now, but I thought I’d write a little introduction to what I’m actually trying to do in my Grandroids project. Or perhaps what I’m not trying to do. For instance, a few people have asked me whether I’ll be using neural networks, and yes, I will be, but very probably not of the kind you’re expecting.

When I wrote Creatures I had to solve some fairly tricky problems that few people had thought much about before. Neural networks have been around for a long time, but they’re generally used in very stylized contexts, to recognize and classify patterns. Trying to create a creature that can interact with the world in real-time and in a natural way is a very different matter. For example, a number of researchers have used what are called randomly recurrent networks to evolve simple creatures that can live in specialized environments, but mine was a rather different problem. I wanted people to care about their norns and have some fun interacting with them. I didn’t expect people to sit around passively watching hundreds of successive generations of norns blundering around the landscape, in the hope that one would finally evolve the ability not to bump into things.

Norns had to learn during their own lifetimes, and they had to do so while they were actively living out their lives, not during a special training session. They also had to learn in a fairly realistic manner in a rich environment. They needed short- and long-term memories for this, and mechanisms to ensure that they didn’t waste neural real-estate on things that later would turn out not to be worth knowing. And they needed instincts to get them started, which was a bit of a problem because this instinct mechanism still had to work, even if the brains of later generations of norns had evolved beyond recognition. All of these were tricky challenges and it required a little ingenuity to make an artificial brain that was up to the task.

So at one level I was reasonably happy with what I’d developed, even though norns are not exactly the brightest sparks on the planet. At least it worked, and I hadn’t spent five years working for nothing. But at another level I was embarrassed and deeply frustrated. Norns learn, they generalize from their past to help them deal with novel situations, and they react intelligently to stimuli. BUT THEY DON’T THINK.

It may not be immediately obvious what the difference is between thinking and reacting, because we’re rarely aware of ourselves when we’re not thinking and yet at the same time we don’t necessarily pay much attention to our thoughts. In fact the idea that animals have thoughts at all (with the notable exception of us, of course, because we all know how special we are) became something of a taboo concept in psychology. Behaviorism started with the fairly defensible observation that we can’t directly study mental states, and so we should focus our attention solely on the inputs and outputs. We should think of the brain as a black box that somehow connects inputs (stimuli) with outputs (actions), and pay no attention to intention, because that was hidden from us. The problem was that this led to a kind of dogma that still exists to some extent today, especially in behavioral psychology. Just because we can’t see animals’ intentions and other mental states, this doesn’t mean they don’t have any, and yet many psychological and neurological models have been designed on this very assumption. Including the vast bulk of neural networks.

But that’s not what it’s like inside my head, and I’m sure you feel the same way about yours. I don’t sit here passively waiting for a stimulus to arrive, and then just react to it automatically, on the basis of a learned reflex. Sometimes I do, but not always by any means. Most of the time I have thoughts going through my mind. I’m watching what’s going on and trying to interpret it in the light of the present context. I’m worrying about things, wondering about things, making plans, exploring possibilities, hoping for things, fearing things, daydreaming, inventing artificial brains…

Thinking is not reacting. A thought is not a learned reflex. But nor is it some kind of algorithm or logical deduction. This is another common misapprehension, both within AI and among the general public. Sometimes, thinking equates to reasoning, but not most of the time. How often do you actually form and test logical propositions in your head? About as often as you perform formal mathematics, probably. And yet artificial intelligence was founded largely on the assumption that thinking is reasoning, and reasoning is the logical application of knowledge. Computers are logical machines, and they were invented by extrapolation from what people (or rather mathematicians, which explains a lot) thought the human mind was like. That’s why we talk about a computer’s memory, instructions, rules, etc. But in truth there is no algorithm for thought.

So a thought is not a simple learned reflex, and it’s not a logical algorithm. But what is it? How do the neurons in the brain actually implement an idea or a hope? What is the physical manifestation of an expectation or a worry? Where does it store dreams? Why do we have dreams? These are some of the questions I’ve been asking myself for the past 15 years or so. And that’s what I want to explore in this project. Not blindly, I should add – it’s not like I’m sitting here today thinking how cool it will be to start coming up with ideas. I already have ideas; quite specific ones. There are gaps yet, but I’m confident enough to stick my neck out and say that I have a fair idea what I’m doing.

How my theories work, and what that means for the design of neural networks that can think, will take some explaining. But for now I just wanted to let you know the key element of this project. My new creatures will certainly be capable of evolving, but evolution is not what makes them intelligent and it’s not the focus of the game. They’ll certainly have neural network brains, but nothing you may have learned about neural networks is likely to help you imagine what they’re going to be like; in fact it may put you at a disadvantage! The central idea I’m exploring is mental imagery in its broadest sense – the ability for a virtual creature to visualize a state of the world that doesn’t actually exist at that moment. I think there are several important reasons why such a mechanism evolved, and this gives us clues about how it might be implemented. Incidentally, consciousness is one of the consequences. I’m not saying my creatures will be conscious in any meaningful way, just that without imagery consciousness is not possible. In fact without imagery a lot of the things that AI has been searching for are not possible.
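To picture what “visualizing a state that doesn’t exist” buys a creature, here’s the barest possible caricature (entirely my own; the real mechanism is neural, not a pair of lambdas): actions are tried out against an internal model of the world rather than the world itself, and the creature acts on the imagined outcome it likes best.

```python
def imagine(state, action, model):
    """Apply an action to an internal model of the world, not the world
    itself: the resulting state is imagined, and never has to happen."""
    return model(state, action)

def choose(state, actions, model, desirability):
    """Pick the action whose *imagined* outcome looks best."""
    return max(actions, key=lambda a: desirability(imagine(state, a, model)))

# A trivial world: position on a line; the creature would like to be at 5.
model = lambda pos, step: pos + step          # the creature's world-model
desirability = lambda pos: -abs(pos - 5)      # closer to 5 feels better

best = choose(0, [-1, 0, +1], model, desirability)  # steps toward 5
```

The point of the caricature is only this: nothing is executed until the futures have been compared, and the comparison happens over states that are purely imaginary. A reflex machine has no such inner rehearsal.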

So, in short, this is a project to implement imagination using virtual neurons. It’s a rather different way of thinking about artificial intelligence, I think, and it’s going to be a struggle to describe it, but from a user perspective I think it makes for creatures that you can genuinely engage with. When they look at you, there will hopefully be someone behind their eyes in a way that wasn’t true for norns.

I’m funded!!!! Yippee!!!!

Today has been a bit thrilling, I have to say! Pledges to my Kickstarter project had begun to tail off a bit, as expected, although amazingly I was still on target to reach my goal in a couple of days. And then someone posted about it on slashdot and all my fellow geeks, many of whom happened to be leaving the Game Developers Conference at that moment, suddenly got to hear of it. Whoosh!

So I’m funded! My life’s work can continue! I get my chance to show you all what I’ve been thinking about this past 15 years, and I promise it’s really rather interesting.

People have already pledged more than I asked for and pledges are still coming in, so if that continues for a little while yet I’ll feel a lot more comfortable about the future and able to buy the tools I need to do a good job. It will also help plug the gap between releasing the software and seeing any new revenue from it.

Not only am I funded, but I’m funded by some incredibly nice people, who are doing it because they believe in the same things that I do, and they want to get the chance to play with the results, not because they want to make money out of me. That feels really good. When things go wrong now, as they surely will from time to time, it’ll be my fault, and not because investors are getting nervous, or people don’t deliver on time, or a publisher is interfering with the design. It may seem perverse but I’m really much happier when it’s my fault and therefore something I have control over.

Anyway, the Kickstarter period is not over yet. There are still 34 days to go! When I started I wasn’t at all sure that this would be enough; now it seems hilarious that we all have to sit here and wait! I’ll say my thanks properly after the project closes, but for now, thank you all so much for your support, whether it was (or will be) money, publicity or good wishes. Love and gratitude to my old friends and hello and thank you to my new ones.


Grandroids FAQ

I’m putting FAQs for my Kickstarter project here, so that I can add to them without bothering everyone with updates. Oh, damn! I’ve already thought of another one… So if you have a question, check here first! I’ll add a new blog category.

1. Linux: Several people have asked if I’m going to support Linux. I’m committed to using Unity3D as my graphics engine (I chose it very carefully, and I really don’t think I could make this project happen without Unity). At the moment Unity doesn’t support Linux. It does support Windows, Mac, iPhone, Android, Xbox and Wii, so it’s certainly not impossible they’ll support Linux eventually too. In fact the underlying framework is already very Linux-friendly, so it shouldn’t be too difficult if they think there’s a market. A number of Unity developers have asked for it. However, it’s not something I have any control over. If Unity offers Linux support then I’ll definitely port the game to Linux too, but I can’t do anything until/unless that happens.

2. Collaboration: People have offered to help with the project in various ways, which I’m very flattered by. Thank you. The situation is this: As far as the core engine is concerned, I have to work alone. The computational neuroscience and biology involved is very, very complex and unique, and it has an impact on almost every aspect of the code (and even the graphics). There’s no way I could do this stuff in a collaborative environment. I have to keep everything inside my head, because I’m inventing completely new things as I go, and every time one part of it changes, it has knock-on effects throughout the system. So I’m just not in a position to share the core programming with anyone. Sorry.

Having said that, I’m writing an engine, at both the computing and biological levels. It will have an open API and an open genetics, so everyone is free to write new tools, create new objects and scenes, manipulate genes, create new species, etc. and I’d be delighted if you would do that. This is my living, so I need to retain some of the action, but if you had any connection to Creatures you’ll know that I design things in such a way that people can contribute. This project will be more open than Creatures was, because the technology for it has come a long way since then. Some of this may take a while to roll out, but I’ll be publishing updates as time goes on.

3. The AI: Is it for real? Sure it’s for real! But before anyone who’s not familiar with my work gets the wrong idea, I should point out that these creatures are not going to win Jeopardy! The field I work in is biologically-inspired AI, and I make complex, realistic living organisms. Think rabbits and dogs, not Terminator or Data. Most people don’t really question the nature of intelligence much, but I can tell you, winning a game of chess is easy peasy compared to recognizing the difference between a pawn and a bishop, or picking up the chess pieces. Just because we find something easy now, after years of infant practice, it doesn’t mean it IS easy. Most AI is not real intelligence at all. Especially game AI, which is to intelligence what a portrait is to a person – a shallow imitation of the real thing. What I’m interested in is real, learned intelligence and hopefully the first glimmerings of a real mind, with desires and fears and intentions. It’s much more exciting than a pseudo-HAL.

4. Timing, features, etc. I’m banking on this taking about another year. Hopefully I’ll get enough money to go on a little longer than that and do a better job. I don’t know how long until I have alphas, betas, etc. There’s a lot of very new stuff in this project so I don’t have a precedent. I don’t know what I’ll actually be able to achieve either. I’ve found that the key is to get the biology right. Biology is an incredibly powerful toolkit, and very flexible. Get that core right and lots of happy things will fall out of it. So I don’t work in the normal way, with specifications and schedules and milestones. It cramps my style. My job is to be a good biologist and let the creatures emerge. This is all about emergence.

5. Helping out: Some of you have said you don’t have any money but you’ll spread the word. Great! Thank you! I don’t have any money either, so I quite understand. I appreciate all tweets, posts, articles, submissions, reviews… anything. Well, perhaps not holding a knife to someone and stealing their wallet, but most things. I appreciate all kinds of support, even if just good wishes. Oh, and I read every single comment, etc., so I notice and care, even if I don’t get a chance to reply personally.

6. The name: I had to pick a project name for Kickstarter, so I went with Grandroids because I like it (thanks to Andrew Lovelock for coming up with it!). But I see this as a kind of brand name to describe what I “purvey” in general terms. The game will almost certainly be called something else, but I don’t have a clue what, yet. It depends how the creatures turn out and what world they tell me they want to live in.

7. What will the creatures look like? Dunno. In my head the stars of the show are rather like orangutan babies – fairly shy, semi-bipedal, cute, slightly shadowy creatures whose confidence you have to work hard to win, but we’ll see. I’ve also had requests for tails, dragons and cute eyes. The creatures are physics-based, and that is a very demanding thing, especially since computer physics engines have some strange characteristics. The creatures’ limbs have elastic muscles and the weights of different parts of their bodies have an effect on inertia and balance. It’s quite challenging getting one that has a fair chance of learning to walk and doesn’t fall over when it glances sideways! On the upside, real physics allows real intelligence, as well as complex interactions with the world, and their motion can be quite startlingly natural, compared to animation. Animation is cheating.

8. Evolution. Just so’s you know, this is not a game about evolution. The creatures will certainly be able to evolve in a pretty sophisticated way (perhaps even the most sophisticated way ever tried), but in practice it’s not the primary focus of the game. Natural selection is VERY SLOW, and the time it takes is proportional to both the complexity of the creatures that are evolving and their life span. For these creatures to live long enough for you to get to know them and care about them means that they will evolve very slowly – not that many orders of magnitude faster than happens in the real world. Selective breeding will definitely speed this up a lot, so evolutionary changes will doubtless happen. But the most important thing is actually variation – children will inherit characteristics from both parents and so will have their own unique personalities, even if they’re often problematic ones! Evolution is there, but it’s not the point of the game. I just wanted to be sure we’re all clear on that, because most A-life projects are primarily about evolving very simple creatures with very short lifespans.

The other side of life

Yikes! I’ve had 1,000 visitors to my blog today – the busiest day ever. I’d write something really eloquent and interesting about my game, but I’m still overwhelmed by it all and haven’t had a moment to think about it. So I’ll tell you what went on behind the scenes yesterday, while you were all so busy supporting my work. I think you deserve to know exactly what kind of a genius you’re dealing with here.

1. Head for the post office to send a parcel. Insert key in ignition. Turn. Nothing happens. Flat battery.

2. Fear not! I’ve just bought a jump-starter for this very reason. It’s in the trunk. Press trunk release. Nothing happens. Flat battery.

3. Ok, I’ll open it manually. Go to remove key. Ignition switch clings onto it with death-like grip. Flat battery.

4. Oh. I know, I’ll pull down the back seats and climb into the trunk from the front. Where are the seat release toggles? Oh yes, in the trunk. Damn.

5. Maybe I’ll remove the battery and get a charger. Where’s the battery mounted? Ah. I’ll give you one guess.

6. Fish around with a hiking pole until I can reach the jump-starter from between the seats, but it won’t fit through the gap. Extract the cables anyway, noting that they’re 8 feet from the nearest power terminals.

7. Walk into Walmart. Buy booster cables. Return in triumph. Connect them to jump-starter and car. Nothing happens. Voltage drops but nothing else.

8. Return home. Put Air on the G String on MP3 player. Pour large whisky. Feign a smile.

I fixed it in the end. Turns out there’s a little plunger on the ignition lock that not only releases the key but also resets the anti-theft system that was probably preventing me from starting it. Took me a whole minute. Such is the life of an artificial intelligence researcher – people kept warning me that the machines would take over…