Brainstorm #1

Ok, here goes…

Life has been rather complicated and exhausting lately. Not all of it bad by any means; some of it really good, but still rather all-consuming. Nevertheless, it really is time that I devoted some effort to my work again. So I’ve started work on a new game (hooray! I hear you say ;-)). I have no idea what the game will consist of yet – just as with Creatures I’m going to create life and then let the life-forms tell me what their story is.

I wasted a lot of time writing Sim-biosis and then abandoning it, but I did learn a lot about 3D in the process. This time I’ve decided to swallow my pride and use a commercial 3D engine – Unity. (By the way, I’m writing for desktop environments – I need too much computer power for iPhone, etc.) Unity is the first 3D engine I’ve come across that supports C#.NET (well, Mono) scripting AND is actually finished and working, not to mention has documentation that gives developers some actual clue about the contents of the API. I have to jury-rig it a bit because most games have only trivial scripts and I need to write very complex neural networks and biochemistries, for which a simple script editor is a bit limiting, but the next version has debug support and hopefully will integrate even better with Visual Studio, allowing me to develop complex algorithms without regressing to the technology of the late 1970s in order to debug them. So far I’m very impressed with Unity and it seems to be capable of at least most of the weird things that a complex Alife sim needs, as compared to running around shooting things, which is what game engines are designed for.

So, I need a new brain. Not me, you understand – I’ll have to muddle along with the one I was born with. I mean I need to invent a new artificial brain architecture (and eventually a biochemistry and genetics). Nothing else out there even begins to do what I want, and anyway, what’s the point of me going to all this effort if I don’t get to invent new things and do some science? It’s bad enough that I’m leaving the 3D front end to someone else.

I’ve decided to stick my neck out and blog about the process of inventing this new architecture. I’ve barely even thought about it yet – I have many useful observations and hypotheses from my work on the Lucy robots but nothing concrete that would guide me to a complete, practical, intelligent brain for a virtual creature. Mostly I just have a lot more understanding of what not to do, and what is wrong with AI in general. So I’m going to start my thoughts almost from scratch and I’m going to do it in public so that you can all laugh at my silly errors, lack of knowledge and embarrassing back-tracking. On the other hand, maybe you’ll enjoy coming along for the ride and I’m sure many of you will have thoughts, observations and arguments to contribute. I’ll try to blog every few days. None of it will be beautifully thought through and edited – I’m going to try to record my stream of consciousness, although obviously I’m talking to you, not to myself, so it will come out a bit more didactic than it is in my head.

So, where do I start? Maybe a good starting point is to ask what a brain is FOR and what it DOES. Surprisingly few researchers ever bother with those questions and it’s a real handicap, even though skipping it is often a convenient way to avoid staring at a blank sheet of paper in rapidly spiraling anguish.

The first thing to say, perhaps, is that brains are for flexing muscles. They also exude chemicals but predominantly they cause muscles to contract. It may seem silly to mention this but it’s surprisingly easy to forget. Muscles are analog, dynamical devices whose properties depend on the physics of the body. In a simulation, practicality overrules authenticity, so if I want my creatures to speak, for example, they’ll have to do so by sending ASCII strings to a speech synthesizer, not by flexing their vocal cords, adjusting their tongue and compressing their lungs. But it’s still important to keep in mind that the currency of brains, as far as their output is concerned, is muscle contraction. It’s the language that brains speak. Any hints I can derive from nature need to be seen in this light.

One consequence of this is that most “decisions” a creature makes are analog; questions of how much to do something, rather than what to do. Even high-level decisions of the kind, “today I will conscientiously avoid doing my laundry”, are more fuzzy and fluid than, say, the literature on action selection networks would have us believe. Where the brain does select actions it seems to do so according to mutual exclusion: I can rub my stomach and pat my head at the same time but I can’t walk in two different directions at once. This doesn’t mean that the rest of my brain is of one mind about things, just that my basal ganglia know not to permit all permutations of desire. An artificial lifeform will have to support multiple goals, simultaneous actions and contingent changes of mind, and my model needs to allow for that. Winner-takes-all networks won’t really cut it.
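
(For the programmers among you, here’s roughly the constraint I mean, sketched in C# since that’s what I’ll be scripting in Unity. The class, the group names and the pick-by-urgency rule are just placeholders to illustrate the bookkeeping, not a proposed mechanism.)

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of mutual exclusion: several actions can run at once, as long as
// no two of them compete for the same resource (legs, gaze, voice...).
// Group names and the greedy pick-by-urgency rule are placeholders.
public class ActionArbiter
{
    public class Candidate
    {
        public string Name;
        public string ExclusionGroup;   // e.g. "locomotion", "head", "hands"
        public float Urgency;           // analog: how much, not just whether
    }

    // Pick every candidate we can, most urgent first, skipping any that would
    // clash with something already chosen. Nothing globally "wins"; desires
    // that don't compete are free to run together.
    public List<Candidate> Select(IEnumerable<Candidate> candidates)
    {
        var chosen = new List<Candidate>();
        var usedGroups = new HashSet<string>();
        foreach (var c in candidates.OrderByDescending(a => a.Urgency))
        {
            if (usedGroups.Contains(c.ExclusionGroup)) continue;  // the basal-ganglia veto
            usedGroups.Add(c.ExclusionGroup);
            chosen.Add(c);
        }
        return chosen;
    }
}
```

So I can pat my head and rub my stomach, because they don’t compete for the same resources, but I can’t walk in two directions at once. That’s the behavioral requirement, whatever the eventual machinery turns out to look like.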

Muscles tend to be servo-driven. That is, something inputs a desired state of tension or length and then a small reflex arc or more complex circuit tries to minimize the difference between the muscle’s current state and this desired state. This is a two-way process – if the desire changes, the system will adapt to bring the muscle into line; if the world changes (e.g. the cat jumps out of your hands unexpectedly) then the system will still respond to bring things back into line with the unchanged goal. Many of our muscles control posture, and movement is caused by making adjustments to these already dynamic, homeostatic, feedback loops. Since I want my creatures to look and behave realistically, I think I should try to incorporate this dynamism into their own musculature, where possible, as opposed to simply moving joints to a given angle.
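
(Again in C#, here’s the bare bones of what I mean by a servo – nothing more than a proportional correction towards a set-point. The gain and the names are placeholders; the real thing will be messier and nonlinear.)

```csharp
// A minimal servo: all it knows is its desired state and its actual state,
// and it pushes its output in whichever direction shrinks the difference.
// Gain and names are placeholders, not design decisions.
public class MuscleServo
{
    public float Desired;       // set from above (a reflex, or a higher-level servo)
    public float Actual;        // fed back from the simulated muscle/joint each tick
    public float Gain = 4f;     // how hard it corrects

    // Called every physics tick; returns the drive to apply to the muscle.
    public float Update(float deltaTime)
    {
        float error = Desired - Actual;      // the differential to be minimized
        return error * Gain * deltaTime;     // proportional correction only, in this sketch
    }
}
```

The nice thing is that the bidirectionality falls out for free: if the cat jumps out of your hands, Actual changes and the same loop pulls things back into line; if you change your mind, Desired changes and the same loop does the work.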

But this notion of servoing extends further into the brain, as I tried to explain in my Lucy book. Just about ALL behavior can be thought of as servo action – trying to minimize the differential between a desired state and a present state. “I’m hungry, therefore I’ll phone out for pizza, which will bring my hunger back down to its desired state of zero” is just the topmost level in a consequent flurry of feedback, as phoning out for pizza itself demands controlled arm movements to bring the phone to a desired position, or lift one’s body off the couch, or move a tip towards the delivery man. It’s not only motor actions that can be viewed in this light, either. Where the motor system tries to minimize the difference between an intended state and the present state by causing actions in the world, the sensory system tries to minimize the difference between the present state and the anticipated state, by causing actions in the brain. The brain seems to run a simulation of reality that enables it to predict future states (in a fuzzy and fluid way), and this simulation needs to be kept in train with reality at several contextual levels. It, too, is reminiscent of a battery of linked servomotors, and there’s that bidirectionality again. With my Lucy project I kept seeing parallels here, and I’d like to incorporate some of these ideas into my new creatures.
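
(Purely to illustrate the idea of ganged servos – and emphatically not the final architecture – here’s the MuscleServo sketch from above chained together, so that each level’s output merely becomes the set-point of the level below. The top of the chain is a hard-wired desire like zero hunger; the bottom ends in muscle drive.)

```csharp
// Sketch only: a chain of servos, each one closing its own gap by nudging
// the set-point of the level beneath it. No level knows what the others'
// signals "mean"; the Actual values are fed back from the body and world
// elsewhere, outside this sketch.
public class ServoChain
{
    private readonly MuscleServo[] levels;

    public ServoChain(int depth)
    {
        levels = new MuscleServo[depth];
        for (int i = 0; i < depth; i++) levels[i] = new MuscleServo();
    }

    // topLevelDesire: e.g. zero hunger. Returns the correction emerging at the bottom.
    public float Update(float topLevelDesire, float deltaTime)
    {
        float desire = topLevelDesire;
        float correction = 0f;
        for (int i = 0; i < levels.Length; i++)
        {
            levels[i].Desired = desire;                 // my set-point comes from above
            correction = levels[i].Update(deltaTime);   // how far out of line am I?
            desire = levels[i].Actual + correction;     // ...and that becomes the set-point below
        }
        return correction;                              // the bottom of the chain is muscle drive
    }
}
```

Phoning out for pizza, lifting yourself off the couch and steering your arm towards the phone are all just levels in a chain like this, each one servoing on the one below.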

This brings up the subject of thinking. When I created my Norns I used a stimulus-response approach: they sensed a change in their environment and reacted to it. The vast bulk of connectionist AI takes this approach, but it’s not really very satisfying as a description of animal behavior beyond the sea-slug level. Brains are there to PREDICT THE FUTURE. It takes too long for a heavy animal with long nerve pathways to respond to what’s just happened (“Ooh, maybe I shouldn’t have walked off this cliff”), so we seem to run a simulation of what’s likely to happen next (where “next” implies several timescales at different levels of abstraction). At primitive levels this seems pretty hard-wired and inflexible, but at more abstract levels we seem to predict further into the future when we have the luxury, and make earlier but riskier decisions when time is of the essence, so that means the system is capable of iterating. This is interesting and challenging.
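
(Here’s the flavour of that iteration, sketched in C#. The forward model itself – the bit that actually has to learn to predict – is stubbed out as a function, and the time budget is only there to show how “predict further when you have the luxury, decide earlier when you don’t” might be expressed.)

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Sketch of "run the simulation forwards as far as time allows".
// The state representation (a float array) and the ForwardModel stub
// are placeholders for whatever the neural machinery turns out to be.
public class Imagination
{
    // Given where we are and what we plan to do, guess what comes next.
    public Func<float[], float[], float[]> ForwardModel;

    public List<float[]> Imagine(float[] state, float[] plannedAction,
                                 int maxSteps, double timeBudgetMs)
    {
        var trajectory = new List<float[]>();
        var clock = Stopwatch.StartNew();

        for (int step = 0; step < maxSteps; step++)
        {
            // Time pressure: stop imagining early and act on a riskier, shorter look-ahead.
            if (clock.Elapsed.TotalMilliseconds > timeBudgetMs) break;

            state = ForwardModel(state, plannedAction);   // one step of the inner simulation
            trajectory.Add(state);
        }
        return trajectory;   // the imagined future, as far ahead as we could afford
    }
}
```

When the budget runs out the creature has to act on whatever the trajectory says so far, which is exactly the earlier-but-riskier decision-making I’m after.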

Thinking often (if not always) implies running a simulation of the world forwards in time to see what will happen if… When we make plans we’re extrapolating from some known future towards a more distant and uncertain one in pursuit of a goal. When we’re being inventive we’re simulating potential futures, sometimes involving analogies rather than literal facts, to see what will happen. When we reflect on our past, we run a simulation of what happened, and how it might have been different if we’d made other choices. We have an internal narrative that tracks our present context and tries to stay a little ahead of the game. In the absence of demands, this narrative can flow unhindered and we daydream or become creative. As far as I can see, this ability to construct a narrative and to let it freewheel in the absence of sensory input is a crucial element of consciousness. Without the ability to think, we are not conscious. Whether this ability is enough to constitute conscious awareness all by itself is a sticky problem that I may come back to, but I’d like my new creatures actively to think, not just react.

And talking about analogies brings up categorization and generalization. We classify our world, and we do it in quite sophisticated ways. As babies we start out with very few categories – perhaps things to cry about and things to grab/suck. And then we learn to divide this space up into finer and finer, more and more conditional categories, each of which provokes finer and finer responses. That metaphor of “dividing up” may be very apposite, because spatial maps of categories would be one way to permit generalization. If we cluster our neural representation of patterns, such that similar patterns lie close to each other, then once we know how to react to (or what to make of) one of those patterns, we can make a statistically reasonable hunch about how to react to a novel but similar pattern, simply by stimulating its neighbors. There are hints that such a process occurs in the brain at several levels, and generalization and the ability to predict future consequences are hallmarks of intelligence.
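
(One last sketch, of what “stimulating the neighbors” might amount to statistically: a handful of prototype patterns, nudged towards experience on-line and unsupervised – which matters for the wish-list further down – so that similar inputs end up sharing a neighborhood. The learning rate and the distance measure are placeholders; this is the statistics of the idea, not a brain mechanism.)

```csharp
using System;
using System.Linq;

// Sketch of "divide the space up and borrow responses from the neighbors".
// Prototypes are dragged towards each experience as it happens (no training
// phase); a novel pattern is handled by whichever prototype it lies nearest.
public class PatternMap
{
    private readonly float[][] prototypes;
    private const float LearningRate = 0.05f;   // placeholder value

    public PatternMap(int count, int dims, Random rng)
    {
        prototypes = new float[count][];
        for (int i = 0; i < count; i++)
            prototypes[i] = Enumerable.Range(0, dims)
                                      .Select(_ => (float)rng.NextDouble())
                                      .ToArray();
    }

    // Unsupervised and real-time: every experience drags its nearest prototype
    // a little towards itself, so similar patterns come to share a category.
    public int Experience(float[] pattern)
    {
        int winner = Nearest(pattern);
        for (int d = 0; d < pattern.Length; d++)
            prototypes[winner][d] += LearningRate * (pattern[d] - prototypes[winner][d]);
        return winner;   // the category this experience was filed under
    }

    public int Nearest(float[] pattern)
    {
        int best = 0;
        float bestDist = float.MaxValue;
        for (int i = 0; i < prototypes.Length; i++)
        {
            float dist = 0f;
            for (int d = 0; d < pattern.Length; d++)
            {
                float diff = pattern[d] - prototypes[i][d];
                dist += diff * diff;
            }
            if (dist < bestDist) { bestDist = dist; best = i; }
        }
        return best;
    }
}
```

Whatever response is attached to that neighborhood then becomes a statistically reasonable first guess for a pattern the creature has never seen before – which is about all generalization needs to be at this level.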

So there we go. It’s a start. I want to build a creature that can think, by forming a simulation of the world in its head, which it can iterate as far as the current situation permits, and disengage from reality when nothing urgent is going on. I’d like this predictive power to emerge from shorter chains of association, which themselves are mapped upon self-organized categories. I’d like this system to be fuzzy, so that it can generalize from similar experiences and perhaps even form analogies and metaphors that allow it to be inventive, and so that it can see into the future in a statistical way – the most likely future state being the most active, but less likely scenarios being represented too, so that contingencies can be catered for and the Frame Problem goes away (see my discussion of this in the comments section of an article by Peter Hankins). And I’d like to incorporate the notion of multi-level servomechanisms into this, such that the ultimate goals of the creature are fixed (zero hunger, zero fear, perfect temperature, etc.) and the brain is constantly responding homeostatically (and yet predictively and ballistically) in order to reduce the difference between the present state and this desired state (through sequences of actions and other adjustments that are themselves servoing).

Oh, and then there’s a bunch of questions about perception. In my Lucy project I was very interested in, but failed miserably to conquer, the question of sensory invariance (e.g. the ability to recognize a banana from any angle, distance and position, or at least a wide variety of them). Invariance may be bound up with categorization. This is a big but important challenge. However, I may not have to worry about it, because I doubt my creatures are going to see or feel or hear in the natural sense. The available computer power will almost certainly preclude this and I’ll have to cheat with perception, just to make it feasible at all. That’s an issue for another day – how to make virtual sensory information work in a way that is computationally feasible but doesn’t severely limit or artificially aid the creatures.

Oh yes, and it’s got to learn. All this structure has to self-organize in response to experience. The learning must be unsupervised (nothing can tell it what the “right answer” was, for it to compare its progress) and realtime (no separate training sessions, just non-stop experience of and interaction with the world).

Oh man, and I’d like for there to be the ability for simple culture and cooperation to emerge, which implies language and thus the transfer of thoughts, experience and intentions from one creature to another. And what about learning by example? Empathy and theory of mind? The ability to manipulate the environment by building things? OK, STOP! That’s enough to be going on with!

A shopping list is easy. Figuring out how to actually do it is going to be a little trickier. Figuring out how to do it in realtime, when the virtual world contains dozens of creatures and the graphics engine is taking up most of the CPU cycles, is not all that much of a picnic either. But heck, computers are a thousand times faster than they were when I invented the Norns. There’s hope!



44 Responses to Brainstorm #1

  1. bill topp says:

    “I want to build a creature that can think, by forming a simulation of the world in its head, which it can iterate as far as the current situation permits, and disengage from reality when nothing urgent is going on.”

    if one defines the world simulation as 64 squares with 32 citizens and you want your game to iterate far ahead then you’re playing chess. the citizens move like robots, exceptionally limited options, and have no personalities. it took deep blue to do this. looks to me like if you’re going to have more complex behaviour you’ll need far fewer citizens. a powerful pc isn’t even a noticeable fraction of deep blue.

    • stevegrand says:

      Hah! And my chessboard is ten million squares, with thousands of pieces and all the players move at the same time. Methinks a chess algorithm isn’t going to cut it. But that’s no surprise. I think there are fundamental differences. For one thing life isn’t a zero-sum game, for another the world isn’t brittle – similar “plays” tend to result in similar consequences. But the comparison is interesting and I’m going to have to think more about that!

  2. Terren says:

    Haha, awesome. Great write up, and hugely looking forward to your efforts and writings.

    I think the thing I will be most interested in, at this beginning stage, is where you will draw the line between design and emergence. It sounds like you want to go pretty low-level if you’re talking about an artificial bio-chemistry. But you’re also talking about brain architecture which would be several levels higher than that. And even higher than that when you talk about specifying goals like “zero hunger, zero fear, perfect temperature”. I know you’re about as far as one can get from the GOFAI notion of logically specifying goal systems and deriving behavior from that. You’re talking about homeostasis, in which behavior would emerge from homeostatic dynamics, but still to specify a goal at all implies an ontology that is specified, rather than emergent. And that means you have to deal with the grounding problem. I.e. looking at the symbols in which those goals (e.g. no hunger) are specified, how are they grounded, if they do not emerge from experience?

  3. stevegrand says:

    > to specify a goal at all implies an ontology that is specified, rather than emergent

    No, I don’t think so. Hunger itself is a grounded primitive – the brain gets an input value from the biochemistry, and the target value is hard-wired at zero. The difference between the two has to cause behavior that minimizes this difference. WHAT behavior will do this is something that has to be learned, and the overall result is emergent. Nothing in the system has to know what hunger is. All that’s necessary is that it can tell when a value is moving towards a preset goal (in the case of hunger, evolution would presumably have wired up the system such that whatever the brain receives as input is merely the difference from that goal, so when the signal goes away, the servo is satisfied). If the brain is a network of ganged servos (nonlinear servos, that is) then all (!) that each servo has to do is attempt various outputs to see if they result in a reduced differential. No servo needs to know what the others are doing or what their output or input represents – they just have to figure out how to minimize it. This leads to some big issues with how they do that, but as far as your question is concerned, all of this is emergent – the servos act autonomously to minimize differential; some of those servos have hard-wired inputs (like hunger level) while for the majority their input is the output of another servo or group of servos; some connect to the outside world. Each servo has the hard-wired goal of reducing its differential, and there will be a sequence of events that minimizes the differentials overall. If you have my Lucy book, read the bit about my model aircraft and how the servos cooperate (in this case linearly) to satisfy a fixed goal to stay in the air. For the more general case of this project, there are no symbols in the system; the hierarchy is emergent; each element acts autonomously; the grounding comes from the sensory and motor circuitry and thus from the world itself. Any high-level concepts such as “pizza” and “phone out” emerge from the experience of the system as a whole. Am I making sense?

  4. Carl says:

    When you are designing the brain, can you build in a ‘copy’ mechanism for emotional responses – necessary for goal-setting? Human babies ingest large amounts of sensory data and build associations based on internal cues (hunger, pain, sleepiness), while mimicking higher concepts like fear and happiness from parroting the emotional responses from others. Absent these models, infants do not appear to develop appropriate behavioral responses (as observed with neglected Eastern Bloc orphan infants).

    • stevegrand says:

      Hi Carl, that’s a very good question. I hope so but it’s a real challenge. I think it may relate to sensory invariance, in an interesting way. Invariance (recognizing a banana from any angle) can be characterized as a flexible coordinate transform from eye-centered coordinates into banana-centered coordinates (from inside the banana it looks the same from all directions relative to the observer). [I realize this probably doesn’t make much sense – you’d have to read my Lucy book, and even then I didn’t explain it very well]. Anyway, the ability to copy perhaps requires a coordinate transform from the other person’s frame into one’s own (egocentric coordinates). So if we see someone make a particular movement, we transform it until it’s as if we’ve made the movement ourselves. There’s evidence that this may be what actually happens (my interpretation of “mirror neurons”). But how the heck I do this in my creatures is anybody’s guess! I think maybe I should do a blog post on this later. (Incidentally, babies seem to be able to do some of this in a hard-wired way, so there may be multiple mechanisms). I’ll add it to my feature list!!!

  5. Marc A. Murison says:

    I wouldn’t worry much about not inventing *everything*. The 3D front end is just a piece of (useful, one hopes) machinery that is at best ancillary to your goals. Stay on target!

    • stevegrand says:

      Thanks Marc, good advice. For the game I was writing before, there were reasons why I did all the 3D code myself, but this time that’s not true so what you say makes sense. I probably have enough things to worry about already…

  6. Ryan Olsen says:

    You mentioned wasting time on Sim-Biosis, looks like it was a biology sim, why do you say “wasted”?

    To what extent do you plan to evolve vs design in this current project?

    • stevegrand says:

      Hi Ryan, Sim-biosis was a simulation of cell-like organs – a kind of LEGO set for building creatures. It’s an idea I’ve wanted to implement since 1979. It was coming along great, on the whole. In fact I was almost at Alpha. But I kind of lost faith in it. Building biology by hand is a pretty complex thing, even though I tried to make my LEGO bricks as elegant and simple as I could, and I felt it was too specialized for a mass-market product. The work involved in bringing it to market was a major risk against the number of likely sales. Plus there were things going on in my personal life that left it with a bit of a nasty taste, plus I went off to do some robotics work on another project. Eventually I just didn’t feel committed enough to finish it. Maybe another day!

      As for evolution I’ll take the same stance as I did in Creatures: their internal and external structure will be defined by genes (only this time more sophisticated genes, with more power over the phenotype) but the initial genotypes will be hand-assembled. Evolution will be entirely possible, and in more open-ended ways than with Creatures, and VARIATION will be inevitable, as it was with Creatures. But evolution is VERY slow in a simulation where the creatures live moderately long lives and the “fitness test” involves lengthy and complex interactions with the world, including learning. I don’t think evolution can be the focus or the primary mechanism in such a system – nobody is going to wait a century for something workable to arise. But if I do the initial “intelligent design” work myself (doubtless causing a debate from creationists!) then evolution can take it from there, especially with the help of artificial selection by the users, which turned out to be a powerful force in Creatures. My main interest is AI and the genetics is there because genes are a fantastic programming language for defining brains and computational chemistry. Hope that answers your question.

  7. Daniel Mewes says:

    Hi Steve,
    so great to hear you are working on the game thing again.
    These are extremely ambitious goals you are setting yourself here. I am quite sure that there are not many people in the world who would be able to accomplish those in the limited time scale that is available for bringing a game product to the market, but you certainly are one of them.

    Hmm, it seems like you are listing being fear-less as a primary goal. I personally feel like fear is really a high-level thing. Well, maybe not exactly, since there may also be a primary kind of fear, e.g. if you see something either very huge or very fast moving in your direction. But most of the fear is about fear of expected loss of another goal, isn’t it? At some time I had put some thought into how to generate fear (for an artificial creature, not in the real world 😉 ) and I found out that it was not trivial at all. The question in general also is what consequence a drive like fear or pain should have at all. Well, in Creatures it makes a Norn *not* do the thing it just did. You certainly know as well as me that this does not exactly work, does it (well it did for Creatures, but – I assume – not for a sophisticated AI)? So what should a drive do to the neural machinery? Primarily, it should do two things, as far as I have figured out so far:
    1. Make the thing try to *change* its situation in any way, id est just do anything and do it fast! (in some biological animals there also is the reaction of stopping doing anything in order to pretend to be dead or curling up or something, but that’s probably just a feasible solution for simple life forms).
    2. Remember what you are doing! Emotions and drives should give a big boost to memory (in NeuroLogics I have incorporated methods of doing this efficiently and in a context-preserving way mostly for this reason)! The next time you are getting into this position you might not get the chance to first try out different alternatives.

    I have currently started writing about my NeuroMind technology project again (see http://www.neuromind-technology.com/?content=news), but I am still at a very basic level trying to get something useful out of the neural networks I am simulating (and maybe the way those networks are working and/or their dynamics are rubbish, it’s just that I somehow believe that they are doing things right currently). There is always so much else for me to do, that I don’t think I’ll be able to get anything useful out of that project for the next ten years or so (but maybe earlier, or never).
    Steve, I also mention your name on one of the pages: http://www.neuromind-technology.com/?content=concepts
    If you feel uncomfortable about being mentioned there – like you are saying “What crap is he writing there? I don’t want to get associated with that stuff!” or something – please let me know so I can remove that reference or make it more clear that you are not affiliated with those concepts in any direct way.

    Good luck for your game! It is so great to see you writing about artificial life again. I also think that it is about time for AI to make a step forward. Good that you are there to make that real 🙂

    Best wishes,
    Daniel

    • stevegrand says:

      Thanks for the encouragement, Daniel! I’m glad Neuromind is still active – I look forward to seeing the next animation. I think the mechanism I’ll have to use for this project will end up being much more abstract than yours – I can’t afford that many synapses! Thanks for the link – of course I don’t mind. Good luck with it all.

      As for fear, that’s an interesting point. Emergent emotions and high-level emotional responses are interesting. It’s something I hope will begin to make more sense as I develop this project, and you’re right that I need to keep it in mind. It also raises the question of how much genetic influence there is in the brain – how many special cases are wired into its structure that tend it towards certain complex reactions. Fear is pretty primal, I think: an awful lot of things happen as a result of adrenaline and other hormones, and many things can cause an adrenal response via the hypothalamus. Some of these are undoubtedly learned, as in shyness and stage fright; some are pathological but clearly genetically influenced, as in phobias; some are quite simple in cause but complex in effect. I agree that fear should heighten learning (including to the extent that it can over-learn things and lead to pathologies like PTSD). In Creatures the learning rate was controlled by the rate of change of drives, so fear did increase retention, but it was just one of several drives and had no special status. This time I’ll be able to produce a more complex endocrine response, which I hope will be able to make multiple changes in the brain. But until I have a basic neural architecture I can only guess what those will be. I’ll add it to my list of things to think about! Thanks.

  8. torea says:

    It is quite an ambitious project. I hope to see many interesting things coming out of it even if not all goals are met!

    As Marc A. Murison said, the 3D engine is not that important a piece. You could then go for some open source 3D engine like Ogre (http://www.ogre3d.org/).
    On the other hand you may need to be careful about the physics engine. If you go down to the level of muscle actuation and realistic interaction with the environment, you may require very fine handling of collisions, deformable bodies and deformable environments.
    That’s probably what will take most of the CPU (preferably GPU) and a bit of memory.

    On another note, I tend to think that perception is an important part of a cognitive system, especially because it is imperfect.
    When we perceive only small parts of an object or an event, we have to deduce what the parts we cannot see might be, according to our current knowledge, and then we define some actions to correct our assumptions and complete our perception.
    In the first stages of perception, there is an ambiguity about the nature of the object that can create links with different objects because they share a specific color, shape or texture.
    This could be an important aspect of learning.

    Are there some works out there in the scientific world you see as good starting basis for your project?
    Parts of your project seem to be related to work in reinforcement learning based on recurrent neural networks with Bayesian statistics.

    • stevegrand says:

      Thanks torea. Yes, “ambitious” is my middle name! Which is why I’m always failing at things 😉 I always try to aim high and have a number of fall-back positions. People may be interested to know that when I started writing Creatures I decided to develop a neural network for their brains primarily to add some psychologically plausible randomness to their behavior. I didn’t really expect to make creatures that could learn and control their behavior entirely through neural interactions. But I aimed high and achieved at least half of what I hoped, knowing that if I failed, the worst that would happen was what I actually intended – a bit of randomness and variation.

      I looked at OGRE and at least a dozen other 3D engines but settled firmly on Unity. OGRE is still a bit of a mess – a typical open source project. I want to use a managed language (I don’t want to go back to C++ again – it’s like handwriting instead of typing – and a modern language makes the biology SO much easier and more bug-free), and OGRE has the MOGRE variant, which supports C#, but, last I saw, it was barely being developed any more. I can’t afford to faff around while people finish the 3D engine in their spare time – I need a proper commercial product with a strong development team and a good likelihood of staying in business. Hopefully I’ve made a good choice.

      Unity has a full implementation of PhysX, well-integrated into both the API and the visual editor. I’ve yet to experiment to see what I can achieve in that direction. We’ll see, I guess!

      I agree about perception. There are severe practical constraints for this game and it’s a big question how best to tackle it. I’ll blog about it.

      > Are there some works out there in the scientific world you see as good starting basis for your project?

      Nope. But it’s just not the way I work, anyway. I’m well-connected with the academic field but I prefer to think things through from first principles, rather than try to adapt somebody else’s ideas. There are issues with reinforcement learning. I certainly need probabilistic qualities but I’d prefer these to emerge from the biology, rather than come at them from an abstract mathematical direction like Bayesian networks. I’m doing this project primarily because understanding intelligence and the brain are what drive me, not because I want to make a game. I need the money, but I’m doing this out of love (or else I’d get a proper job!). So I’m not really motivated to use other people’s work – I’m here to do my own independent work. It’ll kill me, I’ve no doubt, but it’s just the way I am! 😉

  9. Jason Holm says:

    I worry that you might be trying to do too much at once, and that scattered list of goals will throw you off track. I think that bringing them all together will pay off in the end, and I think that PLANNING for the eventual opportunities will keep the code open and easy to add to, but…

    Well, for example — the Norns are supposed to be somewhat intelligent creatures, right? Well, sometimes I had Norns who could learn English, but failed to ever learn how to feed themselves. I had Norns capable of taking care of themselves and producing many offspring, but would abandon the offspring in some place devoid of food where they had no chance. There were all these great systems, but they didn’t seem to build on each other.

    It’s like evolution — any time you find a new species in the fossil record that can be identified fairly well (especially anything that wasn’t a dead-end), you are able to predict how it behaved based on what came before it (“since it evolved from these, it probably still had nearly all their same behaviors”) and what would come next (“and since it was on the road to these, it probably was just beginning to develop a primitive XYZ”).

    AI and Alife folks seem to cluster around human intelligence or cellular biology, and you are one of the few that seem to get how the two are connected, but I think it’s a monumental task.

    My suggestion would be to take many steps down from Lucy and the Norns, but not all the way to Sim-biosis either. Pick some kind of lower critter (fictional is fine) and make THAT work – a rodent, a lizard, an amphibian. Something a step UP from Framsticks and Karl Sims, something more functional than Spore.

    My first thought would be to set the world right after the K-T Extinction (or an alien equivalent). Small, higher functioning critters (birds, mammals, etc), plenty of niches now open, a landscape cleared away and starting anew.

    My vote? Simulate a Megazostrodon or something equivalent. Get that running to where you’re happy with it, and you’ve got the mammalian precursors to go prairie dog, Norn, Lucy or Human, depending on how far you want to go with it.

    • stevegrand says:

      Hey Jason, you’re not the first to suggest a palaeontological focus, although for different reasons. I don’t think I have a scattered list of goals – they’re just aspects of the same thing. It’s a bit like saying I want to do a geological simulation and it must support anticlines, synclines, monoclines, faulting, bedding, etc. That’s a long list, but all I have to do is simulate rock, and all those things will come to pass. The list of things in my first post are just aspects that I think describe a single, coherent mechanism. What that mechanism actually IS, is the task that lies before me. But these are not separate modules or features, just symptoms of the whole.

      Norns were a bit brittle, as you say, and I take your point on this. This time the brain will be more hierarchical, as if it had evolved. But I’m just not interested in simulating very simple creatures and I don’t think there’s a market for it – people want things they can relate to. Amphibians are too dull and lizards aren’t very interesting either. Rodents, on the other hand, are incredibly intelligent creatures – vastly more intelligent than current behavioral psychology likes to pretend. If I can simulate a rodent I’ll be very happy! My research interest is in the mammalian brain – not humans but not reptiles either. There’s not enough work being done in this middle ground and I’d like to focus on this. Norns were basically reptiles with cuddly looks and a bit of language artificially bolted on. This time I hope to make a creature with elementary cognition – better than a norn but way below human. Taking a geological period might be an interesting gameplay decision – thanks to Carl for first suggesting that – but dinosaurs, birds and early mammals were already pretty smart creatures.

      • Jason Holm says:

        I think you’ve got it there – the end-release product WOULD be more like Creatures (but better), and to make it work the hierarchical brain evolution is the key. Scattered was probably the wrong phrase… disordered might be closer, trying to do all the parts at once. Like the geological simulator example, I’d certainly write code leaving the door open to erosion, but it seems safer to start with the assumption that our simulated world has no air, water, or molten core. Once that system is working, then add in the concept of the core. Then add air, then water, and so on, until it does become something ready for release.

        I think you should start by trying to program a reptilian brain to the point where you are successful, THEN add the limbic system, and once all this is working, THEN add the neocortex, and THEN add social structures, etc. By scattered, I guess I meant it sounds like you want to try to do all of them at the same time, rather than in steps.

        Yes, amphibians are boring, but I guess if it were me, I’d make sure I have a close-to-100% functional reptilian brain simulated creature, with an environment and all that, BEFORE I even wrote a single line of limbic code.

        Had the Norns wielded a fool-proof reptilian brain, a half-formed limbic brain, and a thin layer of neocortex, they would have been much more believable. As it stands now, most video game NPCs are written in reverse, and players can quickly break the system once they learn how shallow the intelligence is. A komodo dragon may not be able to open a door or plan an ambush, but the King’s elite guard ought to be smart enough to DODGE ARROWS from a non-moving target.

        I think all your ideas are great; I’m just hoping we see Alpha 1.0: Flatworm/Fish, 2.0: Fish/Lizard, 3.0: Lizard/Rat, 4.0: Sapient-Primate-thing; rather than “here’s a creature that can identify 50 types of fruit and knows which ones it likes best, and what season they are all best in, but is bolted to the ground and can’t go after them, nor will it eat any of them even when starving to death.”

        I’d like to see the code evolve up the hierarchy along with the creatures, is what I’m saying.

      • stevegrand says:

        Yes, I see your point. How about a compromise? One thing that concerns me about such an approach is that it assumes the higher architecture will work with the lower. In real evolution this happened, but as a designer I’d like to be sure that the higher levels are thought through far enough to be sure the lower levels will support them. This supervention is really important and I’ll do my best to make it so that the lower levels of the more intelligent creatures are complete working brains in their own right, over which the higher levels then supervene. I’ll add that to my wishlist. And I fully intend to create multiple species this time, so this would mean I could make the simpler creatures using the lower parts of the system and add the higher parts for the more complex creatures. But I don’t want to write the low-level systems and THEN think about the higher levels, because a) I think I’d be in danger of digging myself into a hole, and b) the overall environment, gameplay and scenario needs to be focused around what the more advanced creatures can do.

        Oh, and I apologize for any inadequacies in Creatures – it’s getting towards 20 years since I started work on that and my thoughts have moved on a bit since then! 😉

  10. Jason Holm says:

    Ooh, just started reading this one, but I like where it’s going:

    “In order to design a robot controller that will generate animal-like behavior, it is instructive to examine nature’s earliest brain designs. One of the first phylogenic examples of encephalization (the formation of one central brain) is found in the notoplana acticola flatworm. Using roughly 2000 neurons, this creature is able to perform a variety of behaviors aimed at survival: keeping itself upright, walking, avoiding predators, and eating.”

    http://groups.csail.mit.edu/lbr/syntheticbrains/

    • stevegrand says:

      Oh, but I’m not starting THAT far back!!! If you want to simulate flatworms then be my guest. But just think about the EMBRYOLOGICAL problems involved in creating a simulation that can support a Planarian AND a mammal using the same architecture, such that the latter can be derived from the former! Planarians show some encephalization, sure, but they’re a very long way in phenotypical terms from something with a complex brainstem, a highly differentiated thalamus and a cortex, not to mention something with four legs and a head!

  11. BigEd says:

    Glad to see you embarking on a new journey. Loved your books – bought them secondhand, so I owe you a tenner.

  12. Jason Holm says:

    As long as I get a game before I die where I make a 3D object (tree branch, tall grass blades, flint, deer horn), give it physical and chemical properties (density, fracture layers, flexibility, edge angles and their effects on flesh), drop it in the middle of a group of intelligent creatures, and see what they do with it (WITHOUT telling them how to make a spear beforehand), I’m supportive of any game that is a step along the way.

    I want Theodore Sturgeon’s Neoterics, dangit! 🙂

  13. Ben says:

    Hi Steve – I’ll have a much longer response soon, but first, I’d like to take credit for this resumption of posting, since I told you to do so just a few days ago, and second, I’m really intrigued and excited by the direction of this project. I’ve got a number of thoughts on specific points you raised, but I’d like not to be redundant with any of the other comments, and I’ve run out of time today…

  14. Ryan Olsen says:

    On evolution vs design question: I was thinking of how you would arrive at a brain (assuming neural network for at least part of it) to perform these functions. In other words, which approach will you take to arrive at the end product, not how would a consumer use it.

    Also, I’m glad to see you embarking on this project; I don’t find too many active alife/ai sites/projects these days, and most seem to have stopped further development a few years ago. I’ve been working on a 2D project for a few years with similar but less ambitious goals, more along the lines of what Jason is getting at, but really even less ambitious than that. Basically just trying to create 2D creatures that will be able to dynamically adapt to a new environment, along the lines of your modeling/planning, without hard-coding anything but a measure of survival. So far it’s all been evolution and straightforward reaction-type brains, no planning, but that was the next step I was working on when I read your blog the other day.

    • stevegrand says:

      Good luck with the project, Ryan. I think evolution and intelligence are related things and it’s important to consider them together, but there’s SUCH a huge timescale difference. If you’re interested in evolution then you’re not going to be able to expect much in the way of intelligence, and vice-versa. I think a lot of researchers allow themselves to be blinded to this (I call it exponent blindness – the tendency to treat 10^25 as if it were “a little bit bigger than” 10^20, instead of a hundred thousand times bigger). It’s possible in PRINCIPLE to expect sophisticated intelligence to emerge through artificial evolution (assuming we know enough about embryology – the majority of attempts so far have been quite lacking in this regard and have little hope of succeeding). But in practice I’m not willing to wait around for a few million years, at best, for it to happen. So evolution is a fascinating Alife topic, and evolved neural controllers are a great subject for both research and practical applications, but if you want to understand mammal-like intelligence, as I do, then you’re going to have to engineer it by hand. So that’s where my own focus lies. It’s still artificial life, because of the philosophical approach – Alife doesn’t have to be about evolution. There’s room for both.

      Genetics, on the other hand, and the embryology that makes genetics work, are important design tools, I think. Brains self-organize, and learning is all but indistinguishable from development. Genes are what make this happen and they offer some clues as to an appropriate “programming language” for making brains, even if you don’t expect those brains to evolve much by themselves. I think that’s worth a post some time.

      Hope that makes sense and answers your question.

  15. Jason Holm says:

    “I’m going to create life and then let the life-forms tell me what their story is.”

    “if I do the initial ‘intelligent design’ work myself… then evolution can take it from there”

    “the overall environment, gameplay and scenario needs to be focused around what the more advanced creatures can do”

    “My main interest is AI and the genetics is there because genes are a fantastic programming language for defining brains and computational chemistry.”

    I’m curious what you will consider as an acceptable degree of… “failure”. It’s becoming clearer you definitely want an intelligent creature, and rather than an evolutionary background supporting it, you’ll be designing that background and then turning it on to go and live as it best sees fit.

    If we rewind our evolution, each step (both biologically and environmentally) becomes a major factor into how we understand human intelligence – social structures, scavenger diet, emotional faces, etc. How much of your pre-design will mirror human evolution, and how much will be “alien”?

    By failure, I’m thinking of things like dolphins and octopus intelligence – they are higher functioning, but are we even capable of measuring something so divergent from us?

    You mention that your main goal is AI, but is that human-like AI or what? I think the more “story” you write for the creatures, the less dynamic their own “stories” will be. This isn’t necessarily a bad thing, I’m just curious how much familiarity will be assumed, and how much will be left to develop on its own.

    Too much pre-assumption and it just becomes a human brain simulator. Too little and we end up with “Starfish Aliens”, which may or may not be what you’re going for.

    http://tvtropes.org/pmwiki/pmwiki.php/Main/StarfishAliens

    • stevegrand says:

      Whaddaya mean, JUST becomes a human brain simulator??? Are they that common? 😉
      Not human intelligence, certainly. Mammal-like is the closest I can get, but mammals are a pretty broad class. I’m interested in cognition – actively thinking, as opposed to reacting. My main research interest is in mental imagery or mentation – how does the brain (leaving aside WHICH brains I’m referring to) create a dynamic mental model of the world? It is this model that we are conscious WITHIN – we’re not really conscious of external reality, only our model of it. What are the properties of such a model that allow subjective consciousness to emerge? The existence of this model seems to me to be the fundamental clever trick that makes “higher” intelligence possible. I think this system is more highly developed in humans but is present to some degree in dogs, cats, rats, birds (so not just mammals). I don’t see it in any sufficient degree in fish or insects. What the different elements of mentation are and which species have which is a tricky problem, but I think there’s a qualitative difference between the way a cat thinks and the way a fish does.

      “Alien” may be apposite, because the game scenario I’m most actively considering is to create a region of a planet, filled with alien beings for the user to discover, study and try to communicate and interact with. That scenario at least partly satisfies your earlier comments, since it involves creating a wide variety of species and types of intelligence (each derived from a plausible evolutionary tree).

      • Jason Holm says:

        Hehe, I guess what I was getting at was something slightly like the anthropic principle. Since we are going off a single data point for brain development due to a common evolution, there’s always the possibility that a universe teeming with life — even intelligent life — is nothing like us when it comes to brains. Cognition could be a fluke — other intelligent species might NOT form a mental model of the world and yet still be perfectly capable of forming civilizations. Hive minds might be the norm — intelligent beings capable of producing mental models of the world, yet lacking the ability to conceive of themselves as individuals, and still achieving civilization.

        I’m not suggesting you believe every intelligent race in the universe is capable of the same mentation that humans are (though you might be), nor that evolution inherently dictates such mentation to be a requirement of intelligence.

        I’m just wanting to clarify that your perspective for this game is “the dominant race of this game is capable of forming mental models of the world around them due to the genetics I give them. I’ve nudged them along a similar path that humans went down simply because humans will be PLAYING this game, and straying TOO far from that basic design will simply make the game too inaccessible to all but the most die-hard sci-fi nuts.”

        Nothing wrong with that — we simply accept that’s the rules of the simulation and step forward. Rather than experimenting with all the possibilities the universe might hold for intelligence, we pick the one we’re most familiar with, reduce it to a simpler form (general mammalian), and then set the ball rolling, to see if we end up with something similar to prairie dog colonies, chimp clans, pair bonded hunters or tool making humans. What they look like may or may not be alien, but genetically, we shouldn’t expect them to stray too far from what we would find in the basic Earth mammalian brain function, correct?

      • stevegrand says:

        I’d probably risk arguing that intelligence is fairly convergent and little green men from Alpha Centauri probably don’t substantially exceed the gamut provided by Earth’s species. Maybe. At least I’d argue that the existence of a mental model is a prerequisite for civilisation and complex social behavior (where complex means highly conditional – ants are socially complex but not in a very flexible way).

        But that’s all getting beyond what I can afford to think about right now! Let me see if I can come up with a new working brain design first and we’ll get into the philosophy of it later! 🙂

      • Jason Holm says:

        Example — I look at the Norns, and I see a lot of things that might have been chosen for aesthetic reasons, but as a fan of biology give me a lot of preconceived notions:

        Dual-gendered. Presumed chordate with central nervous system and endoskeleton with vertebrae. Obvious cephalization. Tetrapodal limb morphology. Thermal homeostasis (fur). Already they share a near identical evolutionary line with Earth mammals.

        Oviparous. Precocial. No observed mammalian nursing. If the Shee copied Earth lifeforms to make the Norns, it would seem they took something like a Megazostrodon to work from, since the Norns are even pre-monotreme. The fact that a group of newborn Norns could survive in a well stocked room without any adult intervention displays a serious break from Mammalian brain development — Norns can care for young and teach them, but weaning and time-binding don’t seem to be a requirement. Already, it seems that how they perceive the world should be vastly alien from ourselves.

        Forward-facing eyes for binocular 3D vision (most likely color). Opposable thumbs. Oviparous primate-like development tells us they’re tree dwellers, either by direct design or due to parallel evolution.

        Speech. Semi-flat facial expressions. Enlarged heads. These are obviously social creatures, although their behaviors almost seem random. I’d be interested to see how and why an oviparous, precocial species developed such advanced social characteristics. Not that it’s impossible, it just seems atypical. Maybe that’s why they abandoned their kids half the time and beat up on one another right after kissing them. 😉

        Bipedal walking — even from birth. I know in primates this was due to the reduction of the Savannah, I don’t remember what set it off in Sauropsids or kangaroos. Heck, maybe the Norns were bipedal BEFORE becoming arboreal, who knows.

        My point is, it doesn’t really matter whether the Norns evolved or were formed whole hog by the Shee based on their own evolution, something they copied from Earth, or something they made up in their heads. All their characteristics imply a set of behaviors that either affirm or contradict a brain which functions like ours.

        Whatever direction you go with this game, I just want to be able to say “well of course they’re trying to hide their food source — they’re not the dominant male and they know it, and they don’t want to hand it over to their superior” and know their evolution had something to do with why they made that set of choices.

      • Jason Holm says:

        “Let me see if I can come up with a new working brain design first”

        I think I’m just hung up on making a body and seeing what kind of brain sprouts out of it, while I’m guessing your intention is to build a brain, THEN stick it in a body, and let the two find a way to work together. Something like “I keep getting new data coming in from these ports. I guess I should save some of it for later. Oh, I can send data out to these other ports, too. Wow, when I do, sometimes the data coming in changes. Let’s start figuring out what it all means!”

      • stevegrand says:

        I can’t claim any responsibility for how Norns look. I originally made them look like chickens, with ankles as their main leg joint instead of knees and a pretty spunky sort of character. But I was overruled and the Disneyesque look was born. We went through several iterations but none of it had much to do with biology.

  16. Parmeisan says:

    Hi! I just discovered your Grandroids project (and am simply amazed that I have never heard of Creatures, or at least never really looked into it or knew it had anything to do with AI – I’m going to go home and buy it and play it and love it and call it George… but anyway). I do have two comments related specifically to this post, but first – is there any way now that the Kickstarter project is done to get in on this? Would it be unfair to the current contributors to allow newcomers to get onto your forums and/or be a playtester, for the same or even for more money?

    All right, my comments about this brainstorming:

    >> “the ultimate goals of the creature are fixed (zero hunger, zero fear, perfect temperature, etc.)”

    Are they? I don’t know a lot about artificial intelligence theory and I’m a newcomer to your theories, but this doesn’t seem right to me. I had a discussion about it with a friend who’s really into AI and the nature of human “intelligence” one time, and we couldn’t agree on what a person’s “ultimate goal” is. The one you’re postulating is basically comfort, but when a person has achieved comfort, they choose a new goal. (Or was there simply a different, overall goal which necessitated comfort first?) And when two different people have both successfully achieved the comfort goal, they often go in completely different directions. Is there a single human goal that people just interpret differently because of their pasts? Or do genetics (or something else) cause people to have different ultimate goals? Either way, one person (you, perhaps, and most scientists) might ultimately want Knowledge while another person (a politician, a terrorist) wants Control. Maybe these lead from Comfort (I will feel good when I know or control things), or maybe Comfort is a basic instance of both of these (I can’t know or control things until I am comfortable) but I think it bears thinking on.

    >> “The ability to manipulate the environment by building things?”

    I’m curious how complicated you’re going to make the environment. If you are capable of creating a vast and comprehensive environment with lots of hidden factors and so forth, we might be able to witness your Grandroids do things like discover their own impossible-to-guess-equivalents of tools, fire, electricity, gas, solar energy, computers, software, AI, etc… can you imagine?! I am picturing a Minecraft-like sandbox world combined with your “game” and if my brain had a jaw it would be on the floor! And I’m also curious whether they’re going to be able to communicate across people’s versions of the game (maybe once they discover how)?

    Anyway, I just wanted to say, when I first read your Kickstarter description I was thinking, “This isn’t possible. What’s he playing at?” But now I see that you are totally and completely for-real, pioneering in a field I previously thought impossible (maybe I don’t know a lot about AI theory but I know about computer programming and until I read your stuff I could not imagine a way that actual intelligence could be programmed). I am very excited to see where this goes.

    • stevegrand says:

      Haha! Yes, I’m for real. Doesn’t mean I’m not a complete kook, but I guess I’m at least a qualified kook!

      I agree with what you say about higher drives. I admit I said they were fixed in order to emphasize a point: What I meant was that these upper-level goals are fixed by comparison to the goals lower down the chain, which are very fluid. The system is forever fidgeting and adjusting its short-term goals in an attempt to bring reality into line with an ideal desired state. We permanently (well, relatively permanently) want to be well fed, warm, sheltered and safe, but these fixed desires interact with changes to our environment in such a way as to destabilize things lower down in a homeostatic fashion. We don’t ordinarily want to waste energy picking up a phone, but if we’re hungry we might pick one up in order to phone out for pizza. The top-level goals act as (relatively) fixed set-points, and a network of servomechanisms below this have their own set-points changed in order to account for the fact that the high-level desired state is rarely the actual state. In a servo, the whole point is to bring the desired state and the actual state into line with each other, regardless of whether it’s the desire or the actual that has changed. By fixing the top level desires in a changing environment, there will always be a difference between the desire and actual states lower down, and these servos will each act in such a way as to bring their own two states into line by altering the environment. Each servomechanism learns how to alter the desired states of lower servos in order to achieve parity in its own states. If you see what I mean. I tried to explain this a bit better in my Lucy book, but it deserves a much longer explanation some time, because there’s a lot of stuff involved.
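
      To make that chain a bit more concrete, here’s a minimal sketch in C# (toy numbers and names of my own, nothing like the eventual design): one fixed top-level set-point, plus a single lower servo whose desired state gets retuned so that its own error-correcting action ends up closing the higher-level gap.

        using System;

        // One servo: a desired state, an actual state, and the gap between them.
        class Servo
        {
            public double Desired;
            public double Actual;
            public double Error => Desired - Actual;
        }

        class Creature
        {
            // Top-level drive: be fully fed. This set-point never changes.
            Servo fullness = new Servo { Desired = 1.0, Actual = 0.2 };
            // Lower servo: how hard to seek food. Its set-point is retuned from above.
            Servo seeking  = new Servo { Desired = 0.0, Actual = 0.0 };

            public void Step()
            {
                // The top level never acts on the world directly; it shifts the
                // lower servo's desired state in proportion to its own error.
                seeking.Desired = fullness.Error;

                // The lower level closes its own gap by acting on the body/world.
                seeking.Actual += 0.5 * seeking.Error;

                // That action (finding and eating food) feeds back on the top level.
                fullness.Actual = Math.Min(1.0, fullness.Actual + 0.3 * seeking.Actual);
            }
        }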

      Anyway, I agree that once we find ourselves comfortable we tend to destabilize things again. Some of these top-level goals and drives are mutually incompatible. Boredom, for instance, militates against being comfortable, perhaps partly because when we’re comfortable we don’t learn much about how to regain comfort if things change, and partly because we may only be on a comfortable plateau, with a higher mountain of comfort tantalizingly close. And we certainly differ in the ways we try to achieve security – psychopaths, narcissists and people with social anxiety have very different needs, even though they are all trying to achieve much the same state of comfort and satisfaction. So, yes, good point!

      The environment will be fairly complex, eventually at least. But I’m not holding my breath when it comes to the creatures inventing tools and suchlike – after all, chimpanzees are almost as complex and sophisticated as humans and they’re only just starting to get the hang of invention and problem-solving! But we’ll see!

      Not sure how I’ll manage inter-game travel yet. It certainly won’t be a server-based game, and people will certainly be able to swap creatures for breeding, etc., but between those extremes I’m not sure what will be feasible. It’s an ongoing experiment, so the first iteration will have limits that I hope to surpass later.

      I’m not actively broadcasting it, but yes, when I come across people who genuinely wish they’d seen the kickstarter thing in time, I’m telling them that I’m happy to accept donations via PayPal on the same basis as if they’d pledged via kickstarter. So if you really would like to do that (and I’m not trying to call your bluff, here!), PayPal a donation to steve at cyberlife hyphen research dot com and let me know your address and a username for the website, and I’ll set you up an account. Thank you in advance! And thanks for the kind words, too! 🙂

  17. enapos says:

    Hey, today I just started reading your book ‘Creation’!
    Although I’ve only read the first chapter and haven’t got to the technical stuff yet…
    I was wondering what you think about Stephen Thaler’s Creativity Machine.
    It was his implementation that eventually led me to your books.

    I’m a musician and I’m dreaming about independent, intelligent musical entities…

    Still a lot to learn, but already a BIG fan!
    -enapos

    • stevegrand says:

      Thanks! I have to admit I’ve paid Stephen Thaler’s work just about zero attention. I don’t agree with a lot of what he says, but there’s room for all of us. None of us know what we’re talking about – not him, not me, not anyone – we’re all just doing our best to figure out the answer to the most complex of mysteries. I think his claims are pretty over the top, though.

      Music is a great source of insight into what brains are for and what they do – or rather, it demonstrates how many of our theories make no sense. The field could probably use a few more musicians!

  18. PhoenixRebirth says:

    Out of all the things that make us sentient, I believe that we are different from others because we ask the question “Why?” All our science is based on whys. Conflict rushes the research of newer sciences, but even without conflict, we are always finding easier, more efficient ways of doing things. If you truly wish to create a sentient being, you must give it understanding.

    • stevegrand says:

      Yes, I think that’s a good point, although I’ve seen dogs and cats look puzzled when things didn’t happen the way they expected them to, and in a way that’s kind of like asking why. I agree that understanding is key – it’s very different from just knowing. Rats can understand more than most people realize, and chimpanzees certainly can, but we’re probably the most understanding of species (even if we don’t always show it!) You’re right – I’m very interested in what understanding is. Most of the answer is still a mystery to me, but I’m currently working on what it is that allows brains to imagine things, and I think that’s an important part of how we understand and are able to wonder why. Thanks for the comments.

  19. I don’t know if you read comments on 3-year-old posts, but on the off chance:

    I just stumbled onto your work doing research for a “deskpet” robot I’ve been working on for many years and am exceedingly happy that my thoughts so far seem to be roughly similar to yours, since you are obviously an (possibly THE) authority on the subject.

    Not that I think you have a lot of spare reading time, but you might find a useful nugget or two in my ramblings:
    https://docs.google.com/document/pub?id=1EhNtQPCn0M4jGueseuvHR0DMIo4nUFPI3hlmjQwUDSs

    As a roboticist from a microcontroller background, I’ve come at it from the direction of “Here are my inputs and outputs. What sequence of outputs eventually results in a specific set of inputs?”. In order to cheat and “tell” the robot that there’s a banana on the table I’d have to design and install a banana sensor. Therefore I don’t intend to ever recognize bananas, maybe just #EC3F57. But my research seems to have led to the same basic theories.
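
    One very literal way to sketch that question (a toy only; the forward model and state here are made up, not the design in the linked doc): search over short output sequences for one whose predicted result matches a target input.

      using System.Collections.Generic;

      static class Planner
      {
          // Stand-in forward model: predicts the input (sensor) state that
          // follows a sequence of outputs. A real robot would have to learn this.
          static int Predict(int state, IList<int> outputs)
          {
              foreach (int o in outputs) state += o;
              return state;
          }

          // Breadth-first search for an output sequence that yields the target input.
          public static IList<int> Plan(int state, int target, int[] actions, int maxLen)
          {
              var queue = new Queue<List<int>>();
              queue.Enqueue(new List<int>());
              while (queue.Count > 0)
              {
                  var seq = queue.Dequeue();
                  if (Predict(state, seq) == target) return seq;
                  if (seq.Count >= maxLen) continue;
                  foreach (int a in actions)
                      queue.Enqueue(new List<int>(seq) { a });
              }
              return null;   // nothing found within maxLen steps
          }
      }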

    I am currently reading through all of your Brainstorms and have your books on order, but one comment on this post:

    >And what about learning by example?

    If you can implement analogies (X behaves like Y), this comes almost for free (He behaves like Me. He did X, Y happened, so if I do X, Y should happen too).
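
    A toy version of that inference, assuming a store of observed (agent, action, outcome) episodes and an explicitly declared “behaves like me” relation (all names here are hypothetical):

      using System.Collections.Generic;
      using System.Linq;

      record Episode(string Agent, string Action, string Outcome);

      class AnalogyLearner
      {
          readonly List<Episode> observed = new();
          readonly HashSet<string> behavesLikeMe = new();

          public void Observe(string agent, string action, string outcome)
              => observed.Add(new Episode(agent, action, outcome));

          public void DeclareAnalogy(string other) => behavesLikeMe.Add(other);

          // He behaves like Me; He did X and Y happened; so if I do X, expect Y.
          public string PredictMyOutcome(string action) =>
              observed.Where(e => behavesLikeMe.Contains(e.Agent) && e.Action == action)
                      .Select(e => e.Outcome)
                      .FirstOrDefault();
      }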

    • stevegrand says:

      Heh, thanks for the citation!

      That looks like a pragmatic and very sensible architecture to me, given that it has to run on an MCU. And it’s flexible enough to expand as your confidence in it grows (like bringing in the learning aspects of emotion as well as the expressive ones). Good luck with it!

      Analogies interest me a lot. The difficult part, I think, is finding a representation of knowledge for which “X behaves like Y” can emerge in sufficiently potent ways. “Character is like a tree and reputation like its shadow”, “My love is like a star”, “Trust is like a paper”, “People are like sheep” – these analogies are pretty subtle things. People aren’t like sheep because they have four legs and are covered in wool. Finding a representation in which the movements of certain quadrupeds can be seen as having a useful parallel in the political decisions of the human masses has some interesting challenges. And more directly, seeing the image of someone some distance away placing what looks from here like his arm into what looks like a flame has to be perceived as if WE were putting our arm into a flame. We have to relate the egocentric action to the egocentric consequences, merely by watching the allocentric consequences of an allocentric action. Yet the sensation of moving your own arm or even seeing it move bears little to no resemblance to the sensation of seeing someone else’s arm move. As a general idea it’s pretty easy, but as you know well, when you actually try to implement things the devil is in the detail. At the trivial level we could use thought transfer – robot X could broadcast his actions from an egocentric perspective, and robot Y could learn from them as if it had experienced them. But that’s avoiding the real problem, and inside the real problem lie some profound insights about intelligence. I just don’t know what they are… 😉
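
      Here’s that “trivial level” sketched very literally (all names invented; the hard part, turning an allocentric view into an egocentric experience, is exactly what this skips):

        using System;
        using System.Collections.Generic;

        // An experience is already egocentric: "I did this, and this is what followed."
        record Experience(string Action, string Consequence);

        class Robot
        {
            public event Action<Experience> Broadcast;
            readonly List<Experience> memory = new();

            public void Act(string action, string consequence)
            {
                var e = new Experience(action, consequence);
                memory.Add(e);
                Broadcast?.Invoke(e);   // transmit the first-person version directly
            }

            // Robot Y files X's broadcasts away as if it had lived them itself.
            public void LearnByWatching(Robot other)
                => other.Broadcast += e => memory.Add(e);
        }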

      Good luck with the project. Let me know how it progresses. Hope you enjoy my books!
