Introduction to an artificial mind

I don’t want to get technical right now, but I thought I’d write a little introduction to what I’m actually trying to do in my Grandroids project. Or perhaps what I’m not trying to do. For instance, a few people have asked me whether I’ll be using neural networks, and yes, I will be, but very probably not of the kind you’re expecting.

When I wrote Creatures I had to solve some fairly tricky problems that few people had thought much about before. Neural networks have been around for a long time, but they’re generally used in very stylized contexts, to recognize and classify patterns. Trying to create a creature that can interact with the world in real-time and in a natural way is a very different matter. For example, a number of researchers have used what are called randomly recurrent networks to evolve simple creatures that can live in specialized environments, but mine was a rather different problem. I wanted people to care about their norns and have some fun interacting with them. I didn’t expect people to sit around passively watching hundreds of successive generations of norns blundering around the landscape, in the hope that one would finally evolve the ability not to bump into things.

Norns had to learn during their own lifetimes, and they had to do so while they were actively living out their lives, not during a special training session. They also had to learn in a fairly realistic manner in a rich environment. They needed short- and long-term memories for this, and mechanisms to ensure that they didn’t waste neural real-estate on things that later would turn out not to be worth knowing. And they needed instincts to get them started, which was a bit of a problem because this instinct mechanism still had to work, even if the brains of later generations of norns had evolved beyond recognition. All of these were tricky challenges and it required a little ingenuity to make an artificial brain that was up to the task.

So at one level I was reasonably happy with what I’d developed, even though norns are not exactly the brightest sparks on the planet. At least it worked, and I hadn’t spent five years working for nothing. But at another level I was embarrassed and deeply frustrated. Norns learn, they generalize from their past to help them deal with novel situations, and they react intelligently to stimuli. BUT THEY DON’T THINK.

It may not be immediately obvious what the difference is between thinking and reacting, because we’re rarely aware of ourselves when we’re not thinking and yet at the same time we don’t necessarily pay much attention to our thoughts. In fact the idea that animals have thoughts at all (with the notable exception of us, of course, because we all know how special we are) became something of a taboo concept in psychology. Behaviorism started with the fairly defensible observation that we can’t directly study mental states, and so we should focus our attention solely on the inputs and outputs. We should think of the brain as a black box that somehow connects inputs (stimuli) with outputs (actions), and pay no attention to intention, because that was hidden from us. The problem was that this led to a kind of dogma that still exists to some extent today, especially in behavioral psychology. Just because we can’t see animals’ intentions and other mental states, this doesn’t mean they don’t have any, and yet many psychological and neurological models have been designed on this very assumption. Including the vast bulk of neural networks.

But that’s not what it’s like inside my head, and I’m sure you feel the same way about yours. I don’t sit here passively waiting for a stimulus to arrive, and then just react to it automatically, on the basis of a learned reflex. Sometimes I do, but not always by any means. Most of the time I have thoughts going through my mind. I’m watching what’s going on and trying to interpret it in the light of the present context. I’m worrying about things, wondering about things, making plans, exploring possibilities, hoping for things, fearing things, daydreaming, inventing artificial brains…

Thinking is not reacting. A thought is not a learned reflex. But nor is it some kind of algorithm or logical deduction. This is another common misapprehension, both within AI and among the general public. Sometimes, thinking equates to reasoning, but not most of the time. How often do you actually form and test logical propositions in your head? About as often as you perform formal mathematics, probably. And yet artificial intelligence was founded largely on the assumption that thinking is reasoning, and reasoning is the logical application of knowledge. Computers are logical machines, and they were invented by extrapolation from what people (or rather mathematicians, which explains a lot) thought the human mind was like. That’s why we talk about a computer’s memory, instructions, rules, etc. But in truth there is no algorithm for thought.

So a thought is not a simple learned reflex, and it’s not a logical algorithm. But what is it? How do the neurons in the brain actually implement an idea or a hope? What is the physical manifestation of an expectation or a worry? Where does it store dreams? Why do we have dreams? These are some of the questions I’ve been asking myself for the past 15 years or so. And that’s what I want to explore in this project. Not blindly, I should add – it’s not like I’m sitting here today thinking how cool it will be to start coming up with ideas. I already have ideas; quite specific ones. There are gaps yet, but I’m confident enough to stick my neck out and say that I have a fair idea what I’m doing.

How my theories work, and what that means for the design of neural networks that can think, will take some explaining. But for now I just wanted to let you know the key element of this project. My new creatures will certainly be capable of evolving, but evolution is not what makes them intelligent and it’s not the focus of the game. They’ll certainly have neural network brains, but nothing you may have learned about neural networks is likely to help you imagine what they’re going to be like; in fact it may put you at a disadvantage! The central idea I’m exploring is mental imagery in its broadest sense – the ability for a virtual creature to visualize a state of the world that doesn’t actually exist at that moment. I think there are several important reasons why such a mechanism evolved, and this gives us clues about how it might be implemented. Incidentally, consciousness is one of the consequences. I’m not saying my creatures will be conscious in any meaningful way, just that without imagery consciousness is not possible. In fact without imagery a lot of the things that AI has been searching for are not possible.

So, in short, this is a project to implement imagination using virtual neurons. It’s a rather different way of thinking about artificial intelligence, I think, and it’s going to be a struggle to describe it, but from a user perspective I think it makes for creatures that you can genuinely engage with. When they look at you, there will hopefully be someone behind their eyes in a way that wasn’t true for norns.

About stevegrand
I'm an independent AI and artificial life researcher, interested in oodles and oodles of things but especially the brain. And chocolate. I like chocolate too.

86 Responses to Introduction to an artificial mind

  1. Kriss says:

    Why not fake an internal world model?

    You already have a simulated model of the world, because that is the world. Just let all the little brains rent time on this simulation to try out their ideas.

    After all what is the point of building such things in a virtual world if you are not going to grab every advantage you can?

    • stevegrand says:

      Because the *nature* of this internal model is what allows us to think. The real world only exists in real time. By the time the brain gets information about the real world, those events have already happened. The internal model allows us to predict what WILL happen, in time for us to actually do something about it. At a minimum it predicts the likely future about a tenth of a second ahead of now, but when we worry about what we’ll do when we retire, we’re predicting a potential future years ahead of now. Prediction is what intelligence is for.
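
      To make the timing point concrete, here’s a toy sketch in code (nothing like the real mechanism, just the simplest possible linear extrapolation, with every number made up):

          # Sensory data is stale by the time it arrives, so even perceiving
          # "now" means extrapolating -- and acting means extrapolating further.
          SENSORY_DELAY = 0.1  # roughly how stale the brain's input is, in seconds

          def predicted_position(seen_pos, seen_vel, lookahead):
              # The simplest internal model there is: linear extrapolation
              return seen_pos + seen_vel * (SENSORY_DELAY + lookahead)

          # A ball last seen at 5 m moving at 10 m/s is already at ~6 m "now";
          # to intercept it 0.2 s from now, reach for the ~8 m mark.
          print(predicted_position(5.0, 10.0, 0.0))   # 6.0
          print(predicted_position(5.0, 10.0, 0.2))   # ~8.0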

      > After all what is the point of building such things in a virtual world if you are not going to grab every advantage you can?

      Oh, I can’t allow myself to think in that way for a couple of reasons. For one, I’m interested in what intelligence really is, not what we can fake it to be. For another, faking really doesn’t work – we have 50 years of failed AI to demonstrate that. It’s like doing a really good painting of someone, perfect in every detail, and then wondering why it doesn’t speak. The nature of the representation we use is critical.

    • Dranorter says:

      One of the brainstorm posts, I think, provides a good answer. http://machineslikeus.com/news/brainstorm-3-cheating-doesn-t-pay-sometimes

      Basically, trying to imagine the world more naturally ties in with things like prediction and sensory perception, so things look more elegant when it’s all done in-brain. Or that’s the hope.

    • Daniel Mewes says:

      Then what would be the point in the project?
      Sure, it would still be a fun game. But really it’s more about revolutionizing AI (I hope at least). A simulated world is nice because you can better control the environment and the ways of interaction and it’s also a lot cheaper to build (plus you can employ evolutionary concepts). Ultimately AI should be useful in the real world, and you don’t get insights into that by faking stuff in a simulation.

      • Chani says:

        I’m beginning to wonder if the “real” world matters so much, actually. If we can create a being conscious and intelligent enough to have a conversation with, it might be sad that it can’t fully participate in the real world, but it could still log into Second Life and build itself a life there, it could even get a job and earn money. 🙂

      • Erin says:

        > I’m beginning to wonder if the “real” world matters so much, actually.

        I think defining “real” is far more of a question. Is it any less real for alife to “see” the vertices of a 3d model than it is for a person to “see” photons reflected off of atoms? And for that matter, our eyes are only capable of capturing a small range of possible photon wavelengths. A creature that could “see” radio waves, for example, would view almost everything in the world the same way we view glass — transparent or very close to it. Of course a creature that could see only radio waves wouldn’t live very long in our environment, but it might do just fine in a world where lead was the primary building block of everything rather than silicon and carbon. (In our world, it wouldn’t be able to see much of anything and would therefore be running into almost everything, jumping off cliffs, ignoring predators, etc.). The ability to go through almost all common materials makes radio-length photons great for over-the-air radio, but not so good for a vision system.

        Does our inability to perceive radio waves make radio-length photons any less real though? People would think you’re crazy if you said “yes” to that today, but go back 500 years and they’d think you’re crazy as soon as you said the word “photon” — unbelievably tiny “things” that can circle the entire planet in a fraction of a second couldn’t possibly be real! (Well, maybe a little further back. Newton at least suspected something like a photon, though light came to be regarded as a pure wave for most of the nineteenth century, and it wasn’t until Einstein that the photon was really shown to have a particle interpretation.)

        For an even more fun conception of “real”, consider quantum effects. A “sea” of nothing where “things” are appearing and disappearing completely at random, and even the definition of a “thing” starts to lose its meaning as the line gets blurred between mass, energy and the forces that tie it all together. It’s even been suggested that viruses are small enough to perceive these quantum effects! (Some team was planning on shooting a smaller virus through a double-slit experiment at one point but I never found out whether they actually did it or what the results were.)

        So that’s a couple of real situations that don’t appear to be anything like “real” to our senses — only in our imaginations.

        To bring it back to terms of computer geometry, I believe a vertex list could be just as “real” to an entity that evolved in a world of vertex lists as atoms, photons and fundamental forces are to us. It’s simply an input from a sensory mechanism that gets handed over to the brain, and it’s still the brain’s job to figure out what that input means.

        Of course if you’re wanting to simulate actual human vision, then a vertex list won’t do anything for you. However, if the goal is to simulate basic neuron learning, then I imagine the same learning processes would work just as well for a geometry “eye” as a normal bitmap eye. Kind of like how seeing and hearing both use the same kinds of neural cells and processes, but they’re hooked up to different inputs and so they end up wiring themselves differently to account for their different purposes. A geometry eye would in effect be a new sense!

        Of course whether it’s practical is another question. In particular, occlusions could be tricky to do without some form of rendering step (and if you’re doing one of those anyway, you may as well just go with the bitmap approach and be somewhat closer to human vision).

      • stevegrand says:

        I agree. Cyberspace is another universe and what’s artificial and virtual for us may be completely real for its inhabitants. Likewise, they’d be disinclined to believe our world was real.

        There’s no way I can simulate actual human vision – it’s still far too big a mystery, so I have to jump in at a higher level of abstraction. “Tricky” is the biggest understatement I’ve heard in ages! 😉 I’m currently using bounding boxes around objects or components of objects that the creature needs to be able to see, and then using raycasting to determine the distance to that BB in each direction as I scan the creature’s visual field. BBs can then have attributes attached to them. So the creature gets a fairly good and direct 3D understanding of the scene for the purposes of navigation and grasping, even if objects are really big, like walls, plus a more limited map of the visual location of somewhat abstract features of salient objects. I think that’s the best balance I can achieve between making the problem hard enough and realistic enough to require real intelligence and yet easy enough to be achievable and computable in real-time. But I don’t have a brain to attach to all this yet, so I may have to rethink vision later.
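
        In case it helps to picture that, here is a bare-bones 2D sketch of the idea (toy code only: the real thing is 3D, lives inside a physics engine, and every name here is invented):

            import math

            class BBox:
                # An axis-aligned bounding box with attributes attached to it
                def __init__(self, lo, hi, attributes):
                    self.lo, self.hi = lo, hi      # opposite corners, (x, y)
                    self.attributes = attributes   # e.g. {"kind": "wall"}

            def ray_distance(origin, direction, box):
                # Standard slab test: distance along the ray to the box, or None
                tmin, tmax = 0.0, float("inf")
                for axis in range(2):
                    o, d = origin[axis], direction[axis]
                    lo, hi = box.lo[axis], box.hi[axis]
                    if abs(d) < 1e-12:
                        if not (lo <= o <= hi):
                            return None            # parallel to, and outside, the slab
                    else:
                        t1, t2 = (lo - o) / d, (hi - o) / d
                        tmin = max(tmin, min(t1, t2))
                        tmax = min(tmax, max(t1, t2))
                        if tmin > tmax:
                            return None
                return tmin

            def scan_visual_field(eye, heading, fov, n_rays, boxes):
                # One sweep of the visual field: nearest box (if any) per direction
                field = []
                for i in range(n_rays):
                    angle = heading - fov / 2 + fov * i / (n_rays - 1)
                    direction = (math.cos(angle), math.sin(angle))
                    hits = [(ray_distance(eye, direction, b), b) for b in boxes]
                    hits = [(t, b) for t, b in hits if t is not None]
                    field.append(min(hits, key=lambda h: h[0], default=None))
                return field

            wall = BBox((4.0, -5.0), (4.5, 5.0), {"kind": "wall"})
            toy  = BBox((2.0, -0.5), (2.5, 0.5), {"kind": "toy", "graspable": True})
            for hit in scan_visual_field((0.0, 0.0), 0.0, math.pi / 2, 5, [wall, toy]):
                print(hit and (round(hit[0], 2), hit[1].attributes))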

  2. Ken Albin says:

    While reading your description I just realized that if you accomplish what you are setting out to do then these creatures may well have the potential to experience various forms of mental illness! I wonder if there is going to be the A.I. equivalent of a therapist for this. There is nothing worse than a psychotic A.I. creature!

    • stevegrand says:

      I think they might, and maybe that’s a GOOD thing! Maybe it’ll offer some insights. I’m quite looking forward to finding out just how crazy they can be!

    • Chani says:

      actually, there are already a few norn mental illnesses, like OHSS. 🙂 I think most of them show up in Creatures 2.
      sadly, in Creatures 3, whenever I tried to muck with the brains at *all* I just ended up with a severely retarded norn. or crashed the game. 😦 that brain design was so… brittle. no wonder I never really cared for the norns in that game.

      • stevegrand says:

        I’m kind of selfishly glad to hear the brain got more brittle in later versions. Somehow I never managed to communicate the *art* of creature design to anyone. I don’t know what it was that I couldn’t get across, exactly, but every time they wanted to improve the norns for C2 and C3 they seemed to think only in terms of what they wanted to add, not how they could make this desired feature emerge from something more elegant. So it seemed like it was getting a bit clunky to me (I was trapped in a boardroom by this point). As Frank Whittle once said, the art is to “simplicate and add more lightness”, rather than complicate and add more weight.

      • Chani says:

        yeah… c2, despite its crashiness, had potential – the canny and .. nova norns, was it? .. had a better brain design, it was just the stock norns that were useless.
        c3, though, augh… there was some facial-expression bug in the beginning that made them always look happy, and you couldn’t do anything interesting with genetics at all – and there were so many sprites that hardly anyone even made norns that *looked* different. I think the expanded vocabulary added to the problems – it took so long to teach a norn that it really wasn’t worth it.
        Oh, and I’m still pissed off that nobody gave norns the ability to see water or cliffs. What’s the point of a danger that they can never ever learn to avoid? 😛

        still, there were several great things about C3 – mainly the ability to get sensible error messages out of it. C2 had made me think I sucked at programming, but C3 changed my mind 🙂 so I ignored the norns and dived into all the amazing things that could be done with cobs…

  3. Dranorter says:

    How far along were you in your implementation when you decided you needed to do the kickstarter?

    I’ve enjoyed reading your brainstorms (decided to go through them all yesterday). Helped me think about how the mind could’ve evolved in the first place. The stuff about muscle movement in particular; I feel like one of the first properties of minds (as they began to make more complex decisions than basic nervous systems) must have been flexing muscles without hurting the organism, by learning sustainable motions. Really makes me want to write a tiny simulation for myself.

    But generally I distrust thinking in terms of ‘primitive brains’ vs higher ones! All existing brains function on their own, and just took a different evolutionary direction. Ants are really smart, and I feel like most insects must be somewhat self-aware.

    • stevegrand says:

      > How far along were you in your implementation when you decided you needed to do the kickstarter?

      I kind of knew I was going to run out of money and had to try this quite a long time ago, but I didn’t dare ask for money until I had confidence I could really succeed. There were some big problems. But now I’ve got the fundamental architecture sorted out. Some questions about how it all relates to critical functions such as attention and reward mostly make sense now. I solved my sticky problem with self-organizing coordinate transforms on the yin pathway, got the musculature of the creatures and their basic visual sense coded and checked that the physics engine is capable of what I need. There’s a lot of scary stuff to come, but I’m comfortable I can actually do this now, so it was fair to ask people to help. Just as well – I was almost out of money!

      > But generally I distrust thinking in terms of ‘primitive brains’ vs higher ones!

      Yes, it’s a dangerous game, I agree. Insects and mammals took radically different approaches to life and so require very different kinds of brain, but modern insects are our cousins and just as highly evolved as we are!

      I don’t know what self-aware really means, but I suspect that something quite important happened in the mammalian line that may not exist in other lines (birds being an honorable exception), such that what we normally refer to as self-awareness might not be accessible to insects. But I don’t know. I guess we’ll find out one day, when we know what first-person consciousness actually is!

      • derp says:

        Isn’t it wrong to speak in terms of “highly evolved” and “not as evolved”? Evolution isn’t directional, it’s a function of adapting to the environment.

      • stevegrand says:

        Yes, bad choice of words, you’re right! Naughty Stephen. I just meant that insects are not our ancestors, and so an ant is the product of just as long a period of evolution as we are. We’re all about equally adapted (although it could be argued that ants are better adapted than we are). But you’re right – even trilobites were perfectly adapted to their niche too (until the niche suddenly got shut off), 500 million years before humans. All that really happens over time is that the number of niches expands. Dammit, evolution is a linguistic minefield! Just saying that “bucket orchids evolved to exploit insects” is a teleological disaster, but it’s so much harder to say “the plants we think of as bucket orchids are the products of a bloodline that happened to survive more often than close relatives, because they happened to trap insects whose behaviors had themselves evolved in such a way as to appear to us to ‘seek’ nectar, and hence this bloodline of plants became differentiated from and genetically isolated from their cousins.” Or something! It’d be so much easier to say “God made bucket orchids to look pretty”! 🙂

      • Dranorter says:

        >Dammit, evolution is a linguistic minefield!

        That’s really interesting!

        But what’s the difference, between requiring we speak in such roundabout ways concerning evolution, and concerning the mind? How sharp a difference is there between when the insects appear to us to seek nectar, and when they actually do so (ie by having minds)?

      • stevegrand says:

        > But what’s the difference, between requiring we speak in such roundabout ways concerning evolution, and concerning the mind?

        A very good question!

        Personally I think there is something that distinguishes volition from reaction; a cognitive system from a stimulus-response system. But it’s a moot point. We don’t know enough about insects yet, but they do seem to be largely reactive creatures (it’s not like they even need to be very predictive, because they’re so tiny and can react very quickly). Norns and insects have a fair bit in common, I think, except for the fact that insects seem to be able to learn by degree but not by type (e.g. the desert ant, Cataglyphis, can learn unique landmarks for navigation, but it can’t learn to use them for something else, or find a new way to navigate).

        It seems to me that insects and mammals (to pick just two) developed very different evolutionary strategies. Insect brains have evolved a certain kind of modularity that enables them to change very quickly over evolutionary time, whereas mammalian brains are less easy to adapt through mutation but have a much more general-purpose kind of intelligence that can adapt within the creature’s lifetime.

        Personally I don’t think a reactive system deserves to be regarded as having a mind. If you think about your own mind, you’re not being aware of your stimulus-response behavior, you’re aware of your thoughts, which are freed from direct connection with what’s going on in the world outside your head. I sometimes try to imagine what it might be like to be something like a cow – just witnessing the world passively. Living entirely in the moment. It’s pretty much the same as meditation and it feels quite empty. Normal life for me is filled with hopes and worries and possibilities and expectations. I don’t think insects have expectations. I don’t think they contemplate or consider. There’s no evidence for it anyway. Ants collect up their dead in funeral heaps, for example, but they most probably do this by a very simple set of stigmergic stimulus-responses. It’s unlikely that they *know* the other ants are dead, and feel respect or sadness for them. They’re just programmed by evolution to act in a certain way in the presence of something that has the sensations associated with a dead ant.

        So I’d say that there is a particular kind of brain that has a mind and not all brains do. There may be several stages in mindness – several kinds of consciousness – but I think there are certain architectural requirements for the kind of thing we tend to think of as minds.

        But who knows? It’s difficult to ask an ant how it feels about life!

  4. Terren says:

    Hey Steve,

    Thought you might be interested to know that in David Bohm’s “Thought As A System” (a very good read), he explores the idea that “thought” is less accurately seen as a process undertaken by an individual, and better conceived as a process undertaken by a human collective or culture. The kicker is that thought, to Bohm, is for the most part quite reflexive (with the odd creative thought being the exception), because by and large we are not really in control of our thoughts – the system is.

    Of course the point of bringing this up is not to say that Norns can think or anything like that. Imagination is as important as you say it is. I think the point is just to say that mental activity that takes place in imagination can be just as reflexive as blinking. In other words, while imagination is an extremely important part of human consciousness (and certainly many kinds of animal consciousness), it doesn’t, in and of itself, confer intelligence. Creative thought – what Bohm considered to be the rare exception – is still a long ways from being understood… although no doubt you have some ideas about that too.

    Terren

    • stevegrand says:

      A lot of this comes down to definitions, I suspect. I guess Bohm must have a somewhat different definition of thought, because I’m perfectly capable of having thoughts in the absence of others and I’m pretty sure I’d still have them if I’d grown up isolated. But certainly at one level I can’t deny that thought is reflexive. Effect always follows a cause, after all. That’s not really what I mean, though. I mean it’s not a simple stimulus-response mechanism. The next moment in our thoughts is determined by the previous moment, but not necessarily by the previous state of our sensory input.

      Imagination is a problematic word too, and maybe I shouldn’t use it. For most people it seems to imply something creative and abstract, which I’d count as just one form of it. I mean it in the sense of “imagine a tree” or “imagine what might happen if you let go of that glass”. Mental imagery is closer, maybe, but implies only the visual sense. Mentation is another word but it’s too general and refers to any kind of transition between mental states. So I don’t know what else to call it. Any ideas?

      • Ben Turner says:

        Well, if you want something that’s more accurate, there are a few several-thousand word passages in Neal Stephenson’s Anathem that crystallize the idea nicely; but, if you want something more digestible, there are always things like “counterfactual reasoning” or “confabulation”. Although both of those probably connote more creativity than what you want to mean, and have other side-effect meanings that aren’t central to what you’re trying to convey. It’s really a rudimentary world-model that you have the ability to control, and the larger your conceptual vocabulary, the more complicated the model and the operations you can perform on it.

      • Chani says:

        sounds to me like predicting the future. after all, the ability to imagine probably came from the need to solve problems like “can I climb onto that branch without it breaking and dropping me?” and “how do I get across this river without getting soaked?” – or even, “how can I catch prey that runs faster than me?”

        sometimes you need to be able to predict what the world will do before it does it, or what consequences your actions will have. often our memories are enough, but sometimes they need to be rearranged a bit to give the picture we need 🙂

      • stevegrand says:

        > the ability to imagine probably came from the need to solve problems like “can I climb onto that branch without it breaking and dropping me?”

        Yes yes yes!!!

        Well, actually I’d go even further back than that. I think prediction is the whole point, but I think it started with the very simplest kinds of extrapolation that evolved to cover up for signal propagation and processing delays. You can’t even pick something up without imagining the future. You can’t watch a car go by without turning your eyes to where your brain imagines it will be by the time they actually get there. But I think this capacity turned out to be so much more valuable than evolution could ever have “expected”, and your illustration of climbing onto a branch hits right at the point where predictive representations really took off and created mindful action. That’s a really nice example. And I like “sometimes they need to be rearranged a bit”, too – it cuts to the heart of what I’ve been thinking about.

      • Terren says:

        First off, let me say it’s been a few years since I read Thought As A System and someone who just read it would probably be cringing at my characterization of it. That said, anyone who learns language is not isolated; anyone who learns that they have a name and an identity is someone who is plugged into the “system” to some extent. So to that extent your thoughts are part of this much larger organization or system. It’s not that they are controlled by the system, any more than one of your skin cells is controlled by your body; it’s that the best way to appreciate the function of the skin cell is as a part of a collective organism. And as such, a skin cell wouldn’t be a very good skin cell if its behaviors weren’t reflexive with respect to the overall system’s organization, even as the skin cell probably feels like it’s choosing what it wants to do.

        But really, the whole ‘system’ aspect isn’t that important in this context because your creatures won’t be learning language. The reason I brought it up is to show that there is a way of stepping back to see a bigger picture and appreciate the possibility that even human thought is largely reflexive (with respect to a larger organizational context). In short, imagination is necessary for choice, but not sufficient.

        I don’t have a great suggestion to replace “imagination”, but I relate to what you mean by the term “ability to visualize” or perhaps “visualizability” if you want a clunky word. However, one unsatisfying thing about many of these terms that invoke visual metaphors is that *sight* is not necessary for imagination. Mole rats are basically blind but I’m sure they have very sophisticated spatial models nonetheless with the ability to imagine, plan, etc. In that light (so to speak) one term that pops out is “mind projection”. Projection is ok to me because it’s more of a mathematical term than a visual one.

      • stevegrand says:

        Yeah, it sounds like we’re talking at somewhat different levels of abstraction (not for the first time). My use of these words is at a pretty low level compared to most people’s apparently. Like I said in response to one of Chani’s comments, just picking something up requires imagination in my terms – the generation of a goal state – so I’m definitely using terms like thought and imagination at a much lower level than Bohm.

        I wonder if this difference in use of terms has anything to do with the way we think ourselves. Some people think predominantly linguistically, and that’s clearly a high-level socially-generated thing. I’m a visual thinker, so most of my internal activity is in pictures and movies, not words, so maybe that means I apply the same terms to something less symbolic and abstract? I’m clearly just a very lowly thinker… 😉

        “Necessary but not sufficient” – I agree. There’s a slippery slope there, though. Yes, we need more than just imagination, but it doesn’t mean imagination is a module to which some other mechanism needs to be added. I think the neural implementation of imagery or whatever we call it actually gives rise to thought, in and of itself. It’s a fully integrated process. Can’t quite explain what I mean, though.

        My new creatures may learn language. I’m reserving judgment. Nouns, verbs, adjectives and adverbs are just reflections of the process of thought – how to use something to do something in some way to something else. So it may be possible for this neurological grammar to communicate itself between creatures using a verbal grammar. I had the rudiments of this in Creatures, but I didn’t have time to explore it properly and their brains were too reactive.

      • Terren says:

        Yes, I think I have confused the issue by talking about thought and imagination at a much broader level.

        Ultimately, I think the nugget I am after is that “thought” by itself – at whatever level – doesn’t necessarily connote intelligence, even if that seems counter-intuitive from our perch as thinking, intelligent beings. But in terms of a design for an artificial brain, I find myself wondering how one would connect the dots between the ostensible ‘goal system’ of the critter (which would involve crossing the river, for example), and the result of projections from the critter’s current ‘experience’. You may have that all figured out, but for me that is something that has to be made explicit or emergent in the design. I see no reason why one couldn’t end up with a critter that has the ability to make mental projections but does not profitably use them.

      • Dranorter says:

        I would prefer to think of the larger system, with respect to which thought exists, as the *species*, not just the community, because that’s a less human-centric concept. Thought is a special part of the more general process of an organism surviving and propagating itself, by which I basically mean a species propagating itself. The only truth this gains me is that the body and the mind adjust to each other’s capabilities over generations, and, well, the mind had better be able to learn to do what it needs to do very dependably unless there is something like imitation or language which allows non-genetic (“memetic”) inheritance.

        Personally I don’t think the structure of language is all that basic to thought, since we haven’t really decoded dolphin language, and other hominids who learn sign language don’t have a similar sort of grammar. If language structure corresponded closely to brain structure I’m guessing there would be more difference between people who speak different languages, and besides that I think we would have different mental structures for past tense and future tense thought, singular and plural, things like that. Instead what we see is language differentiating lots of things we don’t necessarily care much about while thinking, because we have to give other people context before the same thought will make sense to them.

        Norn language was always fun, but it was more of a direct line to the norn brain, as opposed to learned, structured symbols. If ‘Naven’ language comes to be I certainly wouldn’t mind something like that, if these more complex brains could ‘think together’ by exchanging those types of signals; but it would be nice if it were an evolvable or learnable capability, genetically mutating to connect thoughts and words differently, or learning over time how to do so.

        Well that’s fun to think about! Hope I’ve not written nonsense.

  5. Abram Demski says:

    It’s funny that you put it in the terms you do, because the grant that I’m going to be hired under soon is exactly that: research an AI visual/sensory imagination system capable of integrating with a specific larger cognitive architecture based on factor graphs (which are connectionist, but not neural).

    • stevegrand says:

      Excellent! Then we’ll be able to swap notes! Where are you going to be working, and with whom?

      Mind you, I’d probably take issue with the idea that one can just bolt something like that onto an existing cognitive system: it should BE the cognitive system. That’s the wrong kind of modularity, imho. Dammit, the digital computer has a lot to answer for… 😉

      But I hope you have a lot of fun! Good luck.

      • Abram Demski says:

        I’ll be working under Dr. Rosenbloom at USC.

        It won’t be a simple plug-and-play module, by any means. It will be closer to a redesign of the entire system to support those capabilities.

        Factor graphs are a mathematical structure which bears some resemblance to neural nets (and I do personally think that it’s what human neurons are “really doing”), but which surprised the research community a few years ago when it became apparent that they were the structure underlying many diverse algorithms from classical AI, machine learning, signal processing, and statistical analysis. For the most part, this has just been a nice fact, which has inspired research but not been used for a unification of these different algorithms: the “narrow AI trap” still operates, meaning that it’s still more efficient to implement a special-purpose system to do one thing that you want, rather than a general-purpose system that can do everything.

        Our project will aim to be mediocre at everything, rather than good at one thing! However, we hope to gain something in the exchange. Using factor graphs should allow the different classical algorithms to interact in a fluid way.

        I’d suspect that your neural network topology could be implemented in factor graphs, and might gain something in terms of robustness (particularly under mutation), because the graph would obey a more exacting probabilistic semantics. However, it would be a bit of work, and I’m guessing that you are comfortable in your current neural platform. 🙂

        In terms of my research, the aim is first to make our factor graphs capable enough to re-implement some standard algorithms relevant to visualisation. Currently I’d like to be able to do both Fourier sound and video analysis, and Hinton’s deep belief networks (which are similar to Hawkins’ “On Intelligence” stuff, if you need a comparison). These are narrow algorithms, but they will be interacting with the wider system in a dynamic way to achieve goal-oriented thought.
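
        If “factor graph” is an unfamiliar term, here is about the smallest possible example in code (a toy of my own for illustration; it bears no relation to the project’s actual software). Variables are binary, each factor is a weight table over the variables it touches, and a marginal is what sum-product message passing computes efficiently:

            from itertools import product

            variables = ["rain", "wet_grass"]
            factors = [
                # factor scope -> table of weights over that scope's assignments
                {("rain",): {(0,): 0.8, (1,): 0.2}},
                {("rain", "wet_grass"): {(0, 0): 0.9, (0, 1): 0.1,
                                         (1, 0): 0.1, (1, 1): 0.9}},
            ]

            def marginal(var):
                # Brute-force marginalization; sum-product gets the same answer
                # by passing messages on the graph instead of enumerating.
                totals = {0: 0.0, 1: 0.0}
                for assignment in product([0, 1], repeat=len(variables)):
                    env = dict(zip(variables, assignment))
                    weight = 1.0
                    for factor in factors:
                        (scope, table), = factor.items()
                        weight *= table[tuple(env[v] for v in scope)]
                    totals[env[var]] += weight
                z = sum(totals.values())
                return {value: w / z for value, w in totals.items()}

            print(marginal("wet_grass"))   # roughly {0: 0.74, 1: 0.26}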

  6. Trevor says:

    Dear Steve,

    I don’t want to derail your thought processes, so if this question threatens to do so, please ignore it.

    I just wonder whether you can see any relationship between your work and Craig Venter’s efforts to “boot up” a cell using synthesised DNA?

    They both look to me like attempts to uncover what is essential to “life”, but coming from rather different directions.

    • stevegrand says:

      Yeah, they are, but I think the difference lies in what kind of life we’re talking about. Venter’s work is going to tell us the minimal recipe for making a cell, which is very exciting and important. But at the same time, it’s the minimal recipe for one level of life here on earth at this period in its geological history. It doesn’t necessarily tell us much about life in the round. It doesn’t really elucidate life as a concept. Alien life might operate on radically different chemistry. Life may not even be confined to chemical systems. So Venter’s lab is to Artificial Life (with capitals) what experimental physics is to philosophy, I guess.

      In this project I’m after a different kind of fish – not life but mind. What is it that allows a physical brain to give rise to an emergent mind? I don’t think you can have a disembodied mind, so I need all the biology, but the emergent phenomenon I’m looking for is mind rather than life. A cell doesn’t have a mind, although my friend Dennis Bray might disagree with that, so “wet” A-life is in a rather different realm than what I’m interested in.

      Hope that makes sense.

      • Colin Wright says:

        Have you read “What Does a Martian Look Like?”? It’s an interesting book exploring the idea that life can exist in non-chemical media, and how we might not even recognise intelligent life when we meet it, since it may exist in a radically different medium and have very different perceptions from us; its intelligence would also have evolved to deal with its own frame of existence.

        An example given in the book is the idea of life existing as complex systems of self-replicating magnetic vortices in the plasma of stars.

        The question is how complex the medium needs to be to allow the right amount of variation.

        This is why I have no issue seeing the second order simulation of life as essentially a real form of life and the same applies of course to intelligence.

  7. Trevor says:

    Dear Steve,

    Makes lots of sense and elucidates things nicely. Thanks for taking the trouble to reply.

  8. Chani says:

    Doh. wordpress is cutting off replies from our big threads, so I guess I’ll start again down here. 🙂
    I was going to say something about language.. hmm.. in general, I think language can help with thought, but it’s clearly something secondary.

    stress makes me forget words fairly commonly, and it’s an interesting process to watch… my mind casts around, jumping from one memory to another, trying to catch an echo of the word that was said when referring to the object. I can usually describe the object, I can see it, but for that second or two the word is missing. this makes me think that words are labels that point to a set of memories. the concept of “bed” seems to me to be a knot of memories of beds and going to or from them… when I want to say I’m going to bed, I seem to think first about the place I’m going, then the bed that I expect to be there and how nice it will feel to be lying down, and then my brain pulls up the words…
    even when the word itself is missing, the concept is still there. I’m not sure what a “concept” is, though – whether there’s a special representation for one, or if it’s just a commonly-visited intersection of memories or what… but the word just seems to be yet another memory (or set of memories) linked into it…

    we do know that language influences thought, though. it can help to differentiate concepts, and it can lead to prejudices too (think of the similarity between “right hand” and “right answer”). I think it can probably help with imagination too, suggesting new ways of rearranging your memories and imagining things, and making it easier to hold several nontrivial concepts in the mind at once. or maybe it just strengthens the short-term memory and makes us more aware of what we’re currently thinking, making it easier to keep track of where we were.

    • stevegrand says:

      I know what you mean about trying to catch the echo of words. I’ve been known to forget my own name when someone pushes a TV camera under my nose. I couldn’t for the life of me remember the term “hidden Markov model” earlier. I knew the first word was some kind of qualifier, and had a slight sadness or absence to it. I knew the second word was a name, perhaps Russian. And I knew the last word was something like machine or mechanism. Needless to say, it came back to me when I wasn’t thinking about it!

      I agree that language is secondary. I do think it’s a reflection of the process of thought, which in turn is a reflection of the nature of actions. But that doesn’t mean we think in words most of the time, just that grammar has the structure it does because thought has the structure it does.

      It’s interesting what you say about words differentiating concepts and leading to prejudice. I’m not quite sure what a concept is yet either. Well, I have a vague idea but I don’t understand the details. But it does seem they’re quite fluid unless words pin them down and draw boundaries around them. And I empathize with the pejorative “right hand/answer” example, being a lefty! The way words act as markers in working memory to keep track of thoughts is interesting too – I’m hoping to explore that further with these creatures. Fascinating stuff!

      • Dranorter says:

        One of the weirdest things about language forgetfulness is how it’s contagious. Not that I’ve seen any research on the subject, but almost every time anyone asks me to try to remember a word which they’re trying to remember but can’t, I can’t either, unless I hit upon some clever way of associating my way to it (ie, a sentence I’ve heard it in, a place I’ve used it, &c.) I guess I don’t have conclusive evidence, but I’m somewhat convinced our empathy for the other person’s inability to remember gets in the way of our own attempt to remember.

      • Erin says:

        It sounds almost like attention combined with a failed initial search — your brain looks for something “similar to” or “relevant to” what you’re trying to remember, but perhaps with a faulty starting point for “similar/relevant” or the link between the concepts and the label in question is relatively weak compared to surrounding links. So you end up focusing your “attention” on a certain part of the brain looking for the information you want (and possibly going on tangents if you come close but not exact and having to turn back a few times), yet the actual information is stored somewhere else. Then later once your brain is more relaxed and not focusing on a specific (wrong) area, suddenly the label you’re looking for becomes accessible again.

        This kind of search mechanism must also keep going though, even after the focus has been relaxed, as very often in these situations the information you were looking for will suddenly pop into your head hours or even days later. What’s really fun is that sometimes you end up in a situation where by the time you recall the information you have forgotten why it was important, and you end up having to do a “reverse” search to recall why you wanted the info in the first place!

        In terms of being on a camera, your attention would end up being focused more on “do I look alright”, “am I going to cuss without thinking”, etc — whatever concerns you have. Comparatively, remembering your own name probably isn’t among your top concerns unless it’s been a problem for you in the past (how often does one forget their name, not counting drug- or disease-induced problems?) But even if that is amongst your main concerns, you can still run into the above — the concept of “remember my name” might not be strongly enough associated with your actual name!

        And then of course you could start fearing that fear of forgetting your name will cause you to forget your name and then you’re in real trouble! We seem to have a mechanism to deal with this sort of runaway anxiety (normally, that is; I imagine this is where anxiety attacks come from when things aren’t working normally). We can take an almost outside view of ourselves, realize that we’re starting to panic and force ourselves to calm down a bit.

        Anyway I just found out about Creatures and Steve a couple days ago and certainly haven’t made it through any large number of blog posts yet, so I hope I’m not being too redundant or off the mark ;).

      • stevegrand says:

        Yep, that sounds right. Who was it said “we have nothing to fear but fear itself?” Nothing to fear but the fear of fear itself is so much worse. I’ve seen a fair bit of runaway anxiety in recent years and it’s interesting how that ability to monitor ourselves can dry up sometimes. Prefrontal dopamine problem, don’t ya know! The best solution I’ve found for myself is not to agree to go on TV! 😉

  9. Warren says:

    Hi! I’ve been generally following your progress since I read ‘Growing up with Lucy’, which was a brilliant and inspired look at the mind and body. I’m a great admirer, and hope you have some good luck in the future!

    Having just read this post, I thought I would make my first comment… 😉 I’m not technically knowledgeable enough (or smart enough!) to work in AI, but I’m fascinated by it, and (think!) I can usually grasp the major concepts in an abstract way… So I hope I don’t sound too dim if I make a couple of observations lol! Sorry for the long post…

    Reading a few of the comments, and the debates about how we view our world with our imagination, I was struck by the fact that a lot of people seem to think we ‘predict’ everything like a computer simulation. I agree that what we see from our eyes is essentially a predictive image from moment to moment of what we expect to see, more than actual ‘reality’ as such. But I think this also gives the false impression that most of our predictive/imaginative thoughts are based in a ‘simulation’ of the world around us (hence ideas like using a simulated world). Scientists always say things like ‘our brains can perform billions of calculations a second’ or somesuch…

    Personally, I think that this is both over-stating our abilities, and under-estimating them. I think our brains ‘average’ things, based on experience. The old ‘throwing a ball’ analogy is probably best. Someone throws a ball to us, we predict where it will go, we catch it. Some scientists seem to imply that our brains are performing all the necessary calculations to catch the ball. I don’t believe they’re doing anything that complicated or mathematical. Throw a (very soft!) ball at a baby, and even when it tries to catch it, its co-ordination isn’t very good. Gradually it improves. A child is usually better, but not as good as an adult. An adult can improve by practice, because practice is building up their experience of averages. It also explains how even the best people can make mistakes. We can ‘predict’ with those averages, but we are not flawlessly using some mystical behind-the-scenes complex mathematical formulae, dotting out the trajectory on a mental graph.
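
    To caricature the difference in code (a toy illustration of the idea, nothing more; every detail here is made up): prediction as a lookup of averaged experience, with no equations of motion anywhere.

        from collections import defaultdict

        # remembered outcomes, keyed by a rough perceptual "feel" of the throw
        memory = defaultdict(lambda: [0.0, 0])      # key -> [sum of outcomes, count]

        def feel(speed, angle):
            # Coarse categories stand in for "this looks like throws I've seen"
            return (round(speed), round(angle, 1))

        def observe(speed, angle, landed_at):
            s = memory[feel(speed, angle)]
            s[0] += landed_at
            s[1] += 1

        def predict(speed, angle):
            total, n = memory[feel(speed, angle)]
            return total / n if n else None         # no experience, no prediction

        # Practice builds up the averages; prediction is recall, not calculation.
        observe(10, 0.8, 9.7); observe(10, 0.8, 10.4); observe(10, 0.8, 9.9)
        print(predict(10, 0.8))                     # 10.0 (ish), learned from examples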

    A perfect example… Being an artist, the other day I was practising drawing heads. My girlfriend looked at a circle I had drawn, and asked how I’d drawn the circle so precisely. The thing is, I hadn’t. As any practised artist will tell you, our brains are programmed to pick out averages, and turn them into expected shapes. That’s why concept drawings, before being polished, have a wonderful collection of missed and excess lines shooting off all over the place. You draw a circle by doing a circular motion MANY times, over and over in the same place. Each circular motion of the hand is probably fairly inaccurate, but by the time you’ve finished, your brain is picking out the average, and seeing a rather nice shape that it expects to see. Repeated motions find you following that imagined guide circle, and strengthening the lines that represent the circle best.

    I think that the human mind works in a similar way with the ‘ball’ example. We don’t ‘predict’ where the thrown or bouncing ball will arrive. We average out an expectation based on years of seeing balls thrown through the air. We ‘see’ a perfect circle, not because it’s there, but because it’s what we expect to see.

    Someone else mentioned primitive man and his expectations of whether a branch will break or not when he climbs. This would be based on his knowledge of averaged expectations. He’s climbed a few when he was younger, he’s seen others climb, he’s seen different branches break for different reasons at different times. All these things combine to make not only general expectations, but the ability to then ‘imagine’ each of those possibilities. He might think ‘that branch can hold me’, but he can also abstractly think ‘but if the wind were blowing, even though it isn’t now, it would break’. He’s not scientifically working these things out. He’s using combined experiences of every kind, and the knowledge he’s gained from them.

    This also leads to my next thought, which is that we don’t even really ‘visualise’ things in the typical way we think we do (TV and films with dream sequences that show a dream world looking like the real one have a lot to answer for!). When we imagine that ball thrown at us, or a concept designer imagines an alien creature’s face, we don’t ‘literally’ see those things. We have a strange ‘feeling’ of their expected look (whether predicting where the ball will arrive, or changing variables to get the imagined creature face we want). Our imagination only produces the barest minimal knowledge needed for any given thing.

    Imagine moving a chair across the room. You have a vague feeling of space and shape in relation to the place you plan to put it. You don’t actually imagine the genuine shape, the slats in the back, the texture of the wood, the light on its surface. You CAN imagine those things, but only when you choose to. Imagine a room full of objects. Your brain instantly came up with a mental approximation of a cluttered room, you can feel it. But did you know what any of those objects were, until you focussed in on one? Most likely when you read that last sentence? And did you really ‘see’ it in your mind’s eye? Or was it more of an abstract knowledge and feeling of those shapes, much in the way words on a page can symbolise images or concepts, without actually being visually representative of them?

    Our imagination is like our eyes in a way (without the actual ‘imagery’). A roving camera with multiple levels of limited detail (from precise to exceptionally vague) but with a limited field of view, zooming in and out all over the place. The difference is, we can change what’s in it. None of it ‘exists’ except for the moment of imagining it. So there would be no ‘simulated’ room matching the real one we’re in. Because that would imply constant structure. Simulating human imagination has to be based on the ability to create something utterly new, from moment to moment, with its only contributing information source being our averaged memory and experience, guided by our desires (Whether that’s the desire to write or paint, or the desire to catch that ball).

    Sorry if I’ve rambled there! 🙂

    • stevegrand says:

      Yes, I agree with everything you say. Our brains DO have a model of the world but it’s not a precise mathematical model and it works in a fuzzy way, as you point out. I’m never quite sure how literally some researchers expect us to take their statements about the amazing amount of math we’re supposed to be doing when we catch a ball, etc. It is an analog computation, but it’s not math in the formal sense. And you can’t just equate it to the ‘mathematical’ transformations performed by individual neurons. I think it’s fair to say that the brain performs calculations, but only as a metaphor, and people do seem to take it too literally sometimes. That’s as dumb as thinking how clever a telephone wire is for working out exactly what shape curve to become as it hangs between two poles!

      And I agree that mental imagery is non-photographic. Details come and go as you need them; like you say, often we get more of a general sensation of things than an image. Of course real vision is much more fragmented than most of us realize, too! We can’t see anything sharply outside the middle degree or two of our visual field, for a start, and yet we THINK we see a clear, three-dimensional world made from discrete objects. I think the important thing from my perspective is that imagining things makes use of the same parts of the brain we use for seeing, but this top-down influence is pretty vague and volatile. We seem to have only a limited ‘spotlight’ of attention with which to generate internal images, so we can choose to see the whole picture vaguely or one detail with only limited context, and when we recreate a scene we aren’t always triggering primary visual circuits – sometimes we just have a general sense of catness, or furriness, or whatever.

      These limitations and weirdnesses are fascinating! It’s computation, Jim, but not as we know it! Thanks for the observations.

      • John Harmon says:

        Great discussion… I have a take on how the brain generates prediction: that is, memory activation creates prediction. With your ball flight example, seeing a ball coming toward you triggers a larger “ball flight sequence” memory. So “first 2 seconds ball flight” memory triggers a larger (most strongly associated) “4 second ball flight” memory, which in turn triggers the “last 2 seconds of ball flight” part of the memory. This last 2 seconds is the “prediction” of what the ball will do next.

        Another point: the “4 second ball flight” (or “10 meter ball flight,” or however you want to characterize the ball flight memory) is a generalized or invariant memory. Meaning, it is not a ball flight episodic memory, but a collection of similar “ball flight” memories forming a range of “ball flight” patterns.

        Viewing prediction as a part-whole memory activation process can be extended to ANY prediction that occurs in the brain: perceptual (what is that falling vase going to do next?), visual field (what is the visual field going to look like next if I take a step forward?), social (what is he going to say next?), etc… A toy sketch of what I mean is below.
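
        In rough toy code, the process I’m imagining looks something like this (all names and values are invented, purely to illustrate the part-whole activation):

          # Toy sketch: a partial cue activates the most strongly associated
          # stored (generalized) sequence; the unseen remainder is the prediction.
          def predict(cue, memories):
              best, best_score = None, 0
              for memory in memories:
                  # association strength = how well the cue matches the start
                  score = sum(a == b for a, b in zip(cue, memory))
                  if score > best_score:
                      best, best_score = memory, score
              # the part of the activated memory beyond the cue is the "prediction"
              return best[len(cue):] if best else []

          # generalized "ball flight" memories, distilled from many episodes
          memories = [["rising", "peak", "falling", "catchable"],
                      ["rolling", "slowing", "stopped"]]
          print(predict(["rising", "peak"], memories))  # -> ['falling', 'catchable']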

        Just my 2 cents… great discussion…

  10. well says:

    Premise: the best tactic to obtain results is indeed to take inspiration from nature, so I think you are on the right track with your project.

    But for a moment let’s consider life as a way in which matter is organized. That does not mean I’m ruling out a god: if you create time, evolution in time is part of a creation and not an alternative to it.

    A stable configuration of matter is more likely to be found than an unstable one, by definition.
    A configuration which grows, like a crystal, is more likely to be found than one that doesn’t grow; all other things being equal, there will simply be more of the former.
    A configuration which is able to grow after being split is more likely to be found.
    A mostly stable configuration which trades a little stability for adaptation to different conditions is more likely to be found. It may harness energy (from the spiritual plane too, if such a thing exists), may develop complex reactions, may form a colony of more specialized cells, may develop memory and predictive abilities, and may include a model of itself in its thought processes to improve their precision.
    (I kind of rephrased “be fruitful and increase in numbers and fill the water in the seas…” )

    So theoretically the configurations called life, which from a statistical point of view should not exist because of their complexity, could become relatively widespread because of their characteristics.

    If there is a theoretical possibility, why not try to simulate such a process? One doesn’t need to recreate the dynamics of the real world.
    But still one must respect “time” and “chance” as selectors. For example, one could start a massive “Core wars” simulation after filling the memory with random data and let it run, injecting random data from time to time; something like the sketch below. Order out of chaos…
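
    A toy version of what I mean (not real Redcode, just a random instruction soup with occasional mutation; every name here is invented):

      import random

      SIZE = 1024
      OPS = ["nop", "copy", "jump"]  # toy instruction set

      # "time": fill memory with random data and let it run
      memory = [(random.choice(OPS), random.randrange(SIZE)) for _ in range(SIZE)]
      pc = 0

      for step in range(100000):
          op, arg = memory[pc]
          if op == "copy":                 # copying a cell is the seed of replication
              memory[arg] = memory[pc]
          elif op == "jump":
              pc = arg
              continue
          pc = (pc + 1) % SIZE
          if step % 1000 == 0:             # "chance": inject random data now and then
              memory[random.randrange(SIZE)] = (random.choice(OPS), random.randrange(SIZE))

    Run long enough, cells holding self-copying instructions tend to spread through the memory: a crude kind of order out of chaos.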

    • stevegrand says:

      > So theoretically the configurations called life, which from a statistical point of view should not exist because of their complexity, could become relatively widespread because of their characteristics.

      And you and I are here as an existence proof that this happens! Yes, that’s a big topic. Certainly the world doesn’t have to be like ours, although there seem to be constraints on what sorts of artificial universe would work. You might be interested in my friend Bruce Damer’s project, Evogrid (http://www.evogrid.org/index.php/Main_Page).

      For my part I’m certainly interested in that whole topic at a theoretical level, but for this game I have other things in mind. There are plenty of abstract evolutionary systems out there, but my main research interest is intelligence, so this is a project about that.

      A prime mover god like you mention has its own problems, but I’ve said enough about religion for a while. I completely agree that self-organization is an unstoppable rule, and it’s certainly possible to achieve it in simulation (although I’m rather cautious about the amount of time it can take). It’s just not the focus of this project, which is about the brain and levels of complexity that would take at least many thousands of years to evolve from scratch. There’s plenty of room for other people to explore this area, though! Maybe you. It’s a fascinating subject, I agree.

  11. James says:

    Did the brains of norns in the Creatures series function differently when they were asleep, or was the sleep state imposed as a biochemical balancing tool? I ask because, through my admittedly limited trawling, I haven’t come across an alife simulation that uses more than one true state of “consciousness”, which I found odd because we can reasonably argue that a lot of modelling, predicting and learning happens when animals are asleep.

    I’ve always (and probably naively, as I admittedly have little knowledge of the subject) thought that it’s likely we keep a limited cyclic model of our stimuli and actions throughout the day in a network that has a small “voice” on the rest of our CNS. During sleep, when a lot of our CNS shuts down, this “voice” or impact becomes greater and we start to run our thoughts on the merger of information (or experience) cycling throughout this network. This could explain why our dreams are often an adapted (and sometimes weird) version of our experiences during the day and are more likely to be based upon the most recent stimuli before sleep, and possibly why we are capable of dealing with situations that are significantly different from what we have experienced.

    Waking intelligence is obviously critical in a project like this (no one wants to watch their creature sleep for hours on end), but I wondered if the brain state during sleep is something you’ve thought about for your new project or what your ideas are regarding this topic?

    • stevegrand says:

      No, when norns slept they just shut down and became relatively insensitive to stimuli. But then they had no consciousness to alter.

      This time, though, I’ve a strong feeling I’m going to need them to sleep. There are certainly memory consolidation tasks that would be more efficient during sleep and might mess up their responses if they were happening whilst awake. But it may be necessary to reignite experiences as dreams, too. I don’t know yet, but I’m pretty sure sleep will be more than just for effect. I do hope so.

      • Dranorter says:

        Just from what you’ve written about their imaginations, I’d imagined they’d have a sort of continuum between daydreaming and acting which wouldn’t be far off from having a sleep cycle. If such is the case, one might simply try to figure out how to make them ’emergently’ prefer to close their eyes (and reduce stimulus in other ways, for example lying down) when going in for some heavy daydreaming.

        How real a sense of touch are they going to have? I’m imagining that learning to move one’s own muscles would largely be based on the sense of touch.

  12. Vadim says:

    I was wondering, what are your thoughts on how to make a brain capable of figuring out multi-step processes? Norns are very simple-minded: “I’m hungry, there’s a lemon, I eat it.”

    But what if that needed more steps, for example a lemon that needs to be peeled or a coconut that must be cracked open before it can be eaten? Or even more complicated ones, like figuring out that to get to the food, taking the elevator might help? How do you design a brain that can deal with that kind of thing?

    • stevegrand says:

      Whoops, sorry – missed this one. I have lots to say on planning and conditional sequences! But it’ll have to keep for later because you’d need to know a lot of the context first. Definitely these creatures will be much smarter in that regard, although I still have some mysteries to solve and I don’t know how far I’ll be able to take my ideas. It’ll all emerge as the project develops and we can discuss it on the tester site.

      • Ben Turner says:

        Hey Steve – just want to make sure I’m not missing something… I assume the tester site you mention is to-be-created? Actually, if it’s a place where the kinds of people who are interested in your work get to interact with you and each other, maybe you should just say it doesn’t exist either way, because that sounds like it would have the serious potential to derail my dissertation!

      • stevegrand says:

        Hi Ben, No, you’re not missing anything yet. I have a site ready to go live when the funding period is over, and I’ll give all qualified backers an account on that site (can’t do that until I’m able to request emails etc, on April 8th), with permissions appropriate for their reward level. It exists, but doesn’t have much content yet – I’m working on it. It has a forum, bug reporting, wiki docs on the theory and API, etc. The content will grow over time. I’ll do my best to derail your dissertation! 🙂

        Actually, you’re not the only one to have asked this – I’d better do a KS update about it.

  13. kerome says:

    One comment I would add on the nature of language: it is not entirely secondary. Much of the internal mental model is assumed knowledge passed down linguistically, and so language acts as an input carrier. That internal mental model is “fuzzy” and wordless, and seems to act mostly on a conceptual plane, but ‘knowledge crystallisation’ happens at the point of verbalisation, where you translate an aspect of the mental model back out into language.

    Have you ever had a moment of realisation when the words coming out of your mouth clarified something for you that you already knew? This has happened to me a number of times, and made me realise that language is a feedback loop into the conceptual model as well, even when no other people are participating in the conversation.

    Also, it would be wise not to underestimate the influence of the exact language on the internal mental model. As you would expect from the above, language colours the way you think – german for example is a very precise, practical language, and the Dutch word ‘gezellig’ does not have an exact English equivalent. The linguistic-to-conceptual processes of the brain are quite heavyweight…

    Anyway, a fascinating discussion and blog, I shall watch with interest 😉

  14. It’s so good to hear someone else has been thinking about this. I started working on a new form of neural network a few years ago that in theory should be able to generate the sort of system you discuss. Granted, I don’t know for sure if it will work or not, but the early models resulted in a system that performed mathematical estimations in a pattern far more similar to that found in humans and other animals than traditional neural networks do. I’m strongly tempted to drag it out and start building part two now that I have taught myself to program and don’t need to rely on other people’s tools.

  15. Pingback: How does thought work? « Great Ape Thoughts

  16. R. Grimm says:

    I’ve been searching for further news as to how Lucy turned out. I read both books with the greatest of interest… it seems we have made similar discoveries in the hunt for machine intelligence, if not consciousness, which is actually about where I am now in my designs.
    I also think it was a strange coincidence that you and Jeff Hawkins would come up with the idea of how important the layers of cortex turned out to be… and I haven’t heard much from him either in the last couple of years. But I digress, I won’t bore you with idle chat about things you already know about… my main question was about Lucy, a most fascinating project which I really hoped would turn out to be worth all the trouble you took to build and study her.
    Personally, I think it can be done. Best of luck in the future on other projects… as well as books.

    • stevegrand says:

      Thanks! Actually Lucy sort of died when I ran out of money. Just as I was finishing writing the book I got a fellowship at NESTA, which gave me a year’s money to start work on Lucy II. But a year isn’t very long when you have previous commitments and have to sort out how to earn a living once the year is over. Plus I got bogged down trying to create decent muscles. So I didn’t get as far as I’d hoped on Lucy II, and a bunch of other personal things really got in the way. But now I’m working on a new game – not robots, just virtual creatures, but nevertheless using all of the ideas I was beginning to grope towards while I was working on Lucy. I’ve added a lot to the theory since then and I really hope it’s all going to work out! I’ve been working on it full-time since April and I’m just starting on the meat of their brains now. Fingers crossed.

      • Colin Wright says:

        I was going to ask what happened to Lucy II, having just finished your book. Never mind though; hopefully you will make enough from Grandroids to continue your work in whatever direction in the future.

        I found Growing Up with Lucy very thought-provoking, by the way. Are you going to bring out a book about your work on Grandroids once you’ve published it? I’d be very interested.

        Unfortunately some bad things happened mid last year that drew my attention away from this project for a while, and thus stopped me pledging through PayPal to join your funders site, so I’m going to have to wait unless that offer’s still open.

        Anyway best of luck Steve and I look forward to seeing what you produce.

      • stevegrand says:

        So sorry to hear about the bad things, Colin! Hopefully I’ll write a book about the new project – if it actually works (and that’s a big if!) then I’ll really have to write a semi-technical book, because I think the ideas might be quite important to share, but they’re way too complex for a paper! I’m still gratefully accepting a few donations from stragglers who wish they’d seen the kickstarter thing, so if you’d really like to do that, drop me a note to steve at cyberlife hyphen research dotcom. Hope the bad things have gone away. Thanks for the good wishes!

  17. Mysti says:

    By the way, I’m very excited about Grandroids, I’ll be sure to buy both Grandroids and Creatures 4 when they come out!

  18. Hwdge says:

    This is a really interesting project, but I’m also interested in the intelligence of the grandroidians (I don’t know if that’s what they’re called). Would these little critters really learn, and if they did, could they be taught basic mathematics? Not the abstract form we know as algebra, but in the sense of: if I eat four bananas and I have 8, will I be able to survive eating this many? Also, the locomotion of these creatures fascinates me. Could it be possible, in the correct environment, to evolve an aquatic creature?

    • stevegrand says:

      Hopefully aquatic creatures will come later. Their bodies can only evolve within a given body plan, so quadrupeds can’t turn into fish (but then real evolution is far too slow for that to happen anyway). As for basic number, I don’t know. I doubt they will have a sense of quantity but I may be wrong. It’s still a bit early to tell.

  19. Colin Wright says:

    Steve, have you considered taking inspiration from a comparison of bird and mammal brains? Evidence seems to be showing that some birds are capable of some pretty high-level cognitive behaviour, although they have developed a different brain structure to do it: no neocortex. Comparing the different structures could give some clue as to what these different neural circuits have in common, and thus what aspects of both are likely to be essential components of such a system.

    • stevegrand says:

      Yes, birds are a bit of a bother – some of them are easily the brightest things on the planet, pound for pound. But it turns out they DO have neocortex, kind of! I can’t find the article now but I remember someone recently examined their cytoarchitecture and found that some part or another had basically the same structure as mammalian cortex. Mammalian cortex itself varies quite a bit between species and that certainly led me to a train of thought, but it’s so damn hard to get definitive data on the circuitry.

  20. Darrell says:

    Hi Steve, how’s the project going? I’m sure you may have thought of this, but you may still need some kind of neural processing sequencer. I don’t mean DNA; something you add at the start, not the finish. My reasoning is that we as humans are very good at judging/understanding abstract sequences, which I know you’re aware of.
    I have a feeling that the concert in our brains is a construct of brain regions that moves to an abstract level, which gives us a sense of self by giving us predictions of what comes next in neural processing patterns. The greater the insight we have into those patterns, the greater our ability to alter our own thought processes over time. I generally think about just how interconnected human neurons are, and how specialist regions in the brain help us form an overall construct. The mental and virtual images your artificial life may generate could be hammered by connectivity limits between neurons and the size of brain regions. Maybe one way around this is getting a sequencer to alter the neural model as if it were another brain area responding to different regions of the brain in concert, in effect compressing the areas of the brain until needed? Thanks for listening Steve, good luck again 8)

    • stevegrand says:

      Thanks Darrell. The prediction of what comes next is key, I agree. I’m not quite sure what you’re suggesting, but I certainly am getting hammered by speed limitations, if not memory. It might be that I’m already doing something like that – I have several pretty complicated mechanisms for sequencing everything.

  21. liluakip says:

    I realise that you’re referring more to general abstract modelling here, but I was curious whether you were aware of the non-universality of visual imagination among humans. In the late 19th century Francis Galton was astonished to learn that many of his colleagues denied the very existence of mental imagery! More modern quantification attempts have found a great diversity in the vividness of imagination, associated with skill in recalling photographic details.

    So while some kind of internal model which can predict the environment’s behaviour seems essential to consciousness, “imagery” in a strict sense does not.

    • stevegrand says:

      Thanks for the link – that’s fascinating! I do know someone who claims to have no mental imagery at all, and I’d noticed that there are many kinds of internal world (mathematicians and poets seem to have little in common in terms of their inner experience, say), but I had no idea Galton had already been there and done the research!

      I’m a visual thinker myself, but like you say, I’ve been assuming that there are other modalities available as well and I’ve just lumped them all under imagery. Some people claim to think linguistically and so I’ve been assuming that was a kind of virtual audition instead of vision. ‘Mentation’ might be the better word but I’d thought ‘imagery’ would require less explanation. That, though, is probably my mistake as a visual thinker!

      I find it hard to see how the brain could perform many planned activities without some kind of mental image, so I wonder if the multimodal mechanism is there in all of us but we differ in the degree to which we’re conscious of its parts? It’s hard to see (!) how the thought ‘can my car fit through that gap?’ could be computed without some kind of mental attempt to rehearse the event and ‘see’ what happens. Even the most abstract, symbolic reasoning would still have to estimate the size of the gap relative to the size of the car, and without a ruler that’s necessarily a visual feat. So maybe we’re constantly computing in terms of all our modalities at all levels in the hierarchy, including proprioception, balance and acceleration, motor patterns, etc. but we differ in our awareness of it? I hope it’s something like that or my theory is wrong!

      What do you think?

      • Mel says:

        I personally think that we all use a variety of modalities but are simply more competent at a specific one and thus are more aware of it. In my psych lecture we were debating whether language or thought comes first during development. My friend who apparently thinks in words obviously said language comes first. To me, that is almost ridiculous. It can be proven that babies think well before they can completely speak. Even the “goo-goo-ga-gas” can be verbalization of some sort of thought. But anyway, unlike my friend, I often grasp a concept or theory and will have trouble converting it into words. So I assume I barely use words at all when thinking. My thoughts consist rather of imagery and sensations. It’s hard to explain, but for example, when I try to remember a specific lecture room for a subject, I notice that I do so by remembering which way each lecture room faced! I remember feeling like I was facing the left, etc. I also remember emotions and states of mind better. I use imagery and can imagine an object from different angles, etc., but cannot hold the image for long; it kind of becomes “blurry”. Surely my friend who thinks in words has some sort of imagery while thinking or he would not grimace at a mere description, and not an image, of something disgusting. I find that people’s theories of modalities are very subjective because they only report on their own – there are definitely multiple that we all use, with one that stands out.

      • stevegrand says:

        I’ve often wondered about whether people who say they don’t think in pictures just have blindsight for the pictures they do think in. Ask your friend to imagine two cups, a red one on the right and a blue one on the left. Place an imaginary spoon in the blue cup. Swap the cups. Which side is the spoon? I don’t see any reasonable way of answering that without visualizing the process. It seems to me that a large amount of carefully designed and explicit rules about cups and spoons would be required to do it linguistically, using predicate calculus, say. But it’s easy if you can just “see” the cups. It would be interesting to know if people who claim not to be visual thinkers are slower at tasks like this and what their intuitions are about how they solve them.
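
        Just to make the point concrete, the non-visual version has to look something like this toy program, with every rule about cups and spoons spelled out by hand (the names are invented, obviously):

          # explicit symbolic state: which side each cup is on, where the spoon is
          state = {"red_cup": "right", "blue_cup": "left", "spoon_in": "blue_cup"}

          def swap_cups(state):
              # rule 1: swapping exchanges the cups' positions
              state["red_cup"], state["blue_cup"] = state["blue_cup"], state["red_cup"]
              # rule 2: a spoon travels with whichever cup it is in
              return state

          swap_cups(state)
          print(state[state["spoon_in"]])  # -> 'right'

        Notice that rule 2 only comes for free here because I happened to record ‘spoon_in’ rather than the spoon’s side; choose the representation differently and you need yet another explicit rule. A visual thinker gets all such rules for free from the imagery.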

    • stevegrand says:

      P.S. It’s interesting in the anecdotes Galton quotes that imagery is generally described as pictorial. I guess that might be a factor in the sense that I ‘see’ vision as being much more varied than that. I said I was a visual thinker but on Galton’s scale I wouldn’t come near the top because what I see is form, motion, depth and especially interaction, much more than I see colors and distinct shapes. If I build a car engine in my mind I can see it only vaguely, but I can watch it move and tell you where the stresses are greatest and where the dirt in the oil will accumulate. So what I’m seeing isn’t very pictorial but it’s definitely visuospatial. Maybe part of the difference, then, is in what we assume the word ‘visual’ to mean? Maybe people who don’t actually see photographic images believe they don’t see anything at all because they are defining the term narrowly (as Galton seemingly did)?

  22. Mel says:

    I study both psychology and philosophy, and all the attempts at explaining what the “mind” (consciousness) is have brought me to the brink of insanity. Determinism is my new enemy. How could everything we do be responses, shaped by the environment, upbringing, genes etc.? If we have no free will, then is there really any difference between a human or animal and a norn, for example? Your project really sounds interesting, but I really wonder if artificial life would ever be able to really think or doze off into a dream… for me that’s still more of a metaphysical phenomenon. Oh I know, the metaphysical is so frowned upon because it can’t be proven… yet. I really would like to see where this goes and I wish I could chip in, but the exchange rate is simply crazy, though I will be buying your game when it does come out!

    Also, how could you be embarrassed by your work on Creatures?? I don’t see anyone else coming up with anything that even competes with it, even though we have such great technology now. I am so upset with how Creatures 4 is turning out! Basically all the elements are being removed except for some ability to tinker with their brains, apparently. So seeing this project really gives me hope again! Pleeease include different species and complex, rich environments like in Creatures. I also love how they are generally quite quirky, compared to these new, er, Norns that look so dull… Really looking forward to it!

    • stevegrand says:

      Fear not, Mel! Determinism isn’t the evil it seems and there is a Third Way, I promise! 🙂 I want to write a book about it one day, if I can muster enough brainpower. The book will be called Spirit, if that’s any guide.

      • Mel says:

        Great! Thank goodness for people like you; I wish I could master the words to spread such thoughts to others too. You better muster the energy (I prefer to call it that); determinism needs more opposition :p

  23. Mel says:

    Hi! I was just wondering how Grandroids is going and if you know how much longer it will take? 🙂 Also, I’m wanting to test some human behavioral conditioning theories on some norns, so I’ll have to study the norn brain and see how similar it is to ours. Luckily my psychology studies make it a bit easier, though behavioral psych is not my favorite. If you have some free time maybe you could take a look at my experiments page on my blog and give me a few hints? I’d really appreciate it!

    • stevegrand says:

      Hi Mel, sorry about the delay – been waylaid by some neuroscience problems – you know how it is…

      Great blog! Good luck with the experiments. Don’t expect norns to be TOO like real animals. Classical and maybe operant conditioning should work up to a point, but nothing cognitive, because norns don’t think, they just react. They’re basically just a brainstem! I’d be interested to see the results, either way.

      The new brain model is MUCH closer to a mammal’s brain. It’s extremely complex but assuming I can make it all work, the creatures will actually THINK, in the sense that they attempt to predict the future, they can combine sequences of actions to assemble a plan that they think will lead to a desired outcome, simulate possible narratives in their minds, have hopes, fears, worries, intentions, expectations, dreams, and so on. They still won’t be very smart, but mostly because the problems they face will be a lot more difficult – they have to learn to walk and look at things and reach out for them, instead of relying on scripts to do it, and the world is realistic, 3D and physics-based, so it’s quite challenging. There’ll be ample scope for some real psychology experiments and there are a whole bunch of hypotheses to be tested (including the question of whether they have first-person consciousness).
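
      To give a flavour of the planning side, here’s a toy sketch of “simulating possible narratives”: a search over imagined actions until a desired outcome is predicted. It’s a gross simplification and none of it comes from my actual code:

        from collections import deque

        def plan(start, goal, actions, predict):
            # breadth-first search through imagined futures, not real ones
            frontier = deque([(start, [])])
            seen = {start}
            while frontier:
                state, path = frontier.popleft()
                if goal(state):
                    return path                     # a plan we *think* will work
                for action in actions:
                    nxt = predict(state, action)    # imagine the consequence
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, path + [action]))
            return None

        # toy world: a coconut must be cracked before it can be eaten
        def predict(state, action):
            if state == "whole" and action == "crack":
                return "open"
            if state == "open" and action == "eat":
                return "eaten"
            return state

        print(plan("whole", lambda s: s == "eaten", ["crack", "eat"], predict))
        # -> ['crack', 'eat']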

      Should be out by the end of the year (he says with fingers crossed behind his back!).

  24. Question! says:

    Hi, while your efforts seem primarily focused on the AI, the genetics helped to make Creatures a cool experience too.

    I have a question: in Creatures, the biology was an abstraction where no chemical had any properties of its own. When a chemical had any effect, it was because the genes of the creature made it happen. While this is very convenient for evolution, it is also kinda boring.

    When a creature is affected by a toxin, for example, if you want the norn to adapt to it, instead of working out some kind of way to neutralize it, you can simply remove the gene that makes the toxin a toxin and have the creature ignore the chemical. You can also make substances come out of nowhere, like a creature that never needs to breathe because it can produce oxygen out of thin air (actually, not even thin air is needed).

    While some chemical laws independent of the creatures would avoid those problems, they definitely make evolution and adaptation harder (an argument for r-reproduction). So, what kind of biology do you think your game will use?

    • stevegrand says:

      That’s an excellent question and I totally agree with what you say. As it happens, I’ve already written both the genetics and the chemistry, so I can give you a definitive answer!

      Genetically, the creatures are diploid this time, so we can have classical Mendelian inheritance. There are also other nice features, such as the ability to have genuine sex chromosomes (usually XY, but other possibilities exist too), and gene switches that allow for epigenetic changes. Genes can also affect more kinds of things, like skin patterns and body proportions (up to a point).
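
      As a toy illustration of what diploidy buys us (this is just the textbook Mendelian mechanism, not my actual genome format):

        import random

        def make_gamete(parent):
            # meiosis, crudely: pick one allele from each chromosome pair
            return [random.choice(pair) for pair in parent]

        # each parent is a list of allele pairs; the last pair is the sex pair
        mum = [("A", "a"), ("X", "X")]
        dad = [("A", "A"), ("X", "Y")]

        child = list(zip(make_gamete(mum), make_gamete(dad)))
        print(child)  # e.g. [('a', 'A'), ('X', 'Y')] - a male carrying a recessive 'a'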

      The chemistry is completely different, for exactly the reasons you mentioned. This time I have “proteins”, made from sequences of “amino acids” (the letters ABCDX and O). The sequence of characters determines the functionality and the energetics of the molecule. X and O cause lysis and fusion respectively, so basically a chemical such as ABoCD will act as an enzyme that converts chemicals AB and CD into ABCD. There’s a simple additional rule that then allows enzymes to make other enzymes, so ABoxD will convert AB and xD into ABxD, which itself is an enzyme that breaks down any ABD into AB and D! Also, the letters have different energy levels, so some molecules are harder to make than others, while some make good signaling chemicals and others make good energy storage chemicals.
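
      In toy code, those two rules come down to something like this (a deliberately stripped-down sketch; the real system also handles the energetics and much more):

        def apply_enzyme(enzyme, soup):
            # fusion: ABoCD joins an AB and a CD into ABCD
            if "o" in enzyme:
                left, right = enzyme.split("o", 1)
                need = 2 if left == right else 1
                if soup.count(left) >= need and soup.count(right) >= 1:
                    soup.remove(left); soup.remove(right)
                    soup.append(left + right)
            # lysis: ABxD breaks any ABD into AB and D
            elif "x" in enzyme:
                left, right = enzyme.split("x", 1)
                if (left + right) in soup:
                    soup.remove(left + right)
                    soup.append(left); soup.append(right)
            return soup

        print(apply_enzyme("ABoxD", ["AB", "xD"]))  # -> ['ABxD'], a new enzyme...
        print(apply_enzyme("ABxD", ["ABD"]))        # ...which gives ['AB', 'D']

      So the enzyme-making-enzyme trick falls out of the same rule, at least on this reading, provided fusion takes precedence when a molecule contains both letters.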

      So I’ve solved the “creating something out of nothing” problem and also made it so that the properties of chemicals are determined solely by their (genetically determined) structure. Toxins will only be toxic if they have a structure that can genuinely disrupt metabolism or signaling. Medicines will only work if a) they have a structure that breaks down or otherwise compensates for the action of the toxin, and b) can themselves be produced by chemical reactions from available ingredients (in the lab, if not in nature). It’s a lot more complex, especially for me (!), but I think it will be much more interesting.

      I wrote the chemistry code over a year ago but haven’t had much reason to use it yet, so I don’t know how insane I’ll become as I try to build a complete physiology from it, but I THINK it will work out, and it certainly addresses all the problems you mention.

      • Question! says:

        Thanks for the very extensive reply. If anyone other than you were working on this, I would doubt they could manage something this ambitious, but since you already made Creatures (afaik by far the most advanced game of its kind, especially for the time it was released) I’m really excited about your project.

      • Rob Lingley says:

        Interesting. I was inspired by Holland’s CAS arguments, but when I tried to use his architecture I couldn’t relate. So I used Hofstadter and Mitchell’s Copycat as a framework for a chemical reactor. Copycat is usually used for small models of the mind, but I found it great for chemistry. Adding an RNA-like chemical followed quite logically. The only issue was how to provide protein folding. (www.robsstrategystudio.org/awfcpiaso.htm). How do you solve this? Rob

  25. Pingback: Consciousness is for life (not for Christmas) | Ruairí Loves

  26. Melody says:

    Hi Steve, I think that your work is really interesting, thank you for that. Please excuse my poor English, I will try to ask my question anyway! I don’t see exactly what your definition of “consciousness” is, i.e. what does it mean to have a conscious AI? I thought at first that it is the capacity to distinguish ourselves from our environment, to have a feeling of “self”. The next question is how do we develop this ability? I read that the same areas in our brain are involved in motor planning and in inferring others’ intentions, maybe because at the beginning we don’t distinguish ourselves from the world and other people. As long as others’ actions are responding to our desires and needs (like a mother to her baby), maybe there is no reason to realize that we are not almighty and that other people are not a part of ourselves. Then, maybe, consciousness emerges from frustration! What do you think?

    • stevegrand says:

      Your English is excellent! I love the idea that consciousness emerges from frustration! I think you might be on to something there. Certainly young children don’t seem to distinguish self from other very easily. And yet at the same time we start out very egocentric and only gradually learn that other people feel things just like we do. Either way, the ability to perceive things that happen to other people almost (but not quite) as if they were happening to us is important. And it’s interesting when it goes wrong or gets too sensitive, as in touch synaesthesia and schizophrenia.

      I honestly don’t have a definition of consciousness yet. I actually think there are several kinds or levels of consciousness and people tend to mix them up, but it’s all still a mystery to me. What I’m interested in is what conditions and properties are necessary for these various kinds of consciousness to exist. I’m particularly interested in how we are able to have thoughts about things – to disconnect ourselves from the outside world and construct a narrative in our imaginations. I think I understand a lot of that now, so I’m hoping that when I’ve finally made a working example I can challenge the philosophers to tell me whether it’s conscious or not, and if not, why not!

  27. Pingback: Just an Idea: Google Human- Google Earth for the Human Body - Ted Curran.net

  28. Pingback: Case Study: Creatures – Tom Battey
