Mappa Psyche

I’m kind of feeling my way, here, trying to work out how to explain a lifetime of treading my own path, and the comments to yesterday’s post have shown me just how far apart we all wander in our conceptual journey through life. It’s difficult even to come to shared definitions of terms, let alone shared concepts. But such metaphors as ‘paths’ and ‘journeys’ are actually quite apt, so I thought I’d talk a little about the most important travel metaphor by far that underlies the work I’m doing: the idea of a map.

This is trivial stuff. It’s obvious. BUT, the art of philosophy is to state the blindingly obvious (or at least, once someone has actually stated it, everyone thinks “well, that’s just blindingly obvious; I could have thought of that”), so don’t just assume that because it’s obvious it’s not profound!

So, imagine a map – not a road atlas but a topographical map, with contours. A map is a model of the world. It isn’t a copy of the world, because the contours don’t actually go up and down and the map isn’t made from soil and rock. It’s a representation of the world, and it’s a representation with some crucial and useful correspondences to the world.

To highlight this, think of a metro map instead for a moment (I think the London Underground map was the first of its kind). A metro map is a model of the rail network, but unlike a topographic map it corresponds to that network in only one way – stations that are connected by lines on the map are connected by rails underground. In every other respect the map is a lie. I’m not the only person to have found this out the hard way, by wanting to go from station A to station B and spending an hour travelling the Tube and changing lines, only to discover when I got back to the surface that station B was right across the street from station A! A metro map is an abstract representation of connectivity and serves its purpose very well, but it wouldn’t be much use for navigating above ground.

A topographical map corresponds to space in a much more direct way. If you walk east from where you are, you’ll end up at a point on the map that is to the right of the point representing where you started. Both kinds of map are maps, obviously, but they differ in how the world is mapped onto them. Different kinds of mapping have different uses, but the important point here is that both retain some useful information about how the world works. A map is not just a description of a place, it’s also a description of the laws of geometry (or in the case of metro maps, topology). In the physical world we know that it’s not possible to move from A to B without passing through the points in-between, and this fact is represented in topographical maps, too. Similarly, if a map’s contours suddenly become very close together, we know that in the real world we’ll find a cliff at this point, because the contours are expressing a fact about gradients.
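
To make the difference concrete, here’s a toy sketch in Python (the stations and coordinates are invented for illustration): the same tiny network represented metro-style, as pure connectivity, and topographically, as coordinates that preserve geometry.

    # Metro-map style: pure connectivity. The only fact preserved is which
    # stations are joined by rails; distances and directions are gone.
    metro = {
        "A": {"B"},
        "B": {"A", "C"},
        "C": {"B"},
    }

    # Topographic style: coordinates in metres. Geometry is preserved, so we
    # can recover facts the metro map throws away.
    positions = {"A": (0.0, 0.0), "B": (5000.0, 3000.0), "C": (40.0, 10.0)}

    def straight_line_distance(p, q):
        (x1, y1), (x2, y2) = positions[p], positions[q]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    # On the metro map, A and C are two stops apart (via B). In real space
    # they are about 41 metres apart -- right across the street.
    print(straight_line_distance("A", "C"))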

So a map is a model of how the world actually functions, albeit at such a basic level that it might not even occur to you that you once had to learn these truths for yourself, by observation and trial-and-error. It’s not just a static representation of the world as it is, it also encodes vital truths about how one can or can’t get from one place to another.

And of course someone has to make it. Actually moving around on the earth and making observations of what you can see allows you to build a map of your experiences. “I walked around this corner and I saw a hill over there, so I shall record it on my map.” A map is a memory.

Many of the earliest maps we know of have big gaps where knowledge didn’t exist, or vague statements like “here be dragons”. And many of them are badly distorted, partly because people weren’t able to do accurate surveys, and partly because the utility of mapping the world at a uniform scale hadn’t completely crystallized in people’s minds yet (in much the same way that early medieval drawings tend to show important people as larger than unimportant ones). So maps can be incomplete, inaccurate and misguided, just like memories, but they still have utility and can be further honed over time.

Okay, so a map is a description of the nature of the world. Now imagine a point or a marker on this map, representing where you are currently standing. This point represents a fact about the current state of the world. The geography is relatively fixed, but the point can move across it. Without the map, the point means nothing; without the point, the map is irrelevant. The two are deeply interrelated.

A map enables a point to represent a state. But it also describes how that state may change over time. If the point is just west of a high cliff face, you know you can’t walk east in real life. If you’re currently at the bottom-left of the map you know you aren’t going to suddenly find yourself at the top-right without having passed through a connected series of points in-between. Maps describe possible state transitions, although I’m cagey about using that term, because these are not digital state transitions, so if you’re a computery person, don’t allow your mind to leap straight to abstractions like state tables and Hidden Markov Models!

And now, here’s the blindingly obvious but really, really important fact: If a point can represent the current state of the world, then another point can represent a future state of the world; perhaps a goal state – a destination. The map then contains the information we need in order to get us from where we are to where we want to go.

Alternatively, remembering that we were once at point A and then later found ourselves at point B enables us to draw the intervening map. If we wander around at random we can draw the map from our experiences, until we no longer have to wander at random; we know how to get from where we are to where we want to go. The map has learned.
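
Purely as an illustration of that last point – and with the caveat below that these aren’t really digital state transitions – here is a minimal sketch in Python: wander at random, record the transitions we experience, and then let the learned map answer routing questions. The place names are invented.

    from collections import defaultdict, deque

    transitions = defaultdict(set)   # the map: place -> places reachable in one step

    def remember(a, b):
        """We were at A and later found ourselves at B: record it on the map."""
        transitions[a].add(b)
        transitions[b].add(a)        # assume we could also walk back the way we came

    def route(start, goal):
        """Breadth-first search over remembered transitions."""
        frontier, came_from = deque([start]), {start: None}
        while frontier:
            here = frontier.popleft()
            if here == goal:         # reconstruct the path back to the start
                path = []
                while here is not None:
                    path.append(here)
                    here = came_from[here]
                return path[::-1]
            for nxt in transitions[here]:
                if nxt not in came_from:
                    came_from[nxt] = here
                    frontier.append(nxt)
        return None                  # no known way there (yet)

    # Wander at random, remembering what we experienced...
    for a, b in [("camp", "river"), ("river", "hill"), ("hill", "cliff"), ("river", "woods")]:
        remember(a, b)

    print(route("camp", "woods"))    # ['camp', 'river', 'woods'] -- the map has learned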

Not only do we know how to get from where we are to where we want to go, but we also know something about where we are likely to end up next – the map permits us to make predictions. Furthermore, we can contemplate a future point on the map and consider ways to get there, or look at the direction in which we are heading and decide whether we like the look of where we’re likely to end up. Or we can mark a hazard that we want to avoid – “Uh-oh, there be dragons!”. In each case, we are using points on the map to represent a) our current state, and b) states that could exist but aren’t currently true – in other words, imaginary states. These may be states to seek, to avoid or otherwise pay attention to, or they might just be speculative states, as in “thinking about where to go on vacation”, or “looking for interesting places”, or even simply “dropping a pin in the map, blindfold.” They can also represent temporarily useful past states, such as “where I left my car.” The map then tells us how the world works in relation to our current state, and therefore how this relates functionally to one of these imagined states.

By now I imagine you can see some important correspondences – some mappings – between my metaphor and the nature of intelligence. Before you start thinking “well that’s blindingly obvious, I want my money back”, there’s a lot more to my theories than this, and you shouldn’t take the metaphor too literally. To turn this idea into a functioning brain we have to think about multiple maps; patterns and surfaces rather than points; map-to-map transformations with direct biological significance; much more abstract coordinate spaces; functional and perceptual categorization; non-physical semantics for points, such as symbols; morphs and frame intersections; neural mechanisms by which routes can be found and maps can be assembled and optimized… Turning this metaphor into a real thinking being is harder than it looks – it certainly took me by surprise! But I just wanted to give you a basic analogy for what I’m building, so that you have something to place in your own imagination. By the way, I hesitate to mention this, but analogies are maps too!

I hope this helps. I’ll probably leave it to sink in for a while, at least as far as this blog is concerned, and start to fill in the details later, ready for my backers as promised. I really should be programming!


31 Responses to Mappa Psyche

  1. Karl says:

    Very thought provoking. And when we see others around us, walking with an empty coffee cup or turning into a residential street, do we not orient them within our own topological map as a means of deduction or anticipation of what they must be doing? Speaking of which, my own cup has run dry…

  2. Ben Turner says:

    I feel as though you might be using the map metaphor in two ways here:
    1) in the way that is captured by Alfred Korzybski’s phrase “The map is not the territory”, i.e., the idea that a “map” is nothing more than a model of something… typically, some part of physical space, but more generally, anything. This map should bear some resemblance to the thing it’s meant to represent, but as you point out in your two example maps, the information that’s present and the way it’s encoded can vary wildly. The analogue to imagination is straightforward: if I want to think about what I’ll have for lunch, I hardly have to build and run a model of the world that includes things like how much vertical distance there is between each stair in my staircase; however, if I want to walk downstairs in the dark, that’s very good information to include in my model.
    2) Towards the end, what you were writing seemed most applicable to memory, insofar as you are treating the map as a place where you can leave yourself markers for possible future states. However, this seems like a very different map than the one I described above; it’s more akin to the captain’s log, I’d say, in that there is nothing intrinsic to the model (i.e., the map) that gives it the function of memory. What I mean is, the first meaning of a map allows you to run simulations, etc etc; however, the second function is more a matter of noting that a particular simulation led to a desirable, or undesirable, outcome, given your current goals. In this case, you don’t need to keep track of the whole simulation, just the starting parameters that led to the given outcome. Here, this log of starting parameters really has no bearing on the not-the-territory map, because it isn’t meant to. Really, it essentially comprises your various motivational states, and has nothing to do with “imagination” except to the degree that, in order to make any entries at all, you have to have the ability to run simulations using the not-the-territory map, the results of which you store in this memory-map.

    Also, since you used the phrase “possible state transitions”, I HAVE to include this passage from Anathem (pink nerve-gas-farting dragons are being used as an example of something everyone generally knows they don’t have to worry about on a daily basis):

    ‘So it is an intrinsic feature of human consciousness—this filtering ability.’ […]
    ‘What then is the criterion that the mind uses to select an infinitesimal minority of possible outcomes to worry about?’ Orolo asked. […]
    ‘It is a Hemn space—a configuration space—argument,’ I blurted, before I’d even thought about it. […]
    ‘There’s no way to get from the point in Hemn space where we are now, to one that includes pink nerve-gas-farting dragons, following any plausible action principle. Which is really just a technical term for there being a coherent story joining one moment to the next. If you simply throw action principles out the window, you’re granting the world the freedom to wander anywhere in Hemn space, to any outcome, without constraint. It becomes pretty meaningless. The mind—even the sline mind—knows that there is an action principle that governs how the world evolves from one moment to the next—that restricts our world’s path to points that tell an internally consistent story. So it focuses its worrying on outcomes that are more plausible…’

    • stevegrand says:

      Hi Ben, Love the quote!

      I’m not sure that I understand your second kind of map, but what I have in mind is largely the first kind. I think.

      The territory of the map is a model of the world – how one thing leads to another. Activity on the map then represents one of several kinds of state of that model. Both the territory AND the activity (when it is persistent) represent kinds of memory.

      I probably shouldn’t have been so cavalier with the word “memory”, but I didn’t want to get into that too far just yet. First of all, a bit more detailed information: You have to think of the cortex as a patchwork of hierarchical maps, with each one mapping whatever statistical regularities it can find, given its inputs from and outputs to other maps, and each one having both perceptual and functional properties.

      So the terrain of any one of these maps is partly a perceptual categorization of its inputs, which I think we could fairly call semantic memory. But it’s a bidirectional structure that also learns how to make use of that memory, so in that aspect it’s procedural memory.

      Then what I describe above as a point or marker can be persistent. If you decide on a goal then that goal needs to be self-reinforcing and have inertia. In the execution of that goal you sometimes need a persistent representation of sub-goals or information deliberately gathered by the senses. So that’s working memory – a self-maintaining pattern of activity on the map, instead of the topology of the map itself.

      What I don’t have a mechanism for yet is episodic memory. In real brains this seems to be persisted in the hippocampal system and then to be absorbed later by the neocortex, perhaps as semantic memory but with associative links that enable its reconstruction in episodic form.

      So, to summarize: Episodic memory I can’t account for yet; procedural and semantic memory are represented by the self-organised arrangement of feature detectors in the map itself (yang and yin layers respectively); working memory (intentions, attention, recently recorded observations, symbolic tokens) is represented by patterns of activity superimposed on those maps.

      Does that fuse together the two ideas? I see the map as the model of the world and the activity as the various states of that model, but all of them happen on the same real-estate. A memory of where I left my keys is persistent activity of one kind; a ‘memory’ of what I intend to do next is persistent activity of a different kind, but mapped out on the same territory. A memory of how to reach out in a given direction is encoded in the map itself – it is the territory. The territory gives the activity meaning.
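
      As a minimal sketch of that summary (the sizes and names here are my illustrative assumptions, not the real implementation), the territory and its overlays might share one geography like this:

          import numpy as np

          SIZE = 32                                  # a small square map

          # Terrain: slowly-learned feature weights -- procedural/semantic memory.
          terrain = np.random.rand(SIZE, SIZE, 8)

          # Activity: fast overlays superimposed on the SAME geography.
          activity = {
              "perceived": np.zeros((SIZE, SIZE)),   # the current state of the world
              "intended":  np.zeros((SIZE, SIZE)),   # a goal marker -- working memory
          }

          def mark(layer, x, y):
              """Drop a self-maintaining blob of activity on one overlay."""
              activity[layer][x, y] = 1.0

          mark("perceived", 5, 5)                    # where I left my keys
          mark("intended", 20, 20)                   # what I intend to do next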

      Hopefully all this will become clearer eventually!

      • Ben Turner says:

        Hi Steve – that does help. I suspect we’re talking about the same things, and simply visualizing them in different ways. Though I hate the trend in my field (cognitive neuroscience) of modularizing every conceivable function, I suspect its influence has infected me somewhat, because I imagine a specialized network/system that is responsible for the first type of map, i.e., building models and running simulations, which interfaces with a separate set of systems that are responsible for keeping track of the inputs to and outcomes of those simulations (some of which get special tags, presumably because the simulation was actually carried out in the real world). You seem to be ascribing both functions to a single sort of system that carries out both in parallel – which of course they are, at the level of system=brain. Really, I am absolutely of the opinion that trying to define brain region X as carrying out function Y, but not Z, is asinine, if only because the level of description (i.e., brain networks or regions) is arbitrary; the reductio ad absurdum case of talking about neuron 19,071,156,288 carrying out some list of hundreds of processes specific to that neuron scares me away from the modular approach.

        I’m also interested in your difficulties with episodic memory. My lab focuses heavily on procedural and, somewhat less heavily, working memory, and I don’t consider myself a “memory” researcher, so I’m not that familiar with the state of the field on declarative memory. However, when I think of episodic memory, I see it as belonging to the same system as the one for creating plans… in fact, I remember seeing research that showed that similar brain areas (again, caveat emptor) were involved in planning for the future as in remembering particular past events. In essence, all you have to do is store the parameters that existed in the world at the time of the memory, and then the general-purpose modeler/simulator “creates” the memory. Again, not too familiar with the research, but surely Loftus’ findings are compatible with this sort of notion that there is nothing sacrosanct or infallible about episodic memory.

      • stevegrand says:

        That’s very interesting… Hmm… I’ll have to think about the relationship between PFC, planning and episodic memory. It kind of makes sense and kind of doesn’t, in relation to my model.

        Yes, I’m ascribing both to a single system. I’m basically suggesting that Brodmann areas, such as they are, represent the boundaries of largely self-organized but sometimes partially genetic “modules”, derived from the same basic cortical cytoarchitecture, in which geography is the primary currency. In other words, on the cortical sheet position describes meaning, while different “layers” of activity qualify that meaning for different purposes.

        So if someone is seeing a tea cup, a certain pattern of activity will develop across certain cells (in a variety of maps). If someone imagines a tea cup, much the same pattern of activity will arise on the cortical sheet, but in different cells.

        If that were not true, then imagining and perceiving a tea-cup would require learning about cups (not just their perceptual attributes but their affordances and everything else) twice. And it would leave us with no obvious role for the “imagining a tea-cup module”. Whereas if the two are superimposed on the same geography, not only does learning to recognize a tea cup automatically allow us to imagine one, it also gives a rationale for why we might want to.

        A predominantly perceptual schema like a tea cup isn’t a good example, but if you think about predominantly motor factors it makes more sense. (It makes sense for tea cups too, but less obviously). For simplicity’s sake (and because this is the way I’m doing it for my game) imagine that a region in primary motor cortex has become organized by head angle, such that a peak of activity in the center of the map means “straight ahead”, and adjacent angles are represented by adjacent positions on the map.

        If my head is *at* a given angle X, I’m hypothesizing that my motor cortex records this present state as a peak of activity at the appropriate spot on the map surface, but in a specific “layer” of cells. If I *want* it to move to angle Y, I do this by energizing the same place that would be active if my head was already at angle Y, except this time I energize the intention layer instead of the perception layer. So 2D position specifies an angle, while the layer of cells that is firing says whether that angle is the perceived state or the intended state. The task of the map is then to bring the present state into line with the intended state – i.e. the map acts as a servo.
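
        As a toy sketch of that servo idea (all numbers invented for illustration – this is not the game’s actual code): one axis of map positions encodes head angle, two activity layers mark the perceived and intended states, and the mismatch between their peaks drives the muscles.

            import numpy as np

            angles = np.linspace(-90, 90, 181)        # one map position per degree

            def peak(centre, width=10.0):
                """A broad bump of activity centred on an angle (broadly-tuned cells)."""
                return np.exp(-((angles - centre) ** 2) / (2 * width ** 2))

            perceived_angle, intended_angle = 30.0, -45.0

            for step in range(100):
                perception = peak(perceived_angle)    # layer 1: where the head is
                intention = peak(intended_angle)      # layer 2: where we want it to be
                # Servo rule: motor drive is proportional to the mismatch between
                # the two peaks' positions, read off the map's shared geography.
                error = angles[intention.argmax()] - angles[perception.argmax()]
                perceived_angle += 0.2 * error        # the muscles move the head a little
                if abs(error) < 1.0:
                    break

            print(round(perceived_angle, 1))          # settles near -45.0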

        The geography of the map thus links both perception and intention. I think I can also integrate attention, expectation and working memory / binding. The motor map above is really sensorimotor – it drives Betz cells to control the head muscles, but it also needs proprioceptive input. Both bottom-up and top-down layers of this require feature analysis and collaborate to self-organize the map’s coordinate space. My hypothesis is that the earliest cortical maps were sensorimotor and largely autonomous, rather like many thalamic nuclei, but evolution discovered it could treat lower maps as if they were abstracted sensory systems and motor systems, and so produced a hierarchy.

        Maps are thus modules, some sensorimotor and some more abstract. But only for some of them would it be possible to say what they “do”. Many will perform several tasks that happen to require the same coordinate frame and abstraction level. Some will be in concrete coordinate frames (tonotopic, retinotopic, somatotopic) and some in abstract frames (types of facial expression, ways of grasping, ordinal numbers). Some will just represent messy intermediate coordinate frames that make it possible to morph data from one frame to another.

        The prefrontal cortex is a bit of a special case in this model. I see it as fundamentally the same as the rest, but with a greater tendency to persistent activity and a more diffuse I/O, which is basically arranged in the form of a map of the other maps. These features make it possible to form highly conditional and ad hoc plans, compared to the stylized responses of lower maps. But I’m hand-waving a lot here because I have to build the rest of cortex first!

        So that’s my hypothesis in a nutshell. Geography determines meaning, and various “overlays” on the same geography enable the brain to make use of that shared meaning in different ways.

        Of course there are other kinds of modularity too, in limbic areas and thalamic nuclei, etc. I’m not suggesting this is the whole system. I’m just suggesting that this is a generalized but self-specializing machine that supervenes over an evolutionarily older, previously autonomous mechanism. It’s very unlike a computer, in which data is fetched from memory and passed into an adder, etc. I think re-routing of signals like that is possible and necessary, but the “modules” involved are all made of the same stuff configured in different ways.

  3. Dranorter says:

    Have fun coding! As I’m sure you realize, the public relations side of things will take up all your time if you let it, because of those of us who comment on everything you write. :)

    That said, I must ask: does this mean your brain model will be capable of metaphors/analogies? I hope your game world will be complicated enough to be a challenge to such a capable mind!

    • stevegrand says:

      You’re right there! I’ve done nothing but answer messages and email since the kickstarter thing began! But it’s nice to have these conversations. Tomorrow I will steel myself and get some work done!

      I hope the world will be complex enough too – I’m working on that but I think I can hit the right balance. As for metaphors and analogies, I look forward to finding out! At the moment I don’t see how that’s going to happen, and it bugs me. But maybe when I get further into it I’ll start to get a sense of it. I’d very much like it to be possible, but right now I can’t visualize the system in enough detail to know whether analogical reasoning is going to happen spontaneously or be something I can cater for. Exciting, innit? ;-)

  4. Bindy says:

    Steve,

    We are all obsessed with Quake maps down here and plotting our own destinies upon them. http://quake.crowe.co.nz/

    But for more contemplative times….

    See your very own Hand Drawn Maps

    http://www.handmaps.org/

    Mapping Controversies

    http://www.mappingcontroversies.net/

    • stevegrand says:

      Ha! I’m sure you are, Bindy! Were you near the quake? I meant to ask but then it completely slipped my mind. Sorry – been rather distracted by all this fund-raising.

      Thanks for the links! The controversy mapping stuff looks very interesting – an entire subject I’d never even heard of! I’ll pass that on to my first wife, whose PhD is on open science and public engagement.

      Hope you’re well and everything is standing still around you.

  5. Ben Turner says:

    Well, we’re barred from continuing that thread (maybe for the best?), but thanks for the response. I definitely have a better sense now for the full meaning of the map metaphor. I’ll have to mull over everything you said, but at first blush, I like it all a lot. Of course, I also feel guilty that you’re taking so much of your time responding to my little niggles… I’ve already bumped up my kickstarter pledge once, but I might have to do so again ;-)

  6. Eric Collins says:

    I’d like to add one more kind of map to your considerations, if I may. When I consider the pattern recognition capabilities of the brain, I often think of neurons firing in something more like constellations in response to certain stimuli and/or concepts. The use of the constellation map could conceivably allow for multiple concepts to occupy the same cortical real-estate by means of a superposition-type principle. The “image” produced from the combined constellation map might itself be considered as a point on one of your state maps, assuming that your state map exists in some hypothetical space with a very large number of independent dimensions.

    The reason I prefer the constellation map to the state space map is that the state space model may be less robust in the face of minor perturbations to the input parameters (i.e. the currently perceived or imagined state). That is to say that no two moments are ever exactly alike. Trying to find your place on (or a path through) the state map that is insensitive to common variations in circumstances may end up being a very difficult nut to crack. The ground continues to shift under your feet, so to speak.

    With the constellation type approach, one could conceivably navigate the state space by learning to recognize that certain key patterns are present in the perception/imagination input space, and then remembering that taking certain actions has a high probability (based on past experience) of bringing other familiar patterns into the perception/imagination space. Thus, it is the sequence of patterns generated in the brain, and how they vary over time due to natural phenomena and personal intervention, that are the key to understanding behavior (both intelligent and otherwise).

    I’m sure I’m not the first to think of it this way. My thinking has been heavily influenced by your work as well as that of Jeff Hawkins and a number of other sources that I can’t call to mind at the moment. One place I have found that consistently presents interesting tidbits of cognitive science in the guise of a very enjoyable radio program is RadioLab. A recent episode called ‘Lost and Found’ has some interesting discussion on our sense of place and space and how we manage to get around without getting lost. Another program which you might find interesting is called ‘Words’, which concerns how our use of language affects how we think and even what we are capable of thinking.

    Thank you for your continued efforts in this area. I remember purchasing the original Creatures back in 1997 and enjoying it very much. To this day, my children still like to play all four versions on a fairly regular basis (C1, C2, C3/DS, CA). I eagerly look forward to your next creation.

    • stevegrand says:

      Thanks, Eric. I’ll have to think about that. I have some particular requirements for these maps because they’re just a facet of an integrated system, but the way I’m thinking about them now might be more brittle than I’d like. On the other hand, what I’m doing at the neural level might actually be fairly close to what you suggest. I’ve been away from it for a while so I’ll have to get my head round it again, but I’ll keep in mind what you say.

    • Ben Turner says:

      It seems like, functionally, a constellation map could easily be represented in a “state-space”-y way while still being flexible in terms of the exact inputs. In fact, you may be thinking of it in exactly the way I do, which is typically in a very high-dimensional space where every cluster of neurons forming a computational unit gets its own dimension, so every possible brain state (along with a number of impossible ones) can be represented as a point in this high-dimensional space (where the location on each dimension corresponds to some instantaneous measure of that cluster’s “activity”). This will reduce your “constellation” to a single point, but similar constellations will lie close together in this space, which corresponds to the sort of robustness you mention. Anyhow, assuming Steve is using probabilistic, broadly-tuned neurons, and coding variously more complicated combinations of features in some sort of hierarchy, it seems likely that this sort of constellation coding will fall out naturally. Likewise, connections between constellations will simply fall out of whatever plasticity mechanisms he gives his brain (I’d vote for at least two learning rules, one of which is relatively insensitive to feedback and the other of which depends critically on it, but hey, it’s not my brain…)
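
      For what it’s worth, here’s a tiny sketch of that reduction (sizes arbitrary): each of N units gets one dimension, a constellation becomes a single point, and a perturbed constellation stays near the original while an unrelated one doesn’t.

          import numpy as np

          rng = np.random.default_rng(0)
          N = 1000                                        # units, hence dimensions

          teacup = (rng.random(N) < 0.05).astype(float)   # a sparse constellation

          # The "same" concept presented again, with a few units flipped.
          noisy = teacup.copy()
          flip = rng.choice(N, size=20, replace=False)
          noisy[flip] = 1.0 - noisy[flip]

          unrelated = (rng.random(N) < 0.05).astype(float)

          # The perturbed constellation lies far closer to the original than an
          # unrelated pattern does -- the robustness to perturbation in question.
          print(np.linalg.norm(teacup - noisy))      # ~4.5
          print(np.linalg.norm(teacup - unrelated))  # ~10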

      • Eric Collins says:

        Broadly speaking, you can almost always map the activity level of a network of N discrete neurons onto an N-dimensional hyperspace. The current state of the network is then just a vector in this space. Unfortunately, this does not really tell you much more than just looking at the network itself. It does give you access to certain metrics, such as the distance between two distinct states, but this seldom adds much additional meaning other than to compare one particular state directly to another in the most trivial manner. There’s no generalization of knowledge about the pattern in this sort of analysis.

        With old-school strong AI, one would attempt to identify certain aspects of the input space (e.g. small, red, round), and then generalize this to higher-level abstractions (e.g. apple), which can then be reasoned about (e.g. apple is food) and applied to goal-directed behavior (e.g. when hungry->eat food->eat apple). This has the advantage of being fairly straightforward to code, but rather brittle in its application (e.g. what do you do with a small, red, rubber ball?)

        What is needed is a way to automatically generate categories based on experience. These categories start out rather broad, but are gradually refined as more input is gathered. One might start out thinking that all small round red objects are something that can be eaten, but as soon as the first object which cannot be eaten is discovered, there must be a mechanism in place which would allow one to recognize the differences between this inedible object and all of the previous edible objects and form separate categories for them.

        I have some ideas on how to accomplish this, but perhaps I should get my own blog rather than clogging up the comments here.

      • Erin says:

        > what do you do with a small, red, rubber ball?

        If apples were the only things you’d ever had experience with in the “small, red, round” categorization, you probably would go right ahead and try to bite the rubber ball. You’d of course quickly find out that “rubber” needs to be added as a new category and that “apple” would need to be further limited to the “not rubber” category.

        On the other hand if you next ran into something that was “small, green, round” and tried to bite it, you may well discover that it also falls into the “apple” category and that your apple category was previously too limited.

        And when you come to “small, orange, round” and bite it, you discover that it IS edible, but that apples and oranges are different on other levels (taste, texture, etc).

        So we’d have a category map something like:
        small,red,round,!rubber -> apple
        small,red,round,rubber -> ball
        small,green,round -> apple
        small,orange,round -> orange

        The real trick is when you then run across a small green rubber ball. You’d want to combine the attribute list for “small,green,round” with the texture of the rubber and figure out that “small,green,round” can be split into the apple/ball categories just like “small,red,round” was, but we wouldn’t need to go as far as biting it this time — that is, the “rubber” attribute is in some sense more powerful than the “small,green,round” attributes and we somehow know that rubber is not food no matter what shape or color it comes in — at least once we’ve had a couple of experiences with rubber.

        And then we can discuss something like plastic apples — very well crafted ones can look (and often feel) identical to real apples. And yet no matter how many times we fail at biting a plastic apple, we certainly don’t start assuming that ALL apples are plastic (though we’ll start assuming all apples from that one particular display are plastic — so there’s a localization aspect in there as well). On the other hand, some of the waxier varieties of apples can start looking plastic (and thus inedible) even when they’re sitting on the grocery store display and you logically know they’re real apples.
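
        In code, the first few steps of that refinement might look like this minimal sketch (the representation – feature sets with negations – is invented purely for illustration):

            categories = [({"small", "red", "round"}, "edible")]   # first guess: apples

            def matches(attrs, features):
                for a in attrs:
                    if a.startswith("!"):          # negated feature must be absent
                        if a[1:] in features:
                            return False
                    elif a not in features:        # required feature must be present
                        return False
                return True

            def predict(features):
                for attrs, outcome in categories:
                    if matches(attrs, features):
                        return outcome
                return "unknown"

            def learn(features, actual):
                """On a wrong prediction, specialise the old rule and add a new one."""
                for i, (attrs, outcome) in enumerate(categories):
                    if matches(attrs, features) and outcome != actual:
                        extras = features - attrs              # e.g. {"rubber"}
                        categories[i] = (attrs | {"!" + e for e in extras}, outcome)
                        categories.append((features, actual))
                        return

            ball = {"small", "red", "round", "rubber"}
            print(predict(ball))          # "edible" -- so we try to bite the ball...
            learn(ball, "inedible")       # ...and split the category on "rubber"
            print(predict(ball))                        # now "inedible"
            print(predict({"small", "red", "round"}))   # still "edible"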

      • Ben Turner says:

        Well, Eric, let me know if you start your own blog so I can come clog up the comments there! I have some thoughts re: apples and such (I’m generally skeptical of the strong AI approach, or even workarounds of it that still end up looking like classical theories of categorization), but I’m too tired to make any coherent arguments right now, and besides, I don’t want to muck up Steve’s pretty blog with some tangential nerd battle about the nature of categorization =)

  7. Gerjan says:

    Hey Steve,

    First of all: I love reading your blogs and I’m very happy your funding is going so well! And I just found the following on New Scientist. Maybe it’s of interest to you.

    http://www.newscientist.com/blogs/shortsharpscience/2011/03/mapping-brain-cell-connections.html

    But what I’d really like to ask you is: what level of intelligence do you hope your Grandroids will be able to achieve? What are you aiming for? I can tell from your blogs that you’re using a lot of ideas that you also used for Lucy, but since there will be more than one Grandroid, you’ll have to divide system resources between them, and an individual Grandroid will, consequently, have less potential than Lucy did. Can you tell us anything about that?

  8. thezeus18 says:

    Hello Sir Grand. You might be interested in this community of singularitarians and their blog sequence on the map not being the territory. Watch out though, they really don’t like emergence.
    And if you tell them that you’re trying to develop an artificial intelligence intuitively (for lack of a better word), they might think you’re going to bring about the end of the world.

    This isn’t blogspam I just like links.

    • stevegrand says:

      > This isn’t blogspam I just like links.

      Heh! I expect you can chain them together to help you get out of mazes.

      Is “community of singularitarians” an oxymoron? Oh, it’s Eliezer! I *half* agree with him about emergence. “Emergent” is a pretty useless term, but “emergent phenomenon” is a very valuable one, imho. Few people grasp this, least of all singularitarians.

      Thanks, I’ll peruse it with interest.

  9. Darian Smith says:

    “So if someone is seeing a tea cup, a certain pattern of activity will develop across certain cells (in a variety of maps). If someone imagines a tea cup, much the same pattern of activity will arise on the cortical sheet, but in different cells.”

    I believe that in the imagining state a subset of the cells normally active when exposed to a real tea cup becomes active, and that explains why you get similar activity in similar areas but at a weaker level. This might also vary from person to person, and could explain why some individuals claim to have such a vivid mental image that it is subjectively indistinguishable from the real thing.

    I was wondering if you’ve come across the “integrated information theory” of consciousness, and what your thoughts are about it (the free SciAm article does a good job of conveying the theory, imo). It approaches one of the feats of the brain I’ve found most fascinating and most dramatic: the fact that it has such a gargantuan number of meaningfully distinct states, and is able to perform meaningful, highly detailed distinctions between them. Impressively, the brain can land in any such state and generate meaningful conscious sensation in less than a small fraction of a second… yet the number of possible states (visual, auditory, olfactory, tactile, distinct and multi-sense-combinatorial conscious possibilities) seems ridiculously big – it seems even bigger than the number of connections.

    Linguists have shown that similar combinatorial explosions can occur in language alone – e.g. the number of sentences comprehensible by a single individual is ridiculously high, possibly higher than the number of atoms in the brain, not to mention the possibilities when multiple languages are learned. (http://clas.mq.edu.au/infinite_sentences/index.html – what’s the code for URL embedding in WordPress?)

    • stevegrand says:

      Yes, although what (very) little I know of IIT doesn’t really seem like it gets us very far. It’s sort of self-evident that consciousness consists of large amounts of integrated information, yet at the same time a party is a room full of people making a lot of noise, but not all rooms full of noisy people are parties. If you see what I mean. There are plenty of parts of my brain that integrate a great deal of information and yet I’m often or always unconscious of them (although for all I know, someone else in my head IS conscious of them). And although my consciousness of the outside world always results from the integration of many streams of information, I can be moderately conscious of my inner daydreams without the bulk of this low-level information being involved. I’m not sure that quantitative theories really get us anywhere and the answer needs to be qualitative, but I don’t really understand IIT well enough to comment.

      The combinatorics are certainly mind-boggling. I worked out the approximate maximum number of states of the brain recently and it’s an absurdly large number. Of course the number of possible states of the observable universe is an even more absurd number, so it’s kind of lucky that a) only an infinitesimal variety of these states ever occurs to a single individual, so brains only need to learn a tiny subset, and b) there’s a smoothness to the arrangement of states that the brain is able to capitalize on in order to achieve a sparse representation.

      At the moment I’m experimenting with one of the key learning rules for my new creatures’ brains and it involves a pretty convolved, holographic representation, in which every neuron in a region contributes to every ‘memory’. I quite surprised myself by how sparse it can afford to be. In the test example I’m playing with, a full representation of the very simple state space would require more than 130,000 connections, but in practice it only starts to degrade badly when the number of connections drops below 100. This happens to be an extremely smooth state space, so it’s hard to extrapolate, but I think it probably shows that the brain uses some neat tricks.
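
      Not the actual learning rule, of course, but here’s a toy in the same spirit (everything invented for illustration): a smooth state space encoded through K shared random “connections”, reconstructed by least squares, holding up surprisingly well as K shrinks.

          import numpy as np

          rng = np.random.default_rng(1)

          x = np.linspace(0, 1, 500)
          state = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)   # a smooth state space

          def reconstruction_error(k):
              # k random features; every feature "contributes to every memory".
              w = rng.normal(scale=10.0, size=k)
              b = rng.uniform(0, 2 * np.pi, size=k)
              phi = np.cos(np.outer(x, w) + b)              # 500 x k design matrix
              coef, *_ = np.linalg.lstsq(phi, state, rcond=None)
              return np.abs(phi @ coef - state).max()

          for k in (200, 100, 50, 10):
              print(k, round(reconstruction_error(k), 3))   # degrades gracefully, then badly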

      What you say about imagery involving a subset of the SAME cells as perception is interesting because it maps onto a discussion I was having yesterday. I think I’ve been wrong in some of my assumptions and might now have a way of doing things that matches your observation. The distinction between imagery and sensation would not be in kind but would result from a separation of a single recursive information flow into two separate or partially separate ones. But I haven’t really got my head around it yet, so I can’t explain it very well. Watch this space, as they say! Thanks for the interesting comments :-)

      • Guest says:

        “although for all I know, someone else in my head IS conscious of them”

        This is really funny and quite interesting. Presumably, if there really is more than one conscious self in the brain, they would be aware of each other (in the same way that we are aware of other people, even if we don’t know what’s going on INSIDE their heads). On the other hand, I very often debate things with myself so why not? (And there it is again!)

      • stevegrand says:

        Heh! Google “split-brain patient” and you’ll see it’s not as far fetched as it seems. Separate the two halves of the brain and you end up with two people who don’t necessarily even have the same tastes and opinions. Of course only one of them can speak, so it’s hard to get in touch with the other, but the other can sometimes draw. I debate things with myself too, and it’s kind of important that each half of the debate has a different slant on things or they’d just agree with each other. Maybe intellectually dogmatic or superficial people just don’t have split personalities who hate each other enough!?! :-)

      • Darian Smith says:

        Regarding split brain patients, iirc, there was one case where one side believed in god and the other did not. Ramachandran rightfully asked what happens then: does one go to heaven and the other not?

        “only an infinitesimal variety of these states ever occurs to a single individual, so brains only need to learn a tiny subset”

        While it is true that this helps with regard to long-term memory capacity, there’s still the issue of short-term/working memory – that is, the ability to generate, in fractions of a second, a consciously distinct experience when exposed to any possible combination of color, shape, sound, touch, odor and taste, whatever it may be. While the brain will only experience a fraction of the possibilities in a lifetime, it appears able to respond to any conceivable combination without trouble in mere moments. This is one of the aspects of the binding problem: besides how the mechanism works at all, how it avoids the combinatorial explosion while preserving the ability to represent and discriminate among an insane number of possibilities at any moment, without notice.

      • stevegrand says:

        > there’s still the issue of short-term/working memory

        That’s a good point, although I’m not sure it’s true. We can discriminate in working memory between multiple states involving things we already know about, but wouldn’t it be true that the vast bulk of the information involved is actually in LTM? It’s not like WM is recording pure sensory data – more like forming temporary associations between existing percepts and concepts at a more or less abstract level. These percepts themselves are formed from complex associations in LTM. To take a linguistic example, we only have to remember the abstract word “key” and an abstract location in space if we want to remember to pick up our keys. We don’t have to remember the myriad associations each concept projects to – what keys look like or are used for, or how to recognize this location. These are already present in LTM. For most of us, if we’re asked to remember a photo and then later draw what we saw, we do so by reconstructing the ‘symbols’ in the drawing, not a raw bitmap.

        Having said that, though, autistic savants can have incredibly prodigious memories for raw percepts, so I’m definitely not knocking the storage capacity, nor its ability to discriminate! My present model definitely can’t compete.

  10. Darian Smith says:

    “For most of us, if we’re asked to remember a photo and then later draw what we saw, we do so by reconstructing the ‘symbols’ in the drawing, not a raw bitmap.

    Having said that, though, autistic savants can have incredibly prodigious memories for raw percepts, so I’m definitely not knocking the storage capacity, nor its ability to discriminate! My present model definitely can’t compete.”

    It is true that few are able to recall in such fine detail even after short moments, but when faced with a new stimulus it does seem as if at each moment we’re presented with a rich, highly detailed array of data, albeit an ephemeral one that usually fades quickly.

    I was researching this recently and came upon this nice paper relating physiological data to a possible explanation:

    http://www.cerco.ups-tlse.fr/~rufin/OriginalPapers/VanRullen-Cognition2008.pdf

    If the preceding is true – two different binding methods available – it would explain how the practically infinite possibilities are handled, and also the limitations that sometimes require serial comparison of features. (E.g. individuals can generally tell the difference automatically even between closely resembling faces of their own race, yet it is said that individuals not heavily exposed to multiracial settings have trouble distinguishing one face from another in other races – they all look alike. Despite this there is a slightly different conscious-perceptual response to each face-stimulus, and if given photos of two individuals one can actually detect the differences by serially scanning the features.)

  11. Darian Smith says:

    Hmmm, I recently saw an interesting documentary called The Colors of Infinity. It talks about what are called mathematical monsters, and the problem of generating the map of a coastline and its dependence on measuring scale.

    The infamous problem of incommensurability (the length of a coastline, where no single measuring stick will do and different sticks give different values; the diagonal of a square of side 1; the circumference and diameter of a circle).

    In The Colors of Infinity it is shown that the problem of incommensurability can be ENTIRELY done away with: a simple discrete procedure can actually generate direct 1-to-1 mappings onto equivalent points on the diagonal, and likewise for the circle. It generates mathematically equivalent circles, but with strange shapes that mathematicians disliked so much they called them mathematical monsters, ignoring this wonderful new way of relating information and continuous change through finite steps and finite definitions.

  12. Steve, I saw the following video from a PS3 product, Quantic Dream’s Kara,

    and would suggest that either your corporation or mine strive to develop such a thing in the real world in the coming decades.

    Collaboration is also possible, as I’m not interested in profits but results.
