Brainstorm 4 – squishing hyperspace

Ok, back to work. I wanted to expand on what I was saying about the cortex as a map of the state of the world, before I get onto the topic of associations.

Imagine the brain as a ten-million-dimensional hypercube. Got that?

Hmm, maybe I should backtrack a bit. Let’s suppose that the brain has a total of ten million sensory inputs and motor outputs (each one being a nerve fiber coming in from the skin, the retina, the ear, etc., or going out to a muscle or gland). For the sake of argument (and I appreciate the dangers in this over-simplification), imagine that each nerve signal can have one of 16 amplitudes. Every single possible experience that a human being is capable of having is therefore representable as a point in a ten-million-dimensional graph, and since we have only 16 points per axis we need only 16 raised to the power of ten million points to represent everything that can happen to us (including all the things we could possibly do to the world, although we probably need to factor in another few quadrillion points to account for our internal thoughts and feelings).
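To put a number on that, here’s a quick back-of-the-envelope check (Python, purely for illustration):

```python
import math

# Number of distinct states of a system with 10,000,000 channels at
# 16 levels each: 16 ** 10_000_000. Just count its decimal digits.
digits = 10_000_000 * math.log10(16)
print(f"16^10,000,000 has about {digits:,.0f} digits")  # ~12,041,200 digits
```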

(If you’re not used to this concept of phase space, imagine that the brain has only two inputs and one output. A three-dimensional graph would therefore be enough to represent every possible combination of those values: the value of input 1 is a distance along the X-axis, input 2 is along the Y-axis and the output value is along the Z-axis. Where these three lines meet is the point that represents this unique state. A change of state is represented by an arrow connecting two points. Everything that can happen to that simplified brain – every experience and thought and reaction it is capable of – can be described by points, lines and surfaces within that space. It’s a powerful way to think about many kinds of system, not just brains. OK, so now just expand that model and imagine it in 10,000,000-dimensional space and you’re in business!)
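In code, the toy version really is that simple – a sketch, with made-up values:

```python
# Toy phase space: two inputs and one output, each quantized to 16 levels.
# A momentary brain state is one point in this 16x16x16 space.
State = tuple[int, int, int]  # (input1, input2, output), each 0..15

state_a: State = (3, 7, 12)   # one experience
state_b: State = (3, 8, 12)   # a slightly different one

# The 'arrow' connecting two points is just their difference.
delta = tuple(b - a for a, b in zip(state_a, state_b))
print(delta)  # (0, 1, 0)
```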

Er, so that’s quite a big number. If each point were represented by an atom, the entire universe would get completely lost in some small dark corner of this space and never be seen again. Luckily for us, no single human being ever actually experiences more than an infinitesimal fraction of it. When did you last stand on one foot, scratching your left ear, looking at a big red stripe surrounded by green sparkles, whistling the first bar of the Hallelujah Chorus? Not lately, I’m guessing. So we only need to represent those states we actually experience, and then only if they turn out to be useful in some way. Of course we don’t immediately know whether they’re going to turn out useful, so we need a way to represent them as soon as we experience them and then forget them again if they turn out to be irrelevant.

Thus far, this is the line of thinking that I used when I designed the Creatures brains. Inside norns, neurons wire themselves up to represent short permutations of input patterns as they’re experienced, and then connect to other neurons representing possible output patterns. Pairs of neurons equate to points in the n-dimensional space of a norn’s brain, but only a small fraction of that possible space needs to be represented in one creature’s lifetime. These representations fade out unless they get reinforced by punishment or reward chemicals, and the neural network learns to associate certain input patterns with the most appropriate output signal. All these experiences compete with each other for the right to be represented, such that only the most relevant remain and old memories are wiped out if more space is needed. There’s also an implicit hierarchy in the representations (due to the existence of simpler permutations) that allows the norns to generalize – they have a hunch about how to react to new situations, based on previous similar ones.
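Here’s a toy sketch of that decay-unless-reinforced bookkeeping – emphatically not the actual Creatures code, just the flavor of it:

```python
# Memories appear on first experience, fade every tick, and are wiped to
# free space unless punishment/reward reinforces them.
strengths: dict[tuple, float] = {}  # experienced (inputs, output) patterns

def experience(pattern: tuple, reinforcement: float = 0.0, decay: float = 0.99) -> None:
    for key in list(strengths):
        strengths[key] *= decay      # all memories fade a little...
        if strengths[key] < 0.01:
            del strengths[key]       # ...and irrelevant ones are forgotten
    # The current pattern is represented as soon as it's experienced.
    strengths[pattern] = strengths.get(pattern, 0.1) + reinforcement
```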

There’s a great deal more complexity to the norns’ brains than this and I managed to solve some quite interesting problems. I’m not sure that anyone else has designed such a comprehensive artificial brain and actually made it work, either before or in the 18 years since. But nevertheless, basically this design was a pile of crap. For one thing, there was no order to this space. Point 1,2,3 wasn’t close to point 1,2,4 in the phase space – the points were just in a list, essentially, and there was no geometry to the space. The creatures’ brains were capable of limited generalization because of the hierarchy (too long a story for now) but I really wanted generalization to fall out of the spatial relationships: If you don’t know what to do in response to situation x,y,z, try stimulating the neighboring points, because they represent qualitatively similar situations and you may already have learned how best to react to them. The sum of these “recommendations” is a good bet for how to react to this novel situation. Sometimes this won’t be true, in fact, and that requires the brain to draw boundaries between things that are similar and yet require different responses (a toy alligator is very similar to a real one, and yet…). This is called categorization (and comes in two flavors: perceptual and functional – my son Chris did his PhD on functional categorization). Anyway, basically, we need the n-dimensional phase space to be collapsed down (or projected) into two dimensions (assuming the neural network is a flat sheet), such that representations of similar situations end up lying near to each other.
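Here’s the neighbor-polling idea in toy form; note that it presupposes exactly the geometry the norn brains lacked:

```python
import itertools

# If a novel point has no learned response, poll its neighbors in the
# (hypothetical) discretized space and take the consensus recommendation.
learned = {(1, 2, 3): "flee", (1, 2, 5): "flee", (9, 9, 9): "eat"}

def recommend(point: tuple):
    if point in learned:
        return learned[point]
    votes: dict[str, int] = {}
    for offset in itertools.product((-1, 0, 1), repeat=len(point)):
        action = learned.get(tuple(p + o for p, o in zip(point, offset)))
        if action is not None:
            votes[action] = votes.get(action, 0) + 1
    return max(votes, key=votes.get) if votes else None

print(recommend((1, 2, 4)))  # 'flee' -- borrowed from similar situations
```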

(At this point, some of you may be astute enough to ask: why collapse n dimensions down to two at all? The human cortex is a flat sheet, so biology has little choice, but we can represent any number of dimensions in a computer with as much ease as two. This is true, but only in principle. In practice, computers are nowhere near big enough to hold a massively multi-dimensional array of 16 elements per dimension (say we only need a mere one hundred dimensions – that’s already 2×10^111 gigabytes!), so we have to find some scheme for collapsing the space while retaining some useful spatial relationships. It could be a list, but why not a 2D surface, since that’s roughly what the brain uses and hence we can look for hints from biology?)
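The arithmetic, for the skeptical (assuming one byte per element):

```python
# Dense storage for a mere 100 dimensions at 16 levels each:
# 16**100 bytes, i.e. about 2.4e+111 gigabytes.
elements = 16 ** 100
print(f"{elements / 2**30:.1e} GB")  # 2.4e+111 GB
```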

There is no way to do this by simple math alone, because to represent even three dimensions on a two-dimensional surface, the third dimension needs to be broken up into patches and some contiguity will be lost. For instance, imagine a square made from 16×16 smaller squares, each of which is made from 16 stripes. This flattens a 16×16×16 cube into two dimensions. But although point 1,1,2 is close to point 1,1,3 (they’re on neighboring stripes), it’s not close to point 1,2,2, because other stripes get in the way. You can bring these closer together by dividing the space up in a different way, but that just pushes other close neighbors apart instead. Which is the best arrangement as far as categorization and generalization are concerned? One arrangement might work best in some circumstances but not others. When you try to project a 16×16×16×16×16×16×16-point hypercube into two dimensions this becomes a nightmare.
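To make the stripes concrete, here’s that particular flattening in code – the two checks spell out exactly where contiguity survives and where it breaks:

```python
# Flatten a 16x16x16 cube onto a sheet: a 16x16 grid of cells indexed by
# (x, y), with the 16 z values laid out as horizontal stripes in each cell.
def stripe_row(x: int, y: int, z: int) -> int:
    """Row of the stripe for point (x, y, z); its columns span 16*x .. 16*x + 15."""
    return 16 * y + z

# Neighboring z values land on neighboring stripes...
assert abs(stripe_row(1, 1, 3) - stripe_row(1, 1, 2)) == 1
# ...but neighboring y values end up 16 rows apart: other stripes intervene.
assert abs(stripe_row(1, 2, 2) - stripe_row(1, 1, 2)) == 16
```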

The real brain clearly tries its best to deal with this problem by self-organizing how it squishes 10,000,000 dimensions into two. You can see this in primary visual cortex, where the 2D cortical map is roughly divided up retinotopically (i.e. matching the two-dimensional structure of the retina, and hence the visual scene). But within this representation there are whorls (not stripes, although stripes are found elsewhere) in which a third and fourth dimension (edge-orientation and direction of motion) are represented. Orientation is itself a collapsing down of two spatial dimensions – simply recording the angle of a line instead of the set of points that make it up (that’s partly what a neuron does – it describes a spatial pattern of inputs by a single point). Here we see one of the many clever tricks that the brain uses: The visual world (at least as far as the change-sensitive nature of neurons is concerned) is made up of line segments. Statistically, these are more common than other arbitrary patterns of dots. So visual cortex becomes tuned to recognize only these patterns and ignore all the others (at least in this region – it probably represents textures, etc. elsewhere). The brain is thus trying its best, not only to learn the statistical properties and salience of those relatively few points its owner actually visits in the ten-million-dimensional world of experience, but also to represent them in a spatial arrangement that best categorizes and associates them. It does this largely so that we don’t have to learn something all over again, just because the situation is slightly different from last time.

So, finding the best mechanism for projecting n-dimensional space into two or three dimensions, based on the statistics and salience of stimuli, is part of the challenge of designing an artificial brain. That much I think I can do, up to a point, although I won’t trouble you with how, right now.
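For the curious, the textbook starting point for this kind of statistics-driven projection is Kohonen’s self-organizing map. A minimal sketch – not the mechanism I actually plan to use, just the standard illustration:

```python
import numpy as np

# Kohonen self-organizing map: project n-D inputs onto a 2-D sheet so that
# similar inputs come to be represented at nearby grid locations.
rng = np.random.default_rng(0)
grid_h, grid_w, n_dims = 20, 20, 100
weights = rng.random((grid_h, grid_w, n_dims))  # one n-D tuning per sheet cell
ys, xs = np.mgrid[0:grid_h, 0:grid_w]

def train_step(x: np.ndarray, lr: float = 0.1, radius: float = 3.0) -> None:
    # Best-matching unit: the cell whose tuning is closest to the input.
    dist = np.linalg.norm(weights - x, axis=2)
    by, bx = np.unravel_index(dist.argmin(), dist.shape)
    # Pull the winner and its sheet neighbors toward the input, so that
    # neighboring cells come to prefer similar inputs.
    sheet_d2 = (ys - by) ** 2 + (xs - bx) ** 2
    influence = np.exp(-sheet_d2 / (2 * radius**2))[..., None]
    weights[:] += lr * influence * (x - weights)

for _ in range(1000):
    train_step(rng.random(n_dims))
```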

I will just mention in passing that there’s a dangerous assumption that we should be aware of. The state space of the brain is discrete, because information arrives and leaves via a discrete number of nerve fibers. The medium for representing this state space is also discrete – a hundred billion neurons. HOWEVER, this doesn’t mean the representation itself is discrete. I suspect the real brain is so densely wired that it approximates a continuous medium, and this is important for a whole host of things. It’s probably very wrong to implicitly equate one neuron with one point in the space or one input pattern. Probably the information in the brain is stored holistically, and each neuron makes a contribution to multiple representations, while each representation is smeared across many (maybe very many) neurons. How much I need to, or can afford to, take account of this for such a pragmatic design remains to be seen. It may be an interesting distraction or it may be critical.
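The classic toy illustration of such smeared storage is Hopfield-style superposition, where each memory lives in the correlations between many units and each unit serves many memories. A sketch, purely to show the principle:

```python
import numpy as np

# Store several patterns superimposed in ONE weight matrix (a sum of outer
# products). No unit 'is' a memory; each contributes to all of them.
rng = np.random.default_rng(1)
n = 64
patterns = [rng.choice([-1, 1], size=n) for _ in range(3)]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

# Recall: a partly corrupted cue settles back onto the stored pattern.
cue = patterns[0].copy()
cue[:10] *= -1                             # damage part of the memory
for _ in range(5):
    cue = np.where(W @ cue >= 0, 1, -1)    # let the network settle
print(np.array_equal(cue, patterns[0]))    # True, with this few patterns
```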

Anyway, besides this business of how best to represent the state space of experience, there are other major requirements I need to think about. In Creatures, the norns were reactive – they learned how best to respond to a variety of situations, and when those situations arose in future, this alone would trigger a response. They were thus stimulus-response systems. Yeuch! Nasssty, nassty behaviourist claptrap! Insects might (and only might) work like that, but humans certainly don’t (except in the more ancient parts of our brains). Probably no mammals do, nor birds. We THINK. We have internal states that change over time, even in the absence of external changes. Our thoughts are capable of linking things up in real-time, to create routes and plans and other goal-directed processes. Our “reactions” are really pre-actions – we don’t respond to what’s just happened but to what we believe is about to happen. We can disengage from the world and speculate, hope, fear, create, invent. How the hell do we do this?

Well, the next step up, after self-organizing our representations, is to form associations between them. After that comes dynamics – using these associations to build plans and speculations and to simulate the world around us inside the virtual world of our minds. This post has merely been a prelude to thinking about how we might form associations, how these relate to the underlying representations, what these associations need to be used for, and how we might get some kind of dynamical system out of this, instead of just a reactive one. I just wanted to introduce the notion of state space for those who aren’t used to it, and talk a little about collapsing n-dimensional space into fewer dimensions whilst maximizing utility. Up until now I’ve just been bringing you up to speed. From my next post onward I’ll be feeling my own way forward. Or maybe just clutching at straws…



20 Responses to Brainstorm 4 – squishing hyperspace

  1. Terren says:

    Regarding holistic encodings… A guy named Paul Pietsch wrote a book called Shufflebrain that describes his experiments with salamanders, in which he literally cut up and shuffled their brains without significant loss of function. He took this as evidence for Karl Pribram’s holonomic theory, which was popularized by Bohm.

    Pragmatically speaking, it seems like it would be much more efficient to encode things holistically. Is it critical for your sim? Probably, considering the performance requirements. The more you can do with fewer neurons…

    • stevegrand says:

      Ah yes, I met Karl at a conference once and we got on, but I think his holonomic theory is rather too literally like a hologram. Myself I favor something analogous but different.

      The slight snag is that I don’t know how to do it! I keep getting glimpses of a holographic principle that might work, and I’ve been doing some simulating to test my friend Dick Gordon’s theory that visual cortex uses a kind of deconvolution akin to computed tomography, in the hope that it will give me some ideas, but so far I haven’t had a breakthrough. Maybe it’ll come to me as I work out a bit more of the architecture. It would certainly be a coup!

  2. Jason Holm says:

    Four computer concepts are rummaging around in my head after reading this. I would think there must be some way to combine them to solve the problem, but I don’t know where to begin to connect them:

    1. Arrays: I took C++ but since I don’t use it daily, it gets a little fuzzy. Scouring the net, I’m recalling things about dynamic array allocation, vectors, and all that. The idea being that you start with a small array, then if you need more, you make a new, bigger array, copy the old array contents into it, and delete the old array. I’m sure you already know this, I’m just explaining so you know where my mind is going, that’s all.

    2. Defragging Hard Drives: As I understand it, one file (or all the files for one app, I forget how it works) on a hard drive might be spread across multiple sectors and all that. Rather than stored in one location, it just finds the next available hole, fills it with the next part of the data it’s saving, then links them all together. The more broken up a file is, the longer it takes to process it, hence why you defrag a hard drive — putting all the parts back together to speed it up by removing all the “links”.

    3. JPEG Compression: The idea behind JPEG compression is that there are large areas of the image that are the same thing (a white background, for instance), so rather than saying “pixel one is white, pixel two is white, pixel three is white…” it says “the next 15,000 pixels are white. got it? now skip ahead to 15,001…”. Rather than defining each pixel’s data, it just “skips over” the “empty” parts, saving storage space. I don’t know if it does this one-dimensionally (the next 15,000 pixels) or two-dimensionally (once you are on Y-line 230, every time you get to a pixel with an x of 150, make the next 300 pixels white. Keep doing this until you reach Y-line 400, meaning you now have a white square from 150,230 to 450,400).

    4. Text Adventure Games: You are in a room. Visible Exits here: North. South. East. West. Northwest. Southwest. Northeast. Southeast. Up. Down. UpNorth. DownSouthWest. In. Out. Between (Pern). Dennis (Homestar). Just in traditional Euclidean space, a cube could have 26 directions if you counted every face, edge AND corner as a separate exit. Text games didn’t really have a 3D map; it was just a 1D list of rooms with hyperlinks like a wiki, maybe similar to how a fragmented hard drive works?

    My point with all this is, couldn’t there be a way to simulate that very ten-million-dimensional hypercube — both in storage AND in mappable space — all with current technology? Especially if most of that space is empty? It just seems like there should be a way to store only the elements that are full, store only the nearby connections that already exist, and create new elements and connections as they are needed, putting them wherever there is empty space in the virtual memory of the app?

    I’ll admit, this might be a SLOW process, especially every time it has to check “are there any filled neurons near me?” but it doesn’t seem like you have to prep the whole brain before it is used, unlike a biological system. Maybe I don’t know enough about computer science but it seems like all those disjointed things have addressed one part or another, so combining them might solve the problem.

    • stevegrand says:

      Hi Jason,

      Yes, there are many ways to handle sparse arrays. As an extreme case you can simply store a list of the entries, tagged by their locations in the space. Admittedly this means tagging each value with ten million indices, so you still wouldn’t fit them on a PC. Also, it would, as you say, be slow – perhaps many orders of magnitude too slow, but there’s nothing difficult in principle about it and you’re right.
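      In code terms the extreme case is just a dictionary keyed by coordinate tuples – memory grows with the experiences rather than with the space, but every key drags its full address around:

      ```python
      # Sparse storage of visited points only, each tagged by its location.
      space: dict[tuple[int, ...], float] = {}

      def visit(coords: tuple[int, ...], value: float) -> None:
          space[coords] = value  # the real case needs a 10,000,000-long tuple

      visit((3, 7, 12), 0.5)     # a toy 3-D point
      print(space.get((3, 7, 12)))
      ```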

      BUT, the issue here is not technical but conceptual and philosophical. I only mentioned the storage problem to emphasize that a ten-million-dimensional array is not just big but way, way, way bigger than our imagination tends to make us think. Perhaps it was misleading to do that. Much more to the point is that I have very strong reasons for following the biology – something I suspect you’ll empathize with.

      You said you’d never heard of AI Nouveau, so maybe the schisms and utter dismal failures of AI aren’t something you’re familiar with. It’s a LONG topic and I don’t have space here, but AI has failed largely because of the notion of computational functionalism. This states that anything that can be computed by a brain can be computed by a digital computer, which is true, within certain limits. But it immediately became dogma – the implied argument was along the lines of, “So we don’t know how the brain works. Who cares about brains anyway? If a brain can compute it then a program can. So let’s just write a program and ignore all that messy biology. Let’s develop an algorithm for thought”.

      But this is nonsense on several levels; primarily the representational level. Yes, there may be dozens of algorithms that can think like a human and yet bear no discernible similarity to how a brain works. Maybe. But say there are a million such solutions (which I rather suspect is almost a million times too optimistic). In the space of all possible algorithms, WHERE are these lucky million? There are millions of millions of millions of millions of millions of millions of millions of possible moderately-sized algorithms. Finding a needle in a haystack is utterly trivial compared to finding an algorithm of thought when you don’t know where to look.

      There’s only one class of machine that we KNOW can think, and that’s the brain. Doesn’t it make sense to search the space of all possible computer programs around the area where simulations of the brain occur? We probably don’t need a detailed, faithful simulation of every receptor and gene, but we can be pretty sure that a close abstraction is more likely to work than an algorithm of a type picked at random from an almost infinite selection.

      And it’s a slippery slope. If I did as you suggest and ignored the “collapse of phase space” hints provided by brains I’d come up with a computational method for storing these data points. But then what? What do I do with them? How do I make them think? I can no longer take any inspiration from neurology, because my internal representation has no similarity to that used by brains.

      To my mind, AI has no hope of solving the problem it actually set out to solve – the creation of general intelligence, as opposed to the puny little sub-problems it has made some progress on – unless it is biologically inspired. If people disbelieve me, then I’m game for a race – we’ll see who gets there first. They may make apparently rapid progress at first, but I have reasons to believe they’re heading for a brick wall.

      Also, I’m trying to write a game containing believable creatures that people can empathize with. If I base these creatures firmly on biology then I can be reasonably sure that their behavior, quirks and pathology will have a biologically plausible feel. Conventional game AI is so wooden and stilted because programmers think they can just emulate the outward behavior of creatures. You can’t. It’s like the difference between a portrait of a person and the person himself. The behavior needs to be emergent, and the only way you’ll get emergent behavior that’s biological in character is if it emerges from a simulation that’s biological in structure.

      So I’m very wary of computational representations that abstract too far away from neuroscience. I do have to simplify things to make this work, but if I did as you suggest then I’d have walked too far from the trail to be able to find my way.

      Does that make sense? You raise an important point about representation, which I’ve tried to discuss elsewhere but it’s quite an involved thing to explain, especially if you’re not familiar with the history of artificial intelligence.

      • stevegrand says:

        P.S. Someone may object, saying “but surely if it’s hard to figure out the algorithm of thought from scratch, then it would be just as hard to pick any other algorithms – a word processor algorithm, say – from the same vast space? Since we CAN write word processors from first principles, without having to copy the mechanism of a typewriter to do it, then surely we can do the same with intelligence?”

        But here’s the challenge. To design a word processor, or any other conventional program, you have to be able to specify its behavior (although in some special cases you don’t care about its behavior – you just want to see what it does). There are formal languages for describing the behavior of computer programs – UML being the best-known example. It is possible to write a UML specification for precisely how a word processor should behave under all circumstances (people don’t usually do this with word processors, but they do for mission-critical applications). Once the behavior is known, it’s then possible to deduce and prove the algorithm.

        Ok, so write me a UML specification for how a human being will behave under all possible circumstances…

  3. Jason Holm says:

    So… is the solution (maybe not for this game specifically, but AI in general) to design a whole new system of hardware and science to back it up? Is the concept of binary machines the problem? If memristors aren’t the key, could it be qubits? Will believable AI ever be possible with existing technology and computer science, or do we need to start again from the ground up? Is there any technology that was once experimented with, deemed not “efficient” enough for computing needs, and abandoned… but maybe should be dug up and reexamined for the problems of AI? I’m talking like, back to Babbage — was there anything since then that is useless to computers but useful to AI?

    • Terren says:

      Hey Jason, I don’t think the problem lies with computational functionalism per se. The paradigm that Steve is destroying here is the one in which behavior is explicitly defined for some range of circumstances (e.g. chess-playing software). I believe that paradigm resulted from a faulty conception of how we humans operate – the illusion that we are always in control of our behavior. When you really go deep into it, you have to ask, “who is it that controls?”. There really is no satisfactory answer to that question, because the question supposes the existence of ONE thing that controls behavior.

      So to take biology as inspiration as Steve is doing is to abandon the notion that there is a control center that executes commands. Realizing that kind of system on a computational platform that ironically implements exactly that control paradigm requires that the behavior must emerge somehow from a lower level. That is where AI is headed, in my opinion.

    • stevegrand says:

      What Terren says is true, and I don’t have any problem with functionalism per se, but I do have a number of problems with the dogma/paradigm that arose from it.

      Control is an interesting topic. Does a thermostat control the central heating, or the heating control the thermostat? Control implies a feedback loop and loops have no beginning or end.

      In response to your question, there are pre-digital ideas that ought to be revitalized, imho, but for conceptual reasons rather than anything to do with the limitations of computers. Computers can approximate any kind of machine we like, so although they may do it too slowly for practical purposes, they’re just fine for the job. We don’t need any fancy new components just yet.

      The important point (in fact the subject of an entire book that I’m currently failing to write) is that emergent phenomena (like intelligence) are a product of the CONFIGURATION of things, not the individual properties of the parts. Physics has lost sight of configurations – the mathematics of physics can only deal with equations containing small numbers of variables, not millions. The only massively parallel systems that physics can deal with are linear ones, such as gases, where the individual trajectories of the molecules can be reduced to a handful of numbers (pressure, temperature, volume). Physics doesn’t deal well with things where the configuration is the most important factor (such as electronic circuit design, or biology). As a result, physicists have fallen into a dogmatic reductionism – they take it for granted that if a system has property X, one of its parts must have property X. This leads them to believe that intelligence or consciousness (being mysterious properties of the whole) must be caused by some mysterious property of the parts (quantum behavior, or memristors, or black magic). This just isn’t true. We don’t understand intelligence because we don’t understand the CIRCUITRY of the brain. There’s nothing intrinsically weird about the neurons themselves. The same is true for AI. We don’t understand the CONFIGURATION (whether that means a virtual architecture or a plain old algorithm). It’s not that we need some missing magic ingredient.

      So memristors or qubits or whatever may well be useful to make AI practical, but we don’t need them in order to be successful theoretically – we just lack anything approaching a theory. The digital computer paradigm has been horribly misleading, that’s for sure, but again that’s a mindset problem. Computers themselves are perfectly capable of being intelligent (or rather, implementing a virtual machine that is intelligent). We just don’t know how to do it.

  4. torea says:

    I don’t understand your explanation of how the brain projects n dimensions into two.
    It seems that the n dimensions, which are sensory inputs or motor outputs, are each processed, at some point, by a different neuron. Thus we basically get: one neuron = one dimension. Do I have that right?

    From this, since the 2D cortical map is a set of n neurons arranged on a 2D surface, we still have n dimensions.
    For example, vision processes 2D images, but each image has as many dimensions as it has pixels. Local groups of pixels are processed in parallel by other neurons (= additional dimensions) higher in the hierarchy, which output specific local projections: orientation, motion, color, etc.
    Each level of the hierarchy adds a number of dimensions, which are used to compute particular projections of the dimensions in the level below.

    • stevegrand says:

      Yes, I see how you got there. But there are other steps that I didn’t mention, I suppose because they form part of my basic assumptions.

      You’re right that the signals on the various input and output neurons are n-dimensional, but these only record the PRESENT state of the world. Brains are there to be intelligent, and intelligence implies that when the world is in one state, the organism changes it to another. This in turn implies memory – some way to recognize elements of the present state and relate them to actions that alter the state.

      So if ten million neurons represent the ten-million-dimensional space at any one instant, we’d need another ten million neurons to represent the next instant. Clearly this isn’t possible – we’d run out of neurons in no time. But also, it doesn’t make any sense to LEAVE the information in this ten-million-dimensional form. It isn’t possible for intelligence to arise in this form. This is a bit hard to explain, but perhaps you can intuit why this is so? It’s only possible to make sense of such a complex world by simplifying it – by finding abstractions that allow such stupendous intricacy to be generalized.

      So yes, the brain as a whole has MORE dimensions than the inputs and outputs alone – a whole ten billion more dimensions. But the purpose of all these interneurons is to form some kind of representation of the space that makes more sense than the original and allows general rules to be applied. What I’m suggesting is that these input dimensions are MAPPED onto a low-dimensional space, such as a sheet, in such a way that generalizations can be made. So now I’m kind of talking about a different context – the neurons that make up the sheet add more dimensions in the original mathematical sense, but we can ignore this. The point is that they are being used to represent a map of the input space. By arranging things in terms of body position, orientation, modality, etc. the neurons are learning the statistics of this high-dimensional space and (I suggest) rendering it into a more usable form. At this higher level of abstraction, the brain is capable of making general statements and storing many millions of memories and associations, which would be too bulky and too meaningless to store in the original form. The dimensionality of the neurons themselves is pretty much irrelevant by this stage because they’re just the medium through which billions of these ten-million-dimensional vectors are represented and handled. It’s kind of like drawing a complex molecular structure on a piece of paper so that you can understand it – the paper itself contains zillions of other molecules, so at a pedantic level you’ve made the situation vastly worse, and yet somehow that’s irrelevant.

      I haven’t explained this very well. I can see it in my head but can’t find the words for it. Am I making more sense yet?

      • torea says:

        Thank you for the clarification. I understand a little better what you mean.

  5. Ben Turner says:

    Hi Steve – I’ve got a number of amorphous sorts of thoughts about this post that I wanted to get on virtual paper before they escape. First off, I appreciate this problem, because one of my current projects deals with use of pattern classification in fMRI data analysis, so I spend perhaps a bit more of my time these days than most people thinking in 70,000-dimensional space (and once you’re there, the leap to 10,000,000-dimensional space is trivial, n’est-ce pas?). Although I’m currently just using a linear classifier, I’ll soon be moving to one that will likely make use of multi-dimensional scaling, or something like it, because I’ll want to find a rotation of my axes such that shrinking and stretching different dimensions will improve the separability of separate classes; although I won’t be doing it, a fairly extreme extension of this would be to find the best rotation and compression/expansion such that all but two of the dimensions are compressed to nothing. Although I have it much easier than the brain, because there are only two classes, and I know the labels for every observation, I wonder if you have any thoughts on how the brain accomplishes this—particularly in light of the fact that, you’re probably quite right, and a single neuron (slash microcolumn slash whatever actually constitutes the basic computational unit in cortex) probably is involved in multiple distributed representations, so simply activating an adjacent neuron may not be sufficient if that adjacent neuron’s distributed buddies aren’t also being activated.

    Incidentally, the lab I’m in focuses mainly on human category learning, but more of the perceptual variety, and in a constrained domain (it’s surprising how little joy undergraduates get out of trying to figure out which Gabor discs are As or Bs… they probably think it’s all a lot of BS, in fact…), so I’m interested to hear your thoughts on the matter. I won’t push that work, but will instead point out that a lot of what you’ve written in the past few posts reminds me strongly of a book I recently read—Anathem by Neal Stephenson—in particular, the key role that creating mental simulations plays in many aspects of human existence. If you find yourself overwhelmed by the spare time that undertaking the most awesome/ambitious project ever must afford, you might consider picking it up… at about 13,000 pages (if I recall), it could be some much-needed exercise of the sort that hacking and thinking fail to provide. Also, you could read it.

    • Ben Turner says:

      See, didn’t write quickly enough, so this thought escaped: just wanted to get your take on graph theoretical approaches to looking at the brain, particularly with respect to functional connectivity. I know there was a flurry of excitement a few years back when people showed that the brain might be a small-world network, which I typically think is a bit of a buzzword, but I also think it has relevance here, in terms of potentially shedding some insight into the exact degree to which regions that are either frequently coactivated or else represent similar information, are proximal to one another physically. Of course, the more interesting question is how the brain manages that, but that’s for another time…

    • stevegrand says:

      Hi Ben, sorry about the delay – I’ve been driving across the continent and didn’t have time to search for wi-fi connections.

      Wow! Lots of interesting thoughts there (and in the later comment).

      > wonder if you have any thoughts on how the brain accomplishes this [dimensional collapse]

      My amateur guess is that it’s ‘just’ a matter of statistical learning of patterns and their spatial organization through competition/cooperation between neurons, rather than any top-down restructuring of dimensions. I found with my robot that if you allow simulated neurons to adjust their tuning whenever they fire, and make sure there is long-range lateral inhibition and short-range facilitation, you encourage cells to develop unique specialisms whilst clustering near to cells with similar tunings. This spontaneously develops in a very similar way to V1 (edge-selective and motion-selective cells, clustered in whorls of progressive orientation preference). I’m hoping the same principle will find the most relevant input patterns and categorize them, regardless of function. We’ll see!
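      Schematically it’s something like this one-dimensional toy – not the robot code, just the adjust-when-you-fire rule with a facilitate-near/inhibit-far kernel:

      ```python
      import numpy as np

      # Cells on a (1-D, for brevity) sheet adjust their tuning whenever they
      # fire; a difference-of-Gaussians kernel facilitates near neighbors and
      # inhibits distant ones, so cells specialize yet cluster by similarity.
      rng = np.random.default_rng(2)
      n_cells, n_dims = 100, 8
      tuning = rng.random((n_cells, n_dims))
      d = np.abs(np.arange(n_cells)[:, None] - np.arange(n_cells)[None, :])
      kernel = np.exp(-d**2 / 8.0) - 0.5 * np.exp(-d**2 / 200.0)

      def step(x: np.ndarray, lr: float = 0.05) -> None:
          response = tuning @ x                    # how well each cell matches x
          activity = np.clip(kernel @ response, 0.0, None)
          activity /= activity.max() + 1e-9        # keep updates bounded
          tuning[:] += lr * activity[:, None] * (x - tuning)

      for _ in range(500):
          step(rng.random(n_dims))
      ```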

      > you’re probably quite right, and a single neuron (slash microcolumn slash whatever actually constitutes the basic computational unit in cortex) probably is involved in multiple distributed representations

      Good, I’m glad to hear that! I have the beginnings of a possible architecture now, and I think this aspect will be important.

      Thanks for the book suggestion – I MAY one day get some time for reading!

      Re small world networks: I agree about that being a buzzword – as if it’s somehow The Answer – but I also agree that it may say something important about the connectivity and the ‘logic’ behind it. I wish I knew what, though! Perhaps it simply tells us that the network is hierarchical?

      More below…

  6. Ben Turner says:

    So I just left my first ever social psychology brownbag, and amazingly enough, it ended up being on the exact same thing I was talking about above with regard to the Anathem concept of simulation. Jason Mitchell out of Harvard was speaking, and presented some results being prepared for publication regarding the role of medial vPFC. At first he focused largely on the idea that it is a theory-of-mind region that shows up when thinking about the self or thinking about others. He also showed, interestingly, that subjects who have a low tolerance for delayed gratification (as measured by how much extra money above $10 you have to give them before they’ll agree to wait until next week to get any money rather than taking the $10 now) seem functionally to think of their future selves as different people.

    However, then he showed some results that turned the concept of this region as a “ToM/social cognition area” on its ear: by using similarity in four quite different domains (e.g., now vs later, here vs there, someone like me vs someone unlike me, and another I can’t remember), it became obvious that this region didn’t care at all about social relationships or agency or anything like that, but was either just computing psychological distance for any two concepts, or else was involved in simulating a counterfactual situation (i.e., counterfactual to the present; this could include imagining a possible future or the actual past). A lot of this is related to work by Nira Liberman and Yaacov Trope. Again, this is all new research to me, so I can’t say anything intelligent about it, but I thought it seemed germane.

    One last interesting point that came up is the role of temporoparietal junction cortex: it seems that this area is also often considered as part of some “social cognition” network, but in support of that, someone cited ongoing research that shows activity in this region while suppressing one’s own sensation in favor of imagining the sensation being experienced by someone else. This reminded me of the role the TPJ plays in the models of Corbetta, Patel, and Shulman, where it helps act as a circuit-breaker of sorts for attention. Again, maybe TPJ isn’t some social cognition cog, but rather simply plays some role in switching off or on the appropriate input (external or simulated) at the appropriate time.

    Lest you think I’m a fan of compiling lists of brain areas and their “functions”, I can’t think of a more pointless exercise (well, okay, I can, but when fMRI studies cost about $7500 a pop, plus countless hours of analysis, I am concerned about the fact that they still seem to be exponentially increasing in number over time). However, although I don’t do anything near as complicated as what you’re undertaking, I still do modeling (of the computational, biologically plausible variety), and so I know the utility of being able to find evidence of certain computational mechanisms being instantiated biologically. In that sense, being able to have a general-purpose simulator or psychological distance-calculator and a stop-doing-this-er in the toolbox is a useful thing. Of course, you’re still left with the problem of HOW to instantiate those in an elegant way, but since it only took evolution about 4 billion years to develop them, as long as your deadline is still a year away, you only have to be about 4 billion times smarter than evolution, and let’s face it, I doubt you’ll be building a coccyx or an appendix in your digi-critters =)

    • Terren says:

      Ben, fascinating and funny comments, thanks!

    • stevegrand says:

      …ok, so…

      > it seems that these subjects functionally think of their future selves as different people.

      That’s very interesting, and fits with what you said next.

      > either just computing psychological distance for any two concepts, or else was involved in simulating a counterfactual situation

      I’ll have to think about that, but perhaps it’s inevitable that differences in activation will show up toward the top of the hierarchy when the situation is complex (e.g. social)? Lower down, sensations (imagined or real; egocentric or ToM) are going to have more-or-less the same kinds of pattern (just as all English or French sentences are composed of roughly the same set of letters). Only at the top of the hierarchy do “me” and “you”, “now” and “future” look appreciably different. So we might expect such differences to show up in the PFC (and to a lesser extent in other “association areas”). It wouldn’t mean the PFC is “processing” social cognition, just that this is the only place where the categories are going to be distinguishable. Just thinking off the top of my head (pun not intended!) here, so not sure if it has any relevance.

      > I know the utility of being able to find evidence of certain computational mechanisms being instantiated biologically

      Yes, I bet. If only the damn parts would come with labels, though! I keep getting the feeling that, if we’d studied a car engine instead, we’d identify the pistons as “explosion dampers” and the carburettor as an “air moistener”! My guess is that when this does all start to make some sense, we’ll be kicking ourselves for misinterpreting so much. But on the other hand, as you imply, we’ll never figure it out unless we have some kind of functional decomposition.

      Thanks very much for the useful info – I’ll go away and consider my new model in the light of it!

  7. 1) Thank you SO much for sharing your mind with the world FOR FREE. I appreciate sharing a part of it.

    2) I’m toying with some similar concepts in my work (as an engineer) and play (as an artist) that have introduced this challenge of organizing multi-dimensional data according to their proximity. In my work, the “puzzle” is generating a few 2D and 3D geometrical approximations from many (billions) of 1D points (i.e. processing surveyed point cloud data). In my play, my fascination is with life and learning and “affecting” visuals, so I’ve been dreaming about systems I can create to emulate life/learning to generate interesting graphics/animations. I’d like to share with you a recent organizational system I envisioned that might help. I’d love to get your feedback.

    In working with billions of points, efficiency in searching for “close” points (a.k.a. “nearest neighbors”) is critical to any reasonable processing. One common approach to this is the use of octrees (http://en.wikipedia.org/wiki/Octree), which I assume you’re well acquainted with. I’ve used the octree approach in my point cloud processing to break down one huge file containing millions of points into several smaller ones according to their proximity, to speed up searches exponentially. Here’s the concept I’d like to share: imagine a random cloud of points suspended in mid-air. Each point could be envisioned as a small ball with rigid rods connecting its center to those adjacent to it – connections that mimic something like arcs of electricity between them. If these connections were conductive, there would be a path from any one point to any of the others; all points would have a connected path between them. If an additional point were added to the cloud, it would either be a) a point “outside” the cloud, whose connection would be to its nearest point, or b) a point “within” the cloud, whose connection would be to two or more nearest points. The connections would behave like sparks flowing between nodes, so no “jumping” to distant nodes would occur where shorter paths of “least resistance” could be found. I’ll call these connections “relational neighbors” for the time being.

    Here’s how I imagine this concept could apply: each point in this picture represents a location in three-dimensional space, but it could be expanded to more dimensions, like a data point with multiple variable values. If each data point were allowed to contain not only information about its “location” (or multi-dimensional data) but also about its “relational neighbors”, these data could be quickly and easily traversed according to their proximity to one another on any level. When a new datum was added, its “relational neighbors” [RN] could be determined by starting from any existing datum and querying its RNs to find which of them the newcomer was closest to; it would thereby take the closest path to its “relations in the world” and also effect the re-relating of those it connects to.

    In this way, a static (albeit re-writable) list of points could be replaced by a dynamically traversable set of points. I hope I’m making sense (it sounds good in my head) and am addressing the same challenge you’re describing with self-organizing. Here’s a literal example of the comparison I’m seeing:

    Case 1: One file containing a list of point locations in the format “X, Y, Z”.

    Case 2: One folder containing a set of point files. Each point file has a unique name by which it can be addressed and read. Each point file contains both its location data (static) and relational data in the form of a list of point file names of other points considered “close”.

    Your 16x16x16 cube example is perfect for this. Instead of a point’s order in a list facilitating proximity, it would be a point’s “relational neighbors” (hyperlinks, as it were) that would facilitate this. Paired with an algorithm that updates the list of RNs, proximity could be dynamic. To build on this, there could be several different lists of RNs, inside or outside of a single “point file” – each representing a different hierarchy of proximity.
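    A rough sketch of the insertion step (hypothetical names, plain Euclidean distance, and a greedy walk standing in for the path of “least resistance”):

    ```python
    import math

    # Each point knows its coordinates and its 'relational neighbors' (RNs).
    graph: dict[int, dict] = {}  # node id -> {"coords": tuple, "rn": set}

    def dist(a: int, b: int) -> float:
        return math.dist(graph[a]["coords"], graph[b]["coords"])

    def insert_point(node_id: int, coords: tuple) -> None:
        graph[node_id] = {"coords": coords, "rn": set()}
        if len(graph) == 1:
            return                    # the very first point has no relations yet
        current = next(k for k in graph if k != node_id)  # start anywhere
        while True:                   # greedy walk toward the newcomer
            closer = [n for n in graph[current]["rn"]
                      if dist(n, node_id) < dist(current, node_id)]
            if not closer:
                break
            current = min(closer, key=lambda n: dist(n, node_id))
        graph[node_id]["rn"].add(current)   # link both ways to the nearest found
        graph[current]["rn"].add(node_id)
    ```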

    Thoughts?

    • stevegrand says:

      Wow, a gigapoint octree! I’m guessing you don’t run that on an iPhone…

      I think I get what you’re saying technically, but I’m not sure what it is you’re trying to do with it. It fills me with fond associations, though, so it must be something interesting! When I was 11 I used to be allowed to wreck the school physics lab during lunchtimes, and one of the things I either built or reconstructed (can’t remember which) was a spark detector, for detecting ionizing radiation. It had a high PD across two slightly separated metal grids, tuned to just below the breakdown voltage, and a stray cosmic ray would be enough to trigger a spark. Except it didn’t work right: the local drop in voltage would cause a capacitive backlash somewhere else, which would create another spark, and another… It was seriously chaotic and really intrigued me! It’s not the same, I know, but it had that “finding the shortest path” quality that you’re talking about. There’s a thunderstorm going on outside as I type, so that would have been a better example, but what you describe gives me a warm, fuzzy feeling, so I’m just going with that. Ooh, and I simulated an array of magnetic compasses once. Each compass would try to turn towards the local north, but because it was itself a magnet it disrupted the field that the others were oriented to. So they adjusted, causing the first to have to adjust all over again. It was quasi-stable and would crystallize, but tiny perturbations could create big and semi-structured, metal-like dislocations…

      Anyway, sorry, I was rambling. Sounds like a great architecture for lots of things. What are you using this for outside of work? I’m always interested in emergent art.

      • Thank you for engaging with me! Super honoring. Sounds like you had pretty special teachers/learning-encouragers growing up. Wrecking a lab sounds fantastic.

        Well, I’m double-dipping a bit, cross-pollinating ideas between what I suspect will be useful in my vocation and what I find fascinating for their own sake. What I’m using this for outside of work is as an experiment to learn about and play with, like yourself, the concept of life and learning.

        I’m not quite sure where it’ll lead (one never does), but I’m sure it will involve me generating visualizations and learning a great amount. I’ll try to paint the picture I’m going for.

        I’m designing a system to emulate life starting in a very simple way, but attempting to go about it flexibly enough to be self-building/evolving. Brewing and baking have been hobbies of mine for a long time, so I’m quite familiar with, and fond of, enzymes, bacteria, and fungi. I’m starting with the concept of a world with a large but finite number of 1D points that will be food for higher-dimension organisms. One of the first kinds I imagine is what I call Lipcos [linear point connecting organisms]. I’ll build in constraints to emulate the conservation of mass-energy and express this with digital analogies (like the net amount of data in this world will never change, it will just transfer between states/organisms). I’m tying together many concepts I’ve been musing on, such as the significance/efficiency of relationally defined structures and some models for genetic characteristics and DNA (in math and programming terms).

        As an example, I start with a file that contains a few million point locations. I’ll process this into individual files in a folder. Each file will represent an inanimate point to be consumed. Each will have a unique name. Initially, each point will contain its “objective” world coordinate location in its own file/definition. I will then plant an organism on a point. It will not contain any objective coordinate location in itself, only references to other point files. It will also contain all the information it needs to be actuated. Like the order of game play for a board game like Risk, each “living” file in the world will get a chance to “play its hand” at each round. The application would get its instructions for how to handle a file within the file itself (its metabolism, operations, motivations, reproduction, etc.). Like enzymes breaking self-defined points (x,y,z) into relatively defined points (a host line and distance from its start), it will consume surrounding points in the most beneficial way it can (as defined by metabolism and such). This reduction in the data needed to define a point’s location (from three doubles to a long and a double) will be the mechanism that transfers data/energy from a point to a Lipco. Through genetic learning, since the underlying geometry of points in a location will be mostly uniform (except at edges, which other Lipcos could be adapted for), I expect Lipcos well adapted to make beneficial connections (i.e. as linear/best-fit as possible, and thereby good approximations) will thrive there.

        Not only will this teach me a lot about how life works, but I’m very much looking forward to simply watching these Lipcos grow, reproduce, die, and change their environment in ways that I cannot predict. I love the animated visuals.

        Like I said, I’m double-dipping with my work and play, but since my economy currently can’t yet support all the playing and creating I want to do, I’ve got to work with what I got 😉
