Seeing the wood for the trees

A while back I wrote a piece about bonobos and chimpanzees – how different they are and how human political differences might be a reflection of these two ways of life.

One thing that struck me about bonobos is that they are separated from chimpanzees by nothing more than a river. The Congo River is apparently what separated two populations of their common ancestors a couple of million years ago and prevented them from interbreeding. One population went on to become modern chimpanzees and the other bonobos. Once their genes were no longer able to mingle, it was inevitable that they would diverge from each other in both physiognomy and behavior.

What was it about the south side of the Congo that favored collaboration and appeasement instead of dominance and aggression? I have no idea, but it needn’t have been very much at all. The tiniest difference in habitat could lead to a change in culture (such as a shift in the roles of males and females) and this in turn would have knock-on effects. Positive feedback would soon lock in these changes and drive an expanding wedge between the two populations.

In modern humans, chimpanzee-like right-wing behaviors and bonobo-like left-wing behaviors coexist, but very uneasily. Empathy, for instance, serves different purposes in each mode: “socialism” (with a small “s”) is fundamentally based upon empathy in the form of sympathy – the understanding that other people suffer like we do, and if we help and support each other we can minimise this suffering for all. “Capitalism”, meanwhile, makes use of empathy to outwit other people. A CEO who can walk into a business meeting and immediately grasp what everyone around the table is thinking will come away with a better deal. The consequences of this difference are profound. To a libertarian conservative, for instance, government is an unwanted imposition – a Them who controls Us. It’s an Alpha Male to be feared, opposed and ideally got rid of. Meanwhile, from the perspective of a liberal, the government actually is us; it is the collective will; the way we look out for each other. It’s no wonder the two sides fail to understand each other. In America and the UK this tension is very strong at the moment and it sometimes makes me feel that humans must be descended from the interbreeding of two previously separated species, because the two points of view aren’t very compatible and evolution might have been expected to opt for either one or the other. Bonobos and chimpanzees certainly did.

All this came back into my mind this morning when I read this article in Machines Like Us. The gist of it is that Australopithecus afarensis appears to have walked upright on two feet, in roughly the front-of-foot way that we humans do, rather than the bowlegged way that other primates do. And they were doing this by almost four million years ago – around the time the human bloodline separated from the chimp/bonobo bloodline.

It made me wonder what kind of “Congo river” might have separated the two lines, and it’s really not hard to imagine. Chimpanzee and orangutan feet are designed for living in trees – their mastery of the arboreal mode of transport is astounding from the perspective of a human being, whose feet are utterly useless for dangling from branches. Every time I watch a primate leap confidently from branch to branch I find myself in awe and not a little envious.

But suppose the trees thin out? There are clear limits to how far apart branches can be whilst still being able to support two hundred pounds of leaping flesh. When trees get too thin on the ground, primates have to climb down and walk. For a quick dash, followed by a rapid climb back into safety, chimpanzee feet are ideal, but there will come a point when efficient running becomes far more important than efficient climbing and leaping. There are no tigers in the trees (which is basically why primates live in them), so being a bit ungainly in the canopy is not nearly as serious as being unable to reach the safety of the next trunk. The evolutionary advantage of good running feet would very quickly be tested, once running became necessary.

And what then? Once you perform better on the ground than in the canopy, you can free your hands. You have to watch out more carefully for predators and find ingenious ways to thwart them (even using sticks as weapons, maybe). Sex becomes different. Meetings tend to happen face-to-face instead of face-to-ass. Perhaps females carrying young need protection. You are presented with vistas that exceed a mere wall of leaves. A thousand things have suddenly changed, and each of those thousand things would go on to create a thousand other changes. And all because the trees got too far apart to leap between.

Perhaps this was all it took to make the human race? Perhaps we’re just the descendants of incompetent leapers who had to evolve bizarre and expensive tricks like literature and intelligence in order to survive on the ground when we could no longer stay hidden in the trees. As we dash (by elevator) from the safety of our office-trees to the safety of our house-trees and climb the wooden stairs to bed, on feet and hips that are very much designed for the ground, it’s sobering to think that most of what we see around us might have been caused by a bit of a lingering drought, four million years ago.

Maybe I should go for a run…


Brainstorm 5: joining up the dots

I promised myself I’d blog about my thoughts, even if I don’t really have any and keep going round in circles. Partly I just want to document the creative process honestly – so this includes the inevitable days when things aren’t coming together – and partly it helps me if I try to explain things to people. So permit me to ramble incoherently for a while.

I’m trying to think about associations. In one sense the stuff I’ve already talked about is associative: a line segment is an association between a certain set of pixels. A cortical map that recognizes faces probably does so by associating facial features and their relative positions. I’m assuming that each of these things is then denoted by a specific point in space on the real estate of the brain – oriented lines in V1 and faces in the FFA. In both these cases there are several features at one level, which are associated and brought together at a higher level. A bunch of dots maketh one line. Two dark blobs and a line in the right arrangement maketh a face. A common assumption (which may not be true) is that neurons do this explicitly: the dendritic field of a visual neuron might synapse onto a particular pattern of LGN fibres carrying retinal pixel data. When this pattern of pixels becomes active, the neuron fires. That specific neuron – that point on the self-organizing map – therefore means “I can see a line at 45 degrees in this part of the visual field.”
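To pin the idea down, here’s a toy sketch in Python (the template and threshold values are invented for illustration, not anyone’s model of V1): a unit whose synaptic weights form a 45-degree template over a small pixel patch, and which fires when the weighted sum of its inputs crosses a threshold.

```python
# A toy "line-detector neuron": weights form a 45-degree template over a
# small patch of "LGN inputs"; the unit fires when the weighted sum of the
# patch crosses a threshold. Values are illustrative, not biological.
import numpy as np

def make_45_degree_template(size=5):
    """Synaptic weights: excitation along one diagonal, mild inhibition elsewhere."""
    w = -0.2 * np.ones((size, size))
    np.fill_diagonal(np.fliplr(w), 1.0)   # the anti-diagonal plays the 45-degree line
    return w

def neuron_fires(patch, weights, threshold=3.0):
    """This unit 'means': a 45-degree line at this patch's retinal position."""
    return float(np.sum(patch * weights)) > threshold

line_patch = np.fliplr(np.eye(5))          # five pixels forming a 45-degree line
blank_patch = np.zeros((5, 5))
print(neuron_fires(line_patch, make_45_degree_template()))    # True
print(neuron_fires(blank_patch, make_45_degree_template()))   # False
```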

But the brain also supports many other kinds of associative link. Seeing a fir tree makes me think of Christmas, for instance. So does smelling cooked turkey. Is there a neuron that represents Christmas, which synapses onto neurons representing fir trees and turkeys? Perhaps, perhaps not. There isn’t an obvious shift in levels of representation here.

Not only do turkeys make me think of Christmas, but Christmas makes me think of turkeys. That implies a bidirectional link. Such a thing may actually be a general feature, despite the unidirectional implication of the “line-detector neuron” hypothesis. If I imagine a line at 45 degrees, this isn’t just an abstract concept or symbol in my mind. I can actually see the line. I can trace it with my finger. If I imagine a fir tree I can see that too. So in all likelihood, the entire abstraction process is bidirectional and thus features can be reconstructed top-down, as well as percepts being constructed/recognized bottom-up.
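Just to make that bidirectionality concrete, here’s a minimal sketch (the class and method names are mine, purely for illustration): registering an association in one direction automatically registers the reverse, so either item can act as a cue for the other.

```python
# A toy bidirectional association store: linking "fir tree" with "Christmas"
# lets either cue recall the other, mirroring bottom-up recognition and
# top-down reconstruction. Names are illustrative only.
from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        self.links = defaultdict(set)

    def associate(self, a, b):
        self.links[a].add(b)      # bottom-up: percept evokes concept
        self.links[b].add(a)      # top-down: concept reconstructs percept

    def recall(self, cue):
        return self.links[cue]

m = AssociativeMemory()
m.associate("fir tree", "Christmas")
m.associate("turkey", "Christmas")
print(m.recall("Christmas"))   # {'fir tree', 'turkey'}
print(m.recall("turkey"))      # {'Christmas'}
```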

But even so, loose associations like “red reminds me of danger” don’t sound like the same sort of association as “these dots form a line”. A line has a name – it’s a 45-degree line at position x,y – but what would you call the concept that red reminds me of danger? It’s just an association, not a thing. There’s no higher-level concept for which “red” and “danger” are its characteristic features. It’s just a nameless fact.

How about a melody? I know hundreds of tunes, and the interesting thing is, they’re all made from the same set of notes. The features aren’t what define a melody, it’s the temporal sequence of those features; how they’re associated through time. Certainly we can’t imagine there being a neuron that represents “Auld Lang Syne”, whose dendrites synapse onto our auditory cortex’s representations of the different pitches that are contained in the tune. The melody is a set of associations with a distinct sequence and a set of time intervals. If someone starts playing the tune and then stops in the middle I’ll be troubled, because I’m anticipating the next note and it fails to arrive. Come to that, there’s a piano piece by Rick Wakeman that ends in a glissando, and Wakeman doesn’t quite hit the last note. It drives me nuts, and yet how do I even know there should be another note? I’m inferring it from the structure. Interestingly, someone could play a phrase from the middle of “Auld Lang Syne” and I’d still be able to recognize it. Perhaps the tune is represented by many overlapping short pitch sequences? But if so, then this cluster of representations is collectively associated with its title and acts as a unified whole.
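Here’s a quick sketch of that overlapping-fragments conjecture, using absolute note names for simplicity (intervals would be truer to life, since melody recognition survives transposition, and the notes below are only approximate): every three-note fragment of a tune is indexed, and a phrase heard mid-tune is recognized by a vote among the fragments it contains.

```python
# Melody as many overlapping short pitch sequences: index every 3-note
# fragment of each tune, then recognize a phrase from the middle by voting.
from collections import defaultdict, Counter

def trigrams(notes):
    return [tuple(notes[i:i+3]) for i in range(len(notes) - 2)]

class TuneMemory:
    def __init__(self):
        self.index = defaultdict(set)   # fragment -> tunes containing it

    def learn(self, name, notes):
        for frag in trigrams(notes):
            self.index[frag].add(name)

    def recognize(self, phrase):
        votes = Counter()
        for frag in trigrams(phrase):
            for name in self.index[frag]:
                votes[name] += 1
        return votes.most_common(1)[0][0] if votes else None

mem = TuneMemory()
mem.learn("Auld Lang Syne", ["C", "F", "E", "F", "A", "G", "F", "G", "A"])
print(mem.recognize(["F", "A", "G", "F"]))   # a phrase from the middle still matches
```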

Thinking about anticipating the next note in a tune reminds me of my primary goal: a representation that’s capable of simulating the world by assembling predictions. State A usually leads to state B, so if I imagine state A, state B will come to mind next and I’ll have a sense of personal narrative. I’ll be able to plan, speculate, tell myself stories, relive a past event, relive it as if I’d said something wittier at the time, etc. Predictions are a kind of association too, but between what? A moving 45-degree line at one spot on the retina tends to lead to the sensation of a 45-degree line at another spot, shortly afterwards. That’s a predictive association and it’s easy to imagine how such a thing can become encoded in the brain. But turkeys don’t lead to Christmas. More general predictions arise out of situations, not objects. If you see a turkey and a butcher, and catch a glint in the butcher’s eye, then you can probably make a prediction, but what are the rules that are encoded here? What kind of representation are we dealing with?
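At its crudest, predictive association is just transition-counting, as in this toy sketch (the “states” are obviously stand-ins for whatever representation situations really get):

```python
# A minimal predictor: count observed transitions between situations, then
# "imagine forward" by following the most likely successor.
from collections import defaultdict, Counter

class Predictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, state_a, state_b):
        self.transitions[state_a][state_b] += 1   # "A usually leads to B"

    def predict(self, state):
        succ = self.transitions[state]
        return succ.most_common(1)[0][0] if succ else None

p = Predictor()
p.observe("butcher eyes turkey", "turkey in oven")
p.observe("turkey in oven", "dinner")
state = "butcher eyes turkey"
while state:                      # a tiny narrative, simulated forward
    print(state)
    state = p.predict(state)
```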

“Going to the dentist hurts” is another kind of association. “I love that woman” is of a similar kind. These are affective associations and all the evidence shows that they’re very important, not only for the formation of memories (which form more quickly and thoroughly when there’s some emotional content), but also for the creation of goal-directed behavior. We tend to seek pleasure and avoid pain (and by the time we’re grown up, most of us can even withstand a little pain in the expectation of a future reward).

A plan is the predictive association of events and situations, leading from a known starting point to a desired goal, taking into account the reward and punishment (as defined by affective associations) along the route. So now we have two kinds of association that interact!
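As a toy illustration of that interaction (the situations and affect values are invented), a “plan” falls out of searching the predictive links for a chain from a starting point to the goal that scores best on the affective weights along the way:

```python
# Plan = predictive links + affective weights: enumerate chains through the
# predictive graph and pick the one with the best total affect.
def plans(graph, start, goal, path=None):
    path = (path or []) + [start]
    if start == goal:
        yield path
    for nxt in graph.get(start, {}):
        if nxt not in path:                      # no loops
            yield from plans(graph, nxt, goal, path)

def affect(graph, path):
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

# predictive links, each tagged with affect (negative = pain)
graph = {
    "toothache": {"dentist": -5, "ignore it": 0},
    "dentist":   {"tooth fixed": +10},
    "ignore it": {"worse toothache": -20},
    "worse toothache": {"dentist": -5},
}
best = max(plans(graph, "toothache", "tooth fixed"),
           key=lambda p: affect(graph, p))
print(best)   # ['toothache', 'dentist', 'tooth fixed']
```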

To some extent I can see that the meaning of an associative link is determined by what kind of thing it is linking. The links themselves may not be qualitatively different – it’s just the context. Affective associations link memories (often episodic ones) with the emotional centers of the brain (e.g. the amygdala). Objects can be linked to actions (a hammer is associated with a particular arm movement). Situations predict consequences. Cognitive maps link objects with their locations. Linguistic areas link objects, actions and emotions with nouns, verbs and adjectives/adverbs. But there do seem to be some questions about the nature of these links and to what extent they differ in terms of circuitry.

Then there’s the question of temporary associations. And deliberate associations. Remembering where I left my car keys is not the same as recording the fact that divorce is unpleasant. The latter is a semantic memory and the former is episodic, or at least declarative. Tomorrow I’ll put my car keys down somewhere else, and that will form a new association. The old one may still be there, in some vague sense, and I may one day develop a sense of where I usually leave my keys, but in general these associations are transient (and all too easily forgotten).

Binding is a form of temporary association. That ball is green; there’s a person to my right; the cup is on the table.

And attention is closely connected with the formation or heightening of associations. For instance, in Creatures I had a concept called “IT”. “IT” was the object currently being attended to, so if a norn shifted its attention, “IT” would change, and if the norn decided to “pick IT up”, the verb knew which noun it applied to. In a more sophisticated artificial brain, this idea has to be more comprehensive. We may need two or more ITs, to form the subject and object of an action. We need to remember where IT is, in various coordinate frames, so that we can reach out and grab IT or look towards IT or run away from IT. We need to know how big IT is, what color IT is, who IT belongs to, etc. These are all associations.
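A minimal sketch of what a more comprehensive IT might look like (the field names are my own invention, not anything from Creatures): a pair of attention slots, each bundling the temporary associations a behavior needs.

```python
# Two attention slots (subject and object), each an "IT" carrying the
# temporary associations behaviors need: positions in more than one
# coordinate frame, size, color, owner. All names are illustrative.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AttendedObject:                                # one "IT"
    label: str
    retina_pos: Tuple[float, float] = (0.0, 0.0)     # IT in visual-field coordinates
    world_pos: Tuple[float, float] = (0.0, 0.0)      # IT in reach/flee coordinates
    size: float = 1.0
    color: str = "unknown"
    owner: Optional[str] = None

@dataclass
class Attention:
    subject: Optional[AttendedObject] = None   # who acts
    obj: Optional[AttendedObject] = None       # what is acted upon

att = Attention(
    subject=AttendedObject("norn", world_pos=(2.0, 0.0)),
    obj=AttendedObject("ball", world_pos=(5.0, 0.0), color="green"),
)
print(f"pick up the {att.obj.label} at {att.obj.world_pos}")
```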

Perhaps there are large-scale functional associations, too. In other words, data from one space can be associated with another space temporarily to perform some function. What made me think of this is the possibility that we have specialized cortical machinery for rotating images, perhaps developed for a specific purpose, and yet I can choose, any time I like, to rotate an image of a car, or a cat, or my apartment. If I imagine my apartment from above, I’m using some kind of machinery to manipulate a particular set of data points (after all, I’ve never seen my apartment from above, so this isn’t memory). Now I’m imagining my own body from above – I surely can’t have another machine for rotating bodies, so somehow I’m routing information about the layout of my apartment or the shape of my body through to a piece of machinery (which, incidentally, is likely to be cortical and hence will have self-organized using the same rules that created the representation of my apartment and the ability to type these words). Routing signals from one place to another is another kind of association.
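In code the routing idea is almost trivial to state (which is partly the point – it’s the neural version that’s mysterious): one rotation routine, never specialized for apartments or bodies, applied to whichever point set gets routed to it. The coordinates below are made up.

```python
# Shared "mental rotation" machinery: a single routine applied to whatever
# 2-D point set attention routes through it.
import numpy as np

def rotate(points, degrees):
    """Rotate any 2-D point set about the origin."""
    t = np.radians(degrees)
    r = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return points @ r.T

apartment = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])  # floor-plan corners
body = np.array([[0.0, 0.0], [0.0, 1.7]])                               # feet to head

print(rotate(apartment, 90))   # the same machinery...
print(rotate(body, 90))        # ...with different data routed through it
```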

Language is interesting (I realize that’s a bit of an understatement!). I don’t believe the Chomskyan idea that grammar is hard-wired into the brain. I think that’s missing the point. I prefer the perspective that the brain is wired to think, and grammar is a reflection of how the brain thinks. [noun][verb][noun] seems to be a fundamental component of thought. “Janet likes John.” “John is a boy.” “John pokes Janet with a stick.” Objects are associated with each other via actions, and both the objects and actions can be modulated (linguistically, adverbs modulate actions; adjectives modify or specify objects). At some level all thought has this structure, and language just reflects that (and allows us to transfer thoughts from one brain to another). But the level at which this happens can be very far removed from that of discrete symbols and simple associations. Many predictions can be couched in linguistic terms: IF [he][is threatening][me] AND [I][run away from][him] THEN [I][will be][safe]. IF [I][am approaching][an obstacle] AND NOT ([I][turn]) THEN [I][hurt]. But other predictions are much more fluid and continuous: In my head I’m imagining water flowing over a waterfall, turning a waterwheel, which turns a shaft, which grinds flour between two millstones. I can see this happening – it’s not just a symbolic statement. I can feel the forces; I can hear the sound; I can imagine what will happen if the water flow gets too strong and the shaft snaps. Symbolic representations and simple linear associations won’t cut it to encode such predictive power. I have a real model of the laws of physics in my head, and can apply it to objects I’ve never even seen before, then imagine consequences that are accurate, visual and dynamic. So at one level, grammar is a good model for many kinds of association, including predictive associations, but at another it’s not. Are these the same processes – the same basic mechanism – just operating at different levels of abstraction, or are they different mechanisms?
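The symbolic end of that spectrum, at least, is easy to sketch. In this toy version each rule is a set of [subject][verb][object] triples implying a predicted triple (negation is crudely flattened into a triple, and none of the fluid waterwheel stuff survives – which is rather the point):

```python
# Predictions as [subject][verb][object] production rules:
# IF all condition triples hold THEN predict the consequent triple.
def matches(conditions, situation):
    """A rule fires when every one of its condition triples is in the situation."""
    return all(c in situation for c in conditions)

rules = [
    ([("he", "is threatening", "me"), ("I", "run away from", "him")],
     ("I", "will be", "safe")),
    ([("I", "am approaching", "an obstacle"), ("I", "do not", "turn")],
     ("I", "will be", "hurt")),
]

situation = {("he", "is threatening", "me"), ("I", "run away from", "him")}
for conditions, prediction in rules:
    if matches(conditions, situation):
        print("predict:", prediction)   # predict: ('I', 'will be', 'safe')
```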

These predictions are conditional. In the linguistic examples above, there’s always an IF and a set of conditionals. In the more fluid example of the imaginary waterfall, there are mathematical functions being expressed, and since a function has dependent variables, this is a conditional concept too. High-level motor actions are also conditional: walking consists of a sequence of associations between primitive actions, modulated by feedback and linked by conditional constructs such as “do until” or “do while”.
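A toy version of that, with feedback reduced to a single position signal:

```python
# Walking as a conditioned loop over a primitive action: repeat the stride
# "while not there yet". Real feedback (terrain, balance) would modulate
# the stride inside the loop; this is only a cartoon of the structure.
def walk_to(target, position, step=0.5):
    while abs(target - position) > step:                  # "do while not there yet"
        position += step if target > position else -step  # one primitive stride
    return position

print(walk_to(5.0, 0.0))   # stops within one stride of the target
```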

So, associations can be formed and broken, switched on and off, made dependent on other associations, apply specifically or broadly, embody sequence and timing and probability, form categories and hierarchies or link things without implying a unifying concept. They can implement rules and laws as well as facts. They may or may not be commutative. They can be manipulated top-down or formed bottom-up… SOMEHOW all this needs to be incorporated into a coherent scheme. I don’t need to understand how the entire human brain works – I’m just trying to create a highly simplified animal-like brain for a computer game. But brains do some impressive things (nine-tenths of which most AI researchers and philosophers forget about when they’re coming up with new theories). I need to find a representation and a set of mechanisms for defining associations that have many of these properties, so that my creatures can imagine possible futures, plan their day, get from A to B and generalize from past experiences. So far I don’t have any great ideas for a coherent and elegant scheme, but at least I have a list of requirements, now.
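Turning that requirements list into a single data structure gives something like the sketch below – a checklist in code, not a design, and all the field names are provisional:

```python
# One link record carrying the properties an association seems to need,
# straight from the requirements list above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Association:
    source: str
    target: str
    kind: str                         # "feature", "affective", "predictive", ...
    bidirectional: bool = True        # turkey <-> Christmas
    strength: float = 1.0             # probability / confidence
    delay: float = 0.0                # timing, for sequences and melodies
    condition: Optional[str] = None   # dependent on other associations
    active: bool = True               # can be switched on and off
    transient: bool = False           # where-I-left-my-keys vs. divorce-is-unpleasant

links = [
    Association("dots", "line", kind="feature", bidirectional=False),
    Association("dentist", "pain", kind="affective"),
    Association("state A", "state B", kind="predictive", delay=0.5, strength=0.9),
]
print(links[2])
```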

I think the next thing to do is think more about the kinds of representation I need – how best to represent and compute things like where the creature is in space, what kind of situation it is in, what the properties of objects are, how actions are performed. Even though I’d like most of this to emerge spontaneously, I should at least second-guess it to see what we might be dealing with. If I lay out a map of the perceptual and motor world, maybe the links between points on this map (representing the various kinds of associations) will start to make sense.

Or I could go for a run. Yes, I like that thought better.

“Memristor minds: The future of artificial intelligence”

Ever the guardian of my intellectual development, Norm sent me a link to a New Scientist article on memristors, today. I’d never heard of them, but the article was interesting for both good and bad reasons, so I thought I’d share my impressions.

Here’s a short summary: The memristor is apparently a “missing component” in electronics, hypothesized by Leon Chua in 1971, to sit alongside the well-known resistor, capacitor and inductor, but at the time it was unknown as a physical device. In the early years of this century, Stan Williams developed a nanoscale device that he believed fit the bill. And then Max di Ventra, a physicist at UCSD, linked this work with some research on a slime mould, which showed that it is capable of “predicting” a future state in a periodic environmental change. He suggested that this is a biophysical equivalent to a memristor. The article then goes on to suggest that neural synapses work the same way, and so this must surely be the big missing insight that has prevented us from understanding the brain and creating artificial intelligence.

But the article troubles me for a couple of reasons and I can’t help thinking there’s a serious problem with the way physicists and mathematicians tend to think about biology. Firstly, here’s a quote from the article:

“To Chua, this all points to a home truth. Despite years of effort, attempts to build an electronic intelligence that can mimic the awesome power of a brain have seen little success. And that might be simply because we were lacking the crucial electronic components – memristors.”

Hmm… So exactly what years of effort would that be, then? VERY few people have ever attempted to “build an electronic intelligence”. We simply don’t do that – we use computers! 

Sure, a computer is an electronic device, but the whole damned point of computers is that they are machines that can emulate any other machine. So they can emulate memristors too. They don’t actually have to be MADE of memristors in order to do that – they simply simulate them in code, like they simulate everything else. And I’m sure I’ve many times written code that has a state memory like a memristor. I didn’t know there was a named physical device that works in the same way, and it’s very interesting that there is, because it might give us new analogies and insights. But if I needed something to behave like that I could have coded it any time I wanted to. It’s meaningless to say that we’ve been stuck because we lacked a new type of electronic component. Only a physicist would confuse hardware and software like that! It boggles my mind.
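To make the point concrete, here’s a software memristor in a dozen lines, using the standard linear-drift idea (the parameter values are invented for illustration): the resistance depends on the charge that has flowed through the device, so it “remembers” its history.

```python
# A toy memristor: resistance interpolates between r_on and r_off according
# to an internal state x, and x drifts with the charge that flows through.
class Memristor:
    def __init__(self, r_on=100.0, r_off=16000.0, k=10000.0):
        self.r_on, self.r_off, self.k = r_on, r_off, k
        self.x = 0.5                   # internal state ("memory"), 0..1

    def step(self, voltage, dt):
        r = self.r_on * self.x + self.r_off * (1 - self.x)
        i = voltage / r
        self.x = min(1.0, max(0.0, self.x + self.k * i * dt))  # state drifts with charge
        return i

m = Memristor()
for _ in range(100):
    m.step(1.0, 1e-3)     # push current one way for a while...
print(round(m.x, 3))       # ...and the state has moved: the device remembers
```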

And then I’m a little perplexed about a missing electronic component we DO know about. Maybe someone can help me with this? Chua’s work apparently hypothesized the memristor as a fourth component to add to the existing resistor, capacitor and inductor. But where’s the transistor? Isn’t that a fundamental component? It’s a resistor, after a fashion, but surely it’s a fundamental building block in its own right, because it has the ability to allow a voltage to modulate a current – without transistors almost no electronic circuits would do anything useful!

I hate to say it, but I wonder if that’s a comment on the minds of physicists, too? It’s the transistor (or vacuum tube) that makes the difference between a static circuit, for which the mathematics of physics works well, and a dynamic circuit, for which it doesn’t. The capacitor is a dynamic system too, but only for a moment and then it settles down into something nice and easy to write equations for. It’s only when you add transistors and their consequent ability to generate feedback that the system really starts to dance and sing, and then the equations stop being much use.

The real glaring insight that electronics gives us, in my not-always-terribly-humble opinion, is the realization that sometimes classical science has a bad habit of being obsessed with “quantities” and ignoring or even sometimes denying the existence of “qualities”. Two electronic systems might have precisely the same mass, complexity and constituent substances, for instance, but be wired up in a different arrangement, producing radically different results. The reductionism implicit in much of physics can’t “see” the difference between the two circuits – because it’s something purely qualitative, not quantitative.

It’s the same with the brain. The reason we don’t understand the brain has NOTHING of significance to do with some “missing component”. It has nothing to do with quantum uncertainty or any other reductionistic claptrap. The reason we don’t understand the brain is that we don’t understand the CIRCUIT. We don’t understand the system as a whole. Memories, thoughts, ideas and the Self are not properties of the brain’s components, they are properties of its organisation. It’s very hard to understand organisations – I could easily give you an electronic circuit diagram out of context and it might take you days or weeks to figure out how it works and exactly what it does. But you could know everything you need to know about the properties of its resistors, capacitors, inductors and transistors, and even its memristors. You could weigh it and measure it all you liked and it would tell you nothing. Organisation is not amenable to understanding using the tools of classical physics.

Life and mind are qualitative constructs. Looking for some special elixir vitae is completely missing the point. The article is very interesting and I plan to look up more information. Memristors may well provide a useful analogy that gives us some hints and insights about localised properties of brains, and that may steer us towards making more sense of the circuitry of intelligence. However, to suggest that we’ve got it all wrong because we didn’t have the right component in our toolbox for making our “electronic brains” is just nonsense. Electronic components are the province of physics, but electronic design is not. Synapses may be the province of physics too, but biology is not. Biology is a branch of cybernetics, which has a very different mindset (or did until physicists took it over and turned it into information theory).

P.S. I sort of see why transistors are missing now – at the mathematical level of description of Chua’s work, I guess a transistor is just a resistor, because both of them convert between voltage and current. Time only really enters into the equations as an integral, and the deeply nonlinear consequences of the transistor don’t really apply when you consider it as a single isolated component. But that was my point – once you wire them up into circuits all of this is pretty much irrelevant. It’s circuits that matter for intelligence. Minds are emergent properties of organisations. Looking for a “magic” component is just a modern-day form of vitalism.