The opacity of the brain
February 3, 2009
“No amount of knowledge about the hardware of a computer will tell you anything serious about the nature of the software that the computer runs. In the same way, no facts about the activity of the brain could be used to confirm or refute some information-processing model of cognition.” —Max Coltheart
If that’s true then an awful lot of people are wasting their time. But you only have to look at it for a few seconds to see that it contains a logical fallacy.
In the case of the computer we’re asked to look at our knowledge of its hardware, but in the case of the brain we’re told to look at its activity, as if the two were equivalent. Clearly they’re not, so what we can or cannot discern about the software of a computer from knowledge of its hardware has nothing to say about whether we can understand the “software” of the brain from knowledge of its activity.
It may be true that knowledge of the brain’s hardware tells us nothing, but activity is very different. And I’m not convinced the premise is true anyway.
It is fair to say (I think) that you can tell nothing about the software running on a computer by looking at its hardware alone. But if you’re allowed to interfere with that hardware and see the consequences then in principle you might. It becomes a black box problem. Could we infer what the software is doing by, say, disabling individual bytes of memory, or preventing the multiplier from working during specific cycles? As long as we’re allowed to see the outcome in terms of changed behavior then we could, in theory at least, amass enough evidence to infer what’s going on in this way. It would be astoundingly difficult but not impossible.
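To make that concrete, here's a toy sketch (Python, with a deliberately trivial made-up "machine" – none of this is a real architecture) of the kind of inference such lesion experiments license:

```python
# A toy "lesion study" on a black-box machine: we can't read its code,
# but we can knock out components one at a time and watch the output.
# Everything here is invented for illustration.

def make_machine(multiplier_ok=True, adder_ok=True):
    """A hidden program, out = (a * b) + c, built from two 'components'."""
    def run(a, b, c):
        product = a * b if multiplier_ok else 0        # lesioned multiplier
        return product + c if adder_ok else product    # lesioned adder
    return run

probe_inputs = [(2, 3, 4), (5, 5, 1), (7, 0, 9)]

machines = [("intact", make_machine()),
            ("multiplier lesioned", make_machine(multiplier_ok=False)),
            ("adder lesioned", make_machine(adder_ok=False))]

for name, machine in machines:
    print(name, [machine(a, b, c) for a, b, c in probe_inputs])

# intact:              [10, 26, 9]
# multiplier lesioned: [4, 1, 9]  -> outputs collapse to c alone
# adder lesioned:      [6, 25, 0] -> outputs collapse to a*b alone
```

The point isn't the arithmetic, it's the logic: each lesion produces a characteristic dissociation in behavior, and the pattern of dissociations constrains what the hidden program must be doing – here, enough to infer out = a*b + c without ever reading the code.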
Neuroscientists use natural or artificial brain lesions in a similar way, in the hope that they can infer the processing principles at work by seeing how behavior changes when different parts are damaged. It hasn’t worked yet, but it has produced many insights and suggestive facts.
Like I say, activity is different from hardware, though. As a rough illustration, in the Olden Days we used to use a transistor radio to find out if our computer was stuck in a loop – you could tell from the repetitive pattern of buzzes caused by radio interference from the processor. If you can tell where activity is taking place, relate that to knowledge of the hardware, see how activity in one place gives rise to changes in activity elsewhere, and watch all this in relation to outward behavior then you do have, I submit, enough information in principle to infer what’s going on.
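Here's that transistor-radio trick restated as a sketch: an invented "activity trace" plus a naive period detector standing in for the radio. Both are made up for illustration – real activity analysis would need something like autocorrelation on noisy data:

```python
# You don't read the program; you listen to its activity and notice
# that the pattern repeats. The trace and detector are illustrative only.

def find_period(trace, max_period=50):
    """Return the smallest period at which the trace repeats exactly, or None."""
    n = len(trace)
    for p in range(1, max_period + 1):
        if all(trace[i] == trace[i + p] for i in range(n - p)):
            return p
    return None

# A machine stuck in a loop emits the same burst of activity over and over.
stuck = [3, 1, 4, 1, 5] * 20
healthy = list(range(100))

print(find_period(stuck))    # 5    -> repetitive buzz: probably stuck in a loop
print(find_period(healthy))  # None -> no short repeating pattern
```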
You probably can't deduce it but you can infer it. This is why computational neuroscience and biologically inspired AI are so important – they allow us to play with inferences and see if they work. Even though I think Coltheart is being devious in his use of a non sequitur, I'm sure his sentiment is a fair warning – it's a real long shot to deduce how the brain works by simply watching it or even interfering with it. But if the evidence you can glean from analysis gives you some ideas about what might plausibly be happening, then you can build a model – even a toy model – and see if your model shows similar behavior. This gives you insight into fundamental principles that might be at work. And once you have principles you can work out the practice.
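As a minimal illustration of that model-and-compare loop (everything here is fabricated – the "observations", the hypothesised principle, and the tolerance are all stand-ins):

```python
# Suppose observation hints that a system's response grows with the log of
# its input. Posit that principle, build the simplest model embodying it,
# and check it against the (here, fabricated) observations.

import math

observed = [(1, 0.0), (10, 2.3), (100, 4.6), (1000, 6.9)]  # (stimulus, response)

def model(stimulus, gain=1.0):
    """Hypothesised principle: response = gain * ln(stimulus)."""
    return gain * math.log(stimulus)

def fits(observations, tolerance=0.05):
    return all(abs(model(s) - r) <= tolerance for s, r in observations)

print(fits(observed))  # True -> the principle survives this test
```

If the toy model's behavior matches, the principle survives to face harder tests; if it doesn't, you've learned something cheaply.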
I often liken the brain to a jet engine: you can describe how a jet engine works in a single sentence, but if you look at one you can see it takes a hell of a lot of pipes and valves to actually put this into practice. In the case of the brain all we can see is the mass of pipes and valves. Underlying them is probably a comparatively simple engineering principle, which needs all that complexity solely to make itself practical. But as yet we don't have a clue what that underlying principle is, and it's hard to work back from the practice to the principle. Neural modelling works the other way round – starting from a hunch about the possible principle and then looking to see whether the pipes and valves you need to implement it correspond to things we know about the brain.
One last observation: it's an assumption that the brain is divided into hardware and software in a similar sense to that in a computer, and I doubt it's a fair one. The software that a digital computer can run is totally flexible – a 3D game bears little resemblance to a word processor. At the highest levels, the brain is like that too – I can do almost any kind of computation using my brain (although it's very instructive to think about what you can't think about!). But that software runs in the virtual computer of the mind, which itself is the software running on the physical computer of the brain. And the software of the brain is much less flexible. The brain isn't simply a mass of undifferentiated neurons that can be assembled or used in an infinite variety of ways (the way program code can be stored in any bytes of memory). Development and exposure to the environment wire up the brain (particularly the cortex) in a relatively limited set of variations on a common theme. And it's the wiring that controls function – the software is immanent in the hardware. I don't believe the two are as easily isolated as they are in a PC.
In principle you can take a program that runs on a PC and transfer it to a Mac, or to a radically different architecture – that's the principle of computational functionalism. But it's misleading to assume that you could do the same (at least so easily) with the brain. The software of the brain is much more strongly tied to the hardware.
It could easily be that the hardware of the brain is fundamental to the operation of intelligence (at least as we know it). Certainly brains are the only intelligent systems we know of (I accept that an insect’s brain is very different from a dog’s brain, but I’m talking about a kind of intelligence that insects and their like don’t exhibit). That’s why I have no truck with the idea that AI can ignore the brain completely and just look for an “algorithm of thought”. It may well be that once we understand the brain we can find other hardware platforms and other symbolic representations that can achieve the same ends. But just trying to emulate higher thought without reference to the biology that gives rise to it is, to say the least, a long shot. We can emulate individual aspects of thought that way but they never seem to integrate or generalise. This is why symbolic AI is barking up the wrong tree, imho.