The opacity of the brain

My friend Norm at Machines Like Us likes to keep my brain awake, so today he sent me a quote and asked whether I agree with it. It’s such an interesting quote that I thought I’d share it.

“No amount of knowledge about the hardware of a computer will tell you anything serious about the nature of the software that the computer runs. In the same way, no facts about the activity of the brain could be used to confirm or refute some information-processing model of cognition.” —Max Coltheart

If that’s true then an awful lot of people are wasting their time. But you only have to look at it for a few seconds to see that it contains a logical fallacy.

In the case of the computer we’re asked to look at our knowledge of its hardware, but in the case of the brain we’re told to look at its activity, as if the two were equivalent. Clearly they’re not, so what we can or cannot discern about the software of a computer from knowledge of its hardware has nothing to say about whether we can understand the “software” of the brain from knowledge of its activity.

It may be true that knowledge of the brain’s hardware tells us nothing, but activity is very different. And I’m not convinced the premise is true anyway.

It is fair to say (I think) that you can tell nothing about the software running on a computer by looking at its hardware alone. But if you’re allowed to interfere with that hardware and see the consequences then in principle you might. It becomes a black box problem. Could we infer what the software is doing by, say, disabling individual bytes of memory, or preventing the multiplier from working during specific cycles? As long as we’re allowed to see the outcome in terms of changed behavior then we could, in theory at least, amass enough evidence to infer what’s going on in this way. It would be astoundingly difficult but not impossible.
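
To make that concrete, here’s a toy version of such a lesion experiment. Everything in it is invented purely for illustration: the “machine” is a pipeline of three hidden stages whose code we pretend we can’t read, but which we can knock out one at a time while comparing behavior on a fixed set of probe inputs.

```python
# A toy black-box lesion experiment. The pipeline stands in for "hardware"
# whose code the experimenter cannot read; all they can do is disable one
# internal stage at a time and compare behavior on probe inputs.

HIDDEN_STAGES = [            # unknown to the experimenter
    lambda x: x * 2,         # stage 0: a "multiplier"
    lambda x: x + 10,        # stage 1: an "adder"
    lambda x: min(x, 50),    # stage 2: a "limiter"
]

def run_machine(x, lesion=None):
    """Run the black box; if lesion is set, that stage just passes input through."""
    for i, stage in enumerate(HIDDEN_STAGES):
        x = x if i == lesion else stage(x)
    return x

probes = [0, 5, 20, 40]
print("intact:  ", [run_machine(p) for p in probes])
for i in range(len(HIDDEN_STAGES)):
    print(f"lesion {i}:", [run_machine(p, lesion=i) for p in probes])

# intact:   [10, 20, 50, 50]
# lesion 0: [10, 15, 30, 50]   <- growth rate halves: stage 0 multiplies
# lesion 1: [0, 10, 40, 50]    <- everything drops by 10: stage 1 adds an offset
# lesion 2: [10, 20, 50, 90]   <- large values escape: stage 2 clips at 50
```

Each knockout leaves a characteristic fingerprint on behavior, and with enough probes you could reconstruct what every stage must be doing without ever reading a line of its code.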

Neuroscientists use natural or artificial brain lesions in a similar way, in the hope that they can infer the processing principles at work by seeing how behavior changes when different parts are damaged. It hasn’t worked yet, but it has produced many insights and suggestive facts.

Like I say, activity is different from hardware, though. As a rough illustration, in the Olden Days we used to use a transistor radio to find out if our computer was stuck in a loop – you could tell from the repetitive pattern of buzzes caused by radio interference from the processor. If you can tell where activity is taking place, relate that to knowledge of the hardware, see how activity in one place gives rise to changes in activity elsewhere, and watch all this in relation to outward behavior then you do have, I submit, enough information in principle to infer what’s going on.
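
Here’s a toy version of that radio trick, with fabricated activity traces: we never see the program itself, only a record of which region is “active” at each step, and a stuck loop betrays itself as a repeating pattern, just as it did as buzzing on the radio.

```python
# Detect the repetitive "buzz" of a loop from an activity trace alone.
# The trace records only *where* activity occurred at each step, not what
# the program was doing; the traces below are fabricated for illustration.

def find_period(trace, max_period=16):
    """Return the shortest period at which the tail of the trace repeats,
    or None if no repetition shows up."""
    for period in range(1, max_period + 1):
        window = trace[-3 * period:]          # demand three repeats of the cycle
        if len(window) < 3 * period:
            continue
        if all(window[i] == window[i + period]
               for i in range(len(window) - period)):
            return period
    return None

# A healthy run wanders through many regions; a stuck one cycles forever.
healthy = ["init", "read", "parse", "eval", "write", "read", "gc", "eval", "halt"]
stuck   = ["init", "read"] + ["fetch", "compare", "jump"] * 12

print(find_period(healthy))  # None: no tight repeating pattern
print(find_period(stuck))    # 3: the loop "buzzes" with a period of three steps
```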

You probably can’t deduce it but you can infer it. This is why computational neuroscience and biologically inspired AI are so important – they allow us to play with inferences and see if they work. Even though I think Coltheart is being devious by his use of a non sequitur, I’m sure his sentiment is a fair warning – it’s a real long shot to deduce how the brain works by simply watching it or even interfering with it. But if the evidence you can glean from analysis gives you some ideas about what might plausibly be happening, then you can build a model – even a toy model – and see if your model shows similar behavior. This gives you insight into fundamental principles that might be at work. And once you have principles you can work out the practice.
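
That modelling loop is simple enough to sketch as well. Everything here is made up for illustration: an invented “observed” behavior series and two candidate principles, each scored against the data to see which is plausible.

```python
# Guess a principle, build a toy model of it, and score it against observation.
# The observed series and both candidate principles are fabricated examples.

observed = [0.0, 0.5, 0.75, 0.875, 0.9375]       # behavior we "measured"

def leaky_integrator(steps, rate):
    """Candidate principle A: output creeps toward 1.0 at a fixed rate."""
    y, out = 0.0, []
    for _ in range(steps):
        out.append(y)
        y += rate * (1.0 - y)
    return out

def linear_ramp(steps, slope):
    """Candidate principle B: output simply grows in a straight line."""
    return [slope * t for t in range(steps)]

def score(model_output):
    """Mean squared error between model behavior and the observations."""
    return sum((m - o) ** 2 for m, o in zip(model_output, observed)) / len(observed)

print("integrator:", score(leaky_integrator(5, rate=0.5)))   # ~0.0  -> fits well
print("ramp:      ", score(linear_ramp(5, slope=0.25)))      # ~0.03 -> wrong principle
```

The model that fits isn’t thereby proven correct, but it earns the next step: derive a fresh prediction from it and go back to the data.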

I often liken the brain to a jet engine: you can describe how a jet engine works in a single sentence, but if you look at one you can see it takes a hell of a lot of pipes and valves to actually put this into practice. In the case of the brain all we can see is the mass of pipes and valves. Underlying them is probably a comparatively simple engineering principle, which needs all that complexity solely to make itself practical. But as yet we don’t have a clue what that underlying principle is, and it’s hard to work back from the practice to the principle. Neural modelling works the other way round – starting from a hunch about the possible principle and then looking to see if the pipes and valves you need to implement it correspond to things we know about the brain.

One last observation: it’s an assumption that the brain is divided into hardware and software in a similar sense to that in a computer, and I doubt it’s a fair one. The software that a digital computer can run is totally flexible – a 3D game bears little resemblance to a word processor. At the highest levels, the brain is like that too – I can do almost any kind of computation using my brain (although it’s very instructive to think about what you can’t think about!). But that software runs in the virtual computer of the mind, which itself is the software running on the physical computer of the brain. And the software of the brain is much less flexible. It’s not simply that the brain is a mass of undifferentiated neurons that can be assembled or used in an infinite variety of ways (like storing program code symbols in bytes of memory). Development and exposure to the environment wire up the brain (particularly the cortex) in a relatively limited set of variations on a common theme. And it’s the wiring that controls function – the software is immanent in the hardware. I don’t believe the two are as easily isolated as they are in a PC.

In principle you can take a program that runs on the PC and transfer it to a Mac, or a radically different architecture – that’s the principle of computational functionalism. But it’s misleading to assume that you could do the same (so easily) to the brain. The software of the brain is much more strongly tied to the hardware.

It could easily be that the hardware of the brain is fundamental to the operation of intelligence (at least as we know it). Certainly brains are the only intelligent systems we know of (I accept that an insect’s brain is very different from a dog’s brain, but I’m talking about a kind of intelligence that insects and their like don’t exhibit). That’s why I have no truck with the idea that AI can ignore the brain completely and just look for an “algorithm of thought”. It may well be that once we understand the brain we can find other hardware platforms and other symbolic representations that can achieve the same ends. But just trying to emulate higher thought without reference to the biology that gives rise to it is, to say the least, a long shot. We can emulate individual aspects of thought that way but they never seem to integrate or generalise. This is why symbolic AI is barking up the wrong tree, imho.

Thanks, Norm.



10 Responses to The opacity of the brain

  1. stevegrand says:

    P.S. I’ve had a look around for discussion of this and it seems Coltheart’s intention was to criticize neuroimaging’s ability to tell us anything about the mind. I can’t read the original paper without a subscription, but that seems to be his goal.

    I agree that neuroimaging hasn’t told us very much yet, but that’s not an objection in principle. As far as I can see, an integrated study of the behavior of the brain under different conditions has every chance of inspiring models, which can then be tested and compared to the pipes and valves of reality. Once a model starts to look sufficiently good it’ll make testable predictions and our ability to understand the brain will rapidly improve.

    Now that doesn’t in itself tell us how the mind works (which seems to be Coltheart’s issue). But once you have a working model of the brain then that brain ought itself to have a mind, and now you have far more intimate access to its mechanism (as well as insight into the machine that gives rise to it). I can’t see any reason why you can’t bootstrap your way from an understanding of the brain itself, inspired by observations (from more than just imaging alone), to a full understanding of the mind. You can’t DEDUCE any of this, necessarily, but deduction is not the only way to do science.

    Nor is the brain a computer, with the mind an entirely arbitrary program running on it. That kind of functionalist view is maybe what made the quote seem reasonable.

  2. Brandon says:

    I think this is a really interesting question, and while I don’t disagree with your observation I wonder how far down that particular rabbit hole we really need to go.

    My question would be, if inference from brain activity was a stepping stone to construction of a mind, shouldn’t we be further along by now?

    I know that there have been great strides in the field of AI over the years – in my opinion your work is some of the most interesting – but we seem to know an awful lot about the brain, and we’re still a long way from developing sentient machines.

    I’m not saying you’re incorrect, I just wonder what your thoughts are on the lack of progress relative to our continually improving understanding of how the brain functions.

    P.S. I’m a big fan of your books, they have helped me focus my thoughts for my own AI projects. Any plans for another volume?

    • stevegrand says:

      Hi Brandon,

      > My question would be, if inference from brain activity was a stepping stone to construction of a mind, shouldn’t we be further along by now?

      Yes, if it was just a matter of inference. But I think induction is more than inference and inference takes more than knowledge. For instance, what crucial piece of data did Mankind lack before the wheel was invented? Nothing – for thousands of years we knew all we needed to know about rotation, friction, the challenges of pulling large objects, etc. But the “aha” moment still had to enter someone’s head for it all to suddenly make sense. I think the brain is somewhat like that – we already know a vast amount about it but I don’t think we *understand* it at all, except in the simplest aspects. Science has a heavily used euphemism: “we don’t yet fully understand”, which in practice means “we don’t have a ****ing clue”. We don’t yet fully understand the brain.

      What misleads people, imho, is the missing first sheet of paper. If a neuroscientist was asked to write down all that is known about the brain it would run to many volumes. But there would be one page missing – the page that describes the principles of the design. Now if that same neuroscientist was asked to BUILD a brain, unless they had that first sheet of paper they wouldn’t know where to start because they’d lack the understanding that would enable them to cover the gaps in their knowledge. I may be wrong about this, and if so then IBM’s Blue Brain project will be the thing that proves my mistake, but I don’t think you can slavishly copy something in the absence of knowing every single detail of its design UNLESS you know what it is designed to do and how it is trying to go about it.

      Yet if you look at the data long enough, some hunches about what might be going on will eventually come to you, and if you test out those hunches by building models and deriving predictions, one of them may, sooner or later, cause the whole thing to crystallize (and perhaps that’s what will really happen with the IBM project). Rather like the wheel, the moment that this happens is not dependent on the discovery of some crucial piece of data but the recognition of some useful analogy or metaphor. So I think the brain will be our guide but the key insight requires a breakthrough, and that could happen today or it could take another century.

      Personally I don’t think AI has made great strides, except in illuminating our ignorance. Turing was wrong in his prediction that we’d regard computers as intelligent by the year 2000, not because we’ve moved forward more slowly than we expected but because we totally failed to appreciate what intelligence is really like.

      Glad you liked my books. I can kind of feel another one starting to come on, but I don’t have the right idea just yet. Maybe I should blog about it?

  3. JJ says:

    I suspect a lot of the problem with AI is a “can’t see the wood for the trees” issue. Doubtless, there were millions of people who had the technical knowledge to invent the wheel but didn’t, and “the wheel” was invented (and used) thousands of times before any common consensus was agreed as to what it was, how it should be described, and what it should be used for (surely, we haven’t come up with all the answers to that, even now).

    With software that allows the brain to build and rebuild its own hardware, and hardware which (simply through use in a particular way) alters the behaviour of its own software, it will never be straightforward to come up with a simple equation to describe all “minds”. However, it would be hardly any easier to describe the logic of all the travel we undertake by treating all vehicles ever built as eyes/ears/etc., all roads/railways/canals/cyclepaths/flightpaths/etc. as neural networks, and all journeys ever taken as brain activity.

    The thing that has always impressed me about your approach to AI is the willingness to appreciate that the learning process has an important part to play in the creation of the mind, rather than assuming that it is possible to program a somehow perfectly finished mind from scratch. Now, I tend to think that the learning process itself is the essential part of the mind, just as the process of living life is the essential part of a journey: the crucial thing is understanding where the tools and their raw materials end and the process begins.

    I and my business partner have brains which work in very different ways. We may have the same complement of basic organs to assist us, but life and/or genetics has given us very different ways of assessing, describing, and interacting with the world around us. Either of us could describe the other’s brain as flawed for its failure to do the job we believe is relevant in the way we believe is correct, but both do jobs that are equally important (allowing us to function equally effectively in society), and both have their strengths as well as weaknesses. Personally, I believe that the only way to fully understand why we think as we do is to accept a simplicity and commonality of process, and then look at the inputs and outputs which lead us to operate the individual way we do (whether directly learnt in this lifetime, or “pre-programmed” in ancestors’ lifetimes). A simple example of this, which humans seem to have an innate problem with, would be to accept that all races and cultures are the same in terms of humanity, and then come to understand that commonality through learning about the processes which lead to behavioural and physical differences in local sub-sets, rather than assuming major fundamental differences between the sub-sets just because of different appearance or dining ritual.

    Turning that around, the wheel and the jet engine can be looked at as very simple ideas, later enhanced by the “mass of pipes and valves” (or pneumatic tyres!) to make the initial idea suitable to a particular environment… is there really so much fundamental difference between a fruit fly’s brain and ours, or is it just that extra piping? (I speak as the grandson of the chief engineer on the Meteor jet project, and great-nephew of the inventor of the spare wheel, so I have no intention of under-valuing jets or wheels in that comment!)

  4. maninalift says:

    What I find most fascinating about the brain is that the (unknown) principles of its functioning are apparently developmental. It is the set of conditions which forms a useful computer through the process of its growth and its infant and adult development. Its fundamental organizational principle is to develop and learn, not to form robust layers of abstraction as a modern computer does.

    Perhaps the correct analogy is to call the genetically encoded stuff “hardware” and the rest “software”? Not only do I think that is not a good mapping but I don’t even think it is a meaningful statement. DNA is not a blueprint but a part of a machine that builds things, it is not so easy to separate the mechanism from the design.

    Back to the main point — “Yet if you look at the data long enough, some hunches about what might be going on will eventually come to you” — that is really the key, and more than this you might not answer your original question but you may instead come to conclusions that profoundly change your perspective of that question. Consider the controversial topic of consciousness. A person of a spiritual disposition may say that one can never understand consciousness through scientific investigation and this may be true in the same sense that it is not possible to prove that 1+1=2. Nevertheless in the last 50 years the insights into consciousness from a wide range of scientific disciplines have been profound though they may not seem to take one any closer to a “what is”.

    • stevegrand says:

      Yes, I think there’s no hard line at all between development and learning. Each is the execution of a self-organising process, each involves genes switching on and off, and each involves interaction with the environment (so much of the brain’s structure at all levels comes about as a result of structured sensory input from the world or the lack of same).

      Sadly, both are also hard to reverse. We’re gradually discovering how vital it is that children receive the right kinds of care and the right experiences at the right time if their brains are to construct themselves successfully. “Modern parenting” doesn’t take enough account of this, imho.

  5. “It could easily be that the hardware of the brain is fundamental to the operation of intelligence (at least as we know it). Certainly brains are the only intelligent systems we know of (I accept that an insect’s brain is very different from a dog’s brain, but I’m talking about a kind of intelligence that insects and their like don’t exhibit). That’s why I have no truck with the idea that AI can ignore the brain completely and just look for an “algorithm of thought”. It may well be that once we understand the brain we can find other hardware platforms and other symbolic representations that can achieve the same ends. But just trying to emulate higher thought without reference to the biology that gives rise to it is, to say the least, a long shot. We can emulate individual aspects of thought that way but they never seem to integrate or generalise. This is why symbolic AI is barking up the wrong tree, imho.”

    I am so very glad I stumbled onto the Machines Like Us website where I saw your interview and followed it to your sites.

    I am a physician and medical director for a neuropsychiatric center. I consult to a small tech startup company interested in my clinical background and understanding as a neuropsychiatrist.

    Separately I consult to a clinical research laboratory interested in developing new forms of clinical lab tests for diagnosis and treatment of various neuropsychiatric and neuroendocrine and neuroimmune disorders — and they are compulsively looking at the pathways and circuits interlinking these systems — and interested in me from a clinical neuroscience perspective.

    I have become highly personally interested in robotics & AI – having been a student of complex systems and all of the above-mentioned interconnections from a clinical perspective.

    Tilden’s little bug-like devices do demonstrate a particular flavor of primitive & instinctual intelligence; yet these are the instinctual acts of cells and people alike (well, people under certain circumstances).

    I wish you luck as I think you are on a better path than most; it seems the robotics company I am consulting with is pursuing a similar path.

    I hope you have fun on your trek through the states and if you find the time I would hope you might indulge in some deep banter with a naive novice.

    😉

    • stevegrand says:

      Hi Desiderio,

      Thanks. By all means – I’ll banter about robotics with a naive novice if you’ll banter about clinical neuroscience with an equally naive novice! I’m on the road a lot at the moment but fire away and I’ll get back to you whenever I can.

      Cheers,
      Steve

  6. Steve – your humility in the face of the depth and breadth of your accomplishments (Creatures, Lucy et al.) and writings (both books and what I have been able to glean on the internet) is refreshing. I do not have my hands on hardcopies of your books yet (ordered on Amazon yesterday), but I must say I am eagerly awaiting them.

    I am on my way myself to Vegas with the wife for a mix of business and pleasure at a training in nutrition, clinical endocrinology and Age-Management Medicine – I’ll have more time when I get back to do more of my own leisurely reading.

    I am saddened to read of other things in your life at the moment and can only wish you the best – this from a man who has accomplished much by the measuring-rods of others and carries a lot of ‘issues’ over the trouble a failed first marriage has caused some of his children.

    I am VERY interested in your understanding and designs of robotic mechanisms with ‘brains’ modeled after our own “columnar-units design” — did you get the idea after reading V. Mountcastle? Or anyone else? What shape does it take in hardware, or is it only virtually represented in software on your parallel-connected computers in Lucy’s brain?

    Again – Best regards,

    Desiderio
