“Memristor minds: The future of artificial intelligence”

Ever the guardian of my intellectual development, Norm sent me a link to a New Scientist article on memristors today. I’d never heard of them, but the article was interesting for both good and bad reasons, so I thought I’d share my impressions.

Here’s a short summary: The memristor is apparently a “missing component” in electronics, hypothesized by Leon Chua in 1971 to sit alongside the well-known resistor, capacitor and inductor, but at the time it was unknown as a physical device. In the early years of this century, Stan Williams developed a nanoscale device that he believed fit the bill. And then Max di Ventra, a physicist at UCSD, linked this work with some research on a slime mould, which showed that it is capable of “predicting” a future state in a periodically changing environment. He suggested that this is a biophysical equivalent to a memristor. The article then goes on to suggest that neural synapses work the same way, and so this must surely be the big missing insight that has prevented us from understanding the brain and creating artificial intelligence.
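For anyone who, like me, hadn’t come across the idea before, here’s the level of description Chua was apparently working at. This is just my own paraphrase of the standard presentation, not something the article spells out: everything hinges on four basic circuit variables and the pairwise relationships between them.

```latex
% Chua's four circuit variables: charge q, current i, voltage v and flux phi,
% where charge and flux are the time-integrals of current and voltage:
\[ q = \int i \, dt, \qquad \varphi = \int v \, dt \]
% Each classical component ties one pair of these variables together:
\[ \text{resistor: } dv = R \, di, \qquad
   \text{capacitor: } dq = C \, dv, \qquad
   \text{inductor: } d\varphi = L \, di \]
% which leaves exactly one pairing unclaimed -- the relationship the memristor supplies:
\[ \text{memristor: } d\varphi = M(q) \, dq \]
```

Notice that time only ever enters through those integrals, which is relevant to the P.S. at the end of this post.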

But the article troubles me for a couple of reasons and I can’t help thinking there’s a serious problem with the way physicists and mathematicians tend to think about biology. Firstly, here’s a quote from the article:

“To Chua, this all points to a home truth. Despite years of effort, attempts to build an electronic intelligence that can mimic the awesome power of a brain have seen little success. And that might be simply because we were lacking the crucial electronic components – memristors.”

Hmm… So exactly what years of effort would that be, then? VERY few people have ever attempted to “build an electronic intelligence”. We simply don’t do that – we use computers! 

Sure, a computer is an electronic device, but the whole damned point of them is that they are machines that can emulate any other machine. So they can emulate memristors too. They don’t actually have to be MADE of them in order to do that – they simply simulate them in code, like they simulate everything else. And I’m sure I’ve many times written code that has a state memory like a memristor. I didn’t know there was a named physical device that works in the same way, and it’s very interesting that there is, because it might give us new analogies and insights. But if I needed something to behave like that I could have coded it any time I wanted to. It’s meaningless to say that we’ve been stuck because we lacked a new type of electronic component. Only a physicist would confuse hardware and software like that! It boggles my mind.
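Just to show how unremarkable that is in software, here’s the sort of thing I mean. It’s a toy sketch, not a faithful model of Williams’s device, and every name and number in it is plucked out of the air purely for illustration:

```python
# A toy "memristor-like" state memory in software: its resistance depends on
# the history of charge that has flowed through it.  The device physics is
# ignored entirely; the point is only that history-dependent resistance is
# trivial to express in code.

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16000.0, q_max=1e-4):
        self.r_on = r_on      # resistance when fully "written" one way
        self.r_off = r_off    # resistance when fully "written" the other way
        self.q_max = q_max    # charge needed to swing between the two states
        self.q = 0.0          # accumulated charge -- this IS the memory

    def resistance(self):
        # interpolate between r_off and r_on according to the accumulated charge
        x = self.q / self.q_max
        return self.r_off + (self.r_on - self.r_off) * x

    def step(self, voltage, dt):
        """Apply a voltage for dt seconds, update the state, return the current."""
        i = voltage / self.resistance()
        self.q = min(max(self.q + i * dt, 0.0), self.q_max)  # saturate, like a real device
        return i


m = ToyMemristor()
for _ in range(2000):
    m.step(voltage=1.0, dt=1e-3)   # push current through it for a couple of seconds
print(m.resistance())              # the resistance now "remembers" that history
```

Any program with a variable that integrates its own input and feeds back into its behaviour already has this character, which is why the “missing component” framing seems so odd to me.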

And then I’m a little perplexed about a missing electronic component we DO know about. Maybe someone can help me with this? Chua’s work apparently hypothesized the memristor as a fourth component to add to the existing resistor, capacitor and inductor. But where’s the transistor? Isn’t that a fundamental component? It’s a resistor, after a fashion, but surely it’s a fundamental building block in its own right, because it has the ability to allow a voltage to modulate a current – without transistors almost no electronic circuits would do anything useful!

I hate to say it, but I wonder if that’s a comment on the minds of physicists, too? It’s the transistor (or vacuum tube) that makes the difference between a static circuit, for which the mathematics of physics works well, and a dynamic circuit, for which it doesn’t. The capacitor is a dynamic system too, but only for a moment and then it settles down into something nice and easy to write equations for. It’s only when you add transistors and their consequent ability to generate feedback that the system really starts to dance and sing, and then the equations stop being much use.

The real glaring insight that electronics gives us, in my not-always-terribly-humble opinion, is the realization that sometimes classical science has a bad habit of being obsessed with “quantities” and ignoring or even sometimes denying the existence of “qualities”. Two electronic systems might have precisely the same mass, complexity and constituent substances, for instance, but be wired up in a different arrangement, producing radically different results. The reductionism implicit in much of physics can’t “see” the difference between the two circuits – because it’s something purely qualitative, not quantitative.

It’s the same with the brain. The reason we don’t understand the brain has NOTHING of significance to do with some “missing component”. It has nothing to do with quantum uncertainty or any other reductionistic claptrap. The reason we don’t understand the brain is that we don’t understand the CIRCUIT. We don’t understand the system as a whole. Memories, thoughts, ideas and the Self are not properties of the brain’s components, they are properties of its organisation. It’s very hard to understand organisations – I could easily give you an electronic circuit diagram out of context and it might take you days or weeks to figure out how it works and exactly what it does. But you could know everything you need to know about the properties of its resistors, capacitors, inductors and transistors, and even its memristors. You could weigh it and measure it all you liked and it would tell you nothing. Organisation is not amenable to understanding using the tools of classical physics.

Life and mind are qualitative constructs. Looking for some special elixir vitae is completely missing the point. The article is very interesting and I plan to look up more information. Memristors may well provide a useful analogy that gives us some hints and insights about localised properties of brains, and that may steer us towards making more sense of the circuitry of intelligence. However, to suggest that we’ve got it all wrong because we didn’t have the right component in our toolbox for making our “electronic brains” is just nonsense. Electronic components are the province of physics, but electronic design is not. Synapses may be the province of physics too, but biology is not. Biology is a branch of cybernetics, which has a very different mindset (or did until physicists took it over and turned it into information theory).

P.S. I sort of see why transistors are missing now – at the mathematical level of description of Chua’s work, I guess a transistor is just a resistor, because both of them convert between voltage and current. Time only really enters into the equations as an integral, and the deeply nonlinear consequences of the transistor don’t really apply when you consider it as a single isolated component. But that was my point – once you wire them up into circuits all of this is pretty much irrelevant. It’s circuits that matter for intelligence. Minds are emergent properties of organisations. Looking for a “magic” component is just a modern-day form of vitalism.

About stevegrand
I'm an independent AI and artificial life researcher, interested in oodles and oodles of things but especially the brain. And chocolate. I like chocolate too.

29 Responses to “Memristor minds: The future of artificial intelligence”

  1. Pius Agius says:

    Hi Steve

    I saw this article and another a few months back and I am still trying to wrap my head around the concept. When you dabble in electronics you get so familiar with the resistor, capacitor, inductor and especially the transistor. I get the notion that I can do something with them because I can actually see and feel them. The transistor, or “transfer resistor”, can be configured in so many wondrous ways. From what I can understand, the memristor is a much smaller component and companies are already getting patents on the use of these new things.

    The new device can ‘remember’ the previous state so it does act like a memory. Since it is smaller it can pack more memory units per volume. However, the authors of this article left it at that. The step from a memory unit to an intelligence is an awfully large step. Though they may mimic the human synapse in this one aspect, how do you use them to build a brain? Human synapses have the ability to connect to other synapses in an array of staggering complexity.
    If, and this is a big if, the pattern of synapses could be found in one instant, it would have changed the next, since the patterns are in a constant state of dynamic flux.

    I agree, Steve: this new memory unit is interesting, but intelligence is more than mere memory. My concern is that we have not figured out the blueprints of the basic complexity of the intelligence we are trying to emulate, and looking for a quick fix will not do.

    One final note: I noticed that your analysis of time entering as an integral was quite astute and a rather refreshing mathematical observation.

    I really enjoy the long articles you write for they really make me think.

    Take care

    Pius

    • stevegrand says:

      Goodness, I’ve never said anything mathematically astute in my life before! 😉

      “Transfer resistor” – I’d quite forgotten that’s where the name came from. Thanks for the reminder!

  2. Alex says:

    Hi Steve:
    Nice article. I agree with you completely that a missing component in electronics is not the secret ingredient for intelligence, because the memristor can be simulated in a computer. But it might be a good metaphor for finding similar structures in the brain, and it might enable us to make computers that are better at simulating the brain (per unit volume): you need more than one transistor (and memory) to simulate one memristor. Also, as you mentioned, the transistor is kind of like a resistor but you can’t combine a resistor, capacitor or inductor to make one; that’s why they call it a new component.

    • stevegrand says:

      Hi Alex,
      Thanks. Yes, I agree that memristors are a Good Thing – especially if they can produce high-density NV memory – and perhaps even better if they could be used to create large switching grids, so that we can emulate dendritic migration in hardware. I didn’t mean to sound negative about the concept itself. It was only the ideological reductionism and implicit vitalism that irritated me.

      > the transistor is kind of like a resistor but you can’t combine a resistor, capacitor or inductor to make one; that’s why they call it a new component.

      That was the thing, though. Chua’s work DIDN’T count the transistor as a fundamental component. But eventually I squinted at the equations and decided it didn’t count at the level of description he was using. Since he was only considering each component in isolation, responding to one of the fundamental variables of charge, voltage, current and flux, he would presumably have counted the transistor as a resistor. Or maybe two resistors, depending on whether you visualize the base being tied and the collector-emitter acting as a fixed resistance to a current, causing a voltage drop, or a voltage change at the base causing a current change between collector and emitter. Either way, the most important thing about transistors is invisible at his first-order, isolated level of description and only becomes apparent when you connect the components together into a circuit. And then the equations become vastly more nonlinear. So I THINK I get why he didn’t regard the transistor as a fundamental component, even though electronics would be pretty dull without it.

  3. Alex says:

    Hi Steve:
    Thanks for the prompt response. I just saw the interview you did for FlagNews. Some of the things you were talking about like living in a simulated world in the near future reminded me of Jeff Hawkins’ work:

    Click to access Numenta_HTM_Concepts.pdf

    What is your opinion about his ideas?

    • stevegrand says:

      Hi Alex,

      You’ve done some great projects! Particularly enjoyed the cell-formation sim. I think I may have heard one of them speak about their model once, but I’m not sure.

      > Some of the things you were talking about like living in a simulated world in the near future reminded me of Jeff Hawkins’ work

      Heh! They reminded Jeff Hawkins of Jeff Hawkins’ work too! We swapped books a few years ago and were surprised at how similar our ideas were. But since then my perception is that he and his team developed their HTM model and have focused on it as a specific, pragmatic tool, while I’m still contemplating my navel and looking for bigger fish to fry.

      I disagree with him about robots – I think we need that reptilian brain and its behavioural interaction with the world in order for the neocortex to develop and be useful – that’s where I think he’s going down a more pragmatic route than I would want to (but then that’s why he’s rich and I’m not).

      And I also have a hunch that the way the cortex makes predictions varies quite a lot according to domain, even though the machinery that makes it all possible is fairly consistent across the whole cortical surface. I think maybe the uniformity of cortex doesn’t so much hint at a uniform memory model but a uniform proto-machine that can learn to predict different things in somewhat different ways. But that’s just my hunch and they might not even be different things, just different ways of looking at the same thing.

      I completely agree with him that the purpose of the system is prediction, but I see that prediction as serving a homeostatic purpose – servoing the brain to minimise the difference between the state it expects to be in and the one it wants to be in. That behavioral output seems to be missing from his team’s work if I understand it right. But it’s some years since I paid attention to it.
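      To give a flavour of what I mean by that servoing, here’s a deliberately crude sketch. Every name and number in it is invented purely for illustration – it isn’t anybody’s model, least of all Jeff’s:

```python
# Crude sketch of prediction serving a homeostatic purpose: the agent acts so
# as to shrink the gap between the state it predicts it will be in and the
# state it wants to be in.  The linear "world", the gain and the names are all
# made up for illustration.

def predict(state, action):
    """Toy forward model: what state do I expect next, given this action?"""
    return state + action

def homeostatic_servo(state, desired, gain=0.5, steps=20):
    for _ in range(steps):
        expected = predict(state, action=0.0)   # where do I end up if I do nothing?
        error = desired - expected              # difference between expected and wanted
        action = gain * error                   # behave so as to reduce that difference
        state = predict(state, action)          # (here the world happens to match the model)
    return state

print(homeostatic_servo(state=0.0, desired=10.0))   # converges on the desired state
```

      The point is just that prediction isn’t an end in itself; it only earns its keep by driving behaviour.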

  4. Stark says:

    If you chaps have some time, I thought this video was pretty interesting. His neural network imagines what textual characters should look like.

    I’d go into more detail, but I’m still at work!
    Love to hear your thoughts. 🙂

    • stevegrand says:

      Phew! I understood a few bits of that, but 90% of it whizzed past me. But then classical Machine Learning is a pretty esoteric subject and not my field. It was definitely a departmental seminar for ML specialists!

      I like the fact that it doesn’t need labels, and especially that it’s a generative system. I like that it’s stochastic, too. It certainly seems to work well. It’s pretty abstract math, though – I’m not sure it tells us anything much about biology. The lack of lateral connections in the hidden layers is very unlike the brain, for a start. But then he’s not trying to understand the brain, so that’s fine.

      Great talk. Loved the jokes. Wish I was smart enough to understand the math! Thanks for the link!

  5. Stark says:

    Haha yeah, the terminology really kills me!
    It seemed very interesting in its uses. Perhaps it doesn’t work how a real brain works, but I’m wondering if it could be a step towards a machine intelligence? Obviously there should be more than one way to make an omelette, right?

    I kind of look at intelligence like Jeff Hawkins does, that it’s possible to create an intelligence without emotions or motives — just as a prediction agent. I’m wondering if a being is capable of creativity and consciousness with prediction alone?

    Obviously it needs a way to interact with its world and the ability to imagine cause/effect, but why don’t you think intelligence could be black-boxed off away from the normal experiences an animal has, compared to say a database agent or weather predictor? Perhaps the difference is you want to make animal-like consciousness/sentience?

    I think this is the bit that confuses me when you say intelligence can only come within an environment where the creature has access to an interactive environment. What if this environment was just “all in its head”? 🙂

  6. Stark says:

    (Sorry for the spelling/grammar mistakes, I forgot to proofread after writing it)

    (And) Of course, it comes down to the philosophy of what Self and Consciousness involves!

  7. Stark says:

    Also, if you don’t want to explain it all again, as I’m sure other people have asked you — point me to a url. 😀

    (I’ve read your books too, so, maybe I’m just bumbling!)

  8. stevegrand says:

    Hey Stark,

    Well I think you could make a system that solves limited problems in limited ways without having a body and an environment, but I think the result would hardly deserve the name intelligence. Simple prediction of the next step in a weather system, given the current conditions, doesn’t count as intelligence in my book and certainly not understanding, any more than linear regression is intelligence.

    It’s basically the same as the Symbol Grounding Problem in classical AI.

    What does “conservative” mean? What does “hot” mean? All our internal concepts (which become the building blocks of our mental model of the world and the tools for us to think with) form a massive hierarchy. As you go down the hierarchy the concepts get more primitive (so “hot” is more primitive than “conservative”, and “conservative” is understood purely by association with a bunch of lower concepts related to change or lack of it, for instance). Ultimately ALL of these conceptual towers have to rest on real, physical sensations. That’s why it’s so vital that young children get to play with sand and mix paints together – it’s from the primitives that they build out of real sensations and primary experiences that all understanding arises. Educationally speaking, if a child misses a critical class of experiences at the right time (insufficient social contact as a baby, say) then the whole of their mental development can be seriously disrupted.

    So if you don’t have a body, how do you ground your concepts? Sure, you can be fed primitive but still abstract data about, say, oil exploration and probably come up with some sensible higher-level thoughts about where to look for oil. But even if you analyse what a human mineral expert does when prospecting for oil, you’ll find that he/she makes enormous use of primary experience (concepts of up, down, viscous, hard, permeable and so on). Exploring the world with our hands is crucial to almost all aspects of intellectual development.

    Yes, we live in a simulation of the world inside our heads, but this simulation only comes into being, and crucially only assembles itself in a powerful way, because we have direct physical experience of the real world. Once you have this simulation it is perfectly possible to disengage from reality and think using your own imagination. In a sense that’s part of what consciousness is – it’s what occurs when we manipulate and react to this internal world – but you only have that imagination in the first place because you were able to build such a model from raw personal experience.

    That’s my view, anyway.

  9. Stark says:

    That’s very cool/insightful!
    So if I was fed years of raw personal experience via a simulated environment as a child, I could possibly pass as understanding the rules of whatever reality I was simulated in. In a sense, this is still a prediction mechanism perhaps?

    It does, however, appear that there are drives that motivate the agent to explore, play with the environment, and learn as many ‘rules’ as it can about reality. These drives could be as simple and low level as “I’m cold, do I remember a hotter area?”

    It’s the more complex drives that I wish I could understand, such as our needs to paint, or create. I guess it comes down to a drive to explore by manipulating the environment? I do understand that we create our imagination by directly manipulating our environments.

    Could it then be said that I ‘understand’ an object once I have an ‘imaginational’ model inside my head of it, and that I can then predict what would happen if I move it around/rotate/manipulate/transform it? (hopefully I can use the term predict here as a meaning for: if I cause this, this will be the effect)

    Very cool view, Steve 🙂

  10. Stark says:

    I think what it comes down to for me is what you consider to count as using your “hands” in said environment, and what you consider the environment to be.

    If it’s, say, a weather data environment, you as the agent should be able to push weather data around and manipulate it in ways that let you understand what would happen if the clouds moved to the east and curled up into a tight cyclone. (Possibly according to your view of what it would mean to be an intelligent weather forecast prediction system.)

    The problem comes down to weather (a typo, but a fun one) we know how to simulate the weather in the first place for an agent to “understand” and “manipulate” its environment. Or, as the old prediction method goes, ‘teach’ it as much weather data as possible and hope it finds patterns in the data, without the ability to manipulate it directly.
    These patterns it recognizes, I guess, would be the manipulations it ‘understands’ or could ‘deal’ with if it tried manipulating its environment via imagination?

    I might just be jumbling ideas together, sorry. 🙂

  11. stevegrand says:

    I don’t think you’re jumbling them together – I think they’re all inextricable. That’s the point, really.

    You’re absolutely right about the drive to manipulate. I don’t believe being passive is enough. I think a newborn baby can learn very little about her world passively – it’s just a meaningless blur of moving colours. It’s only the discovery that she can CHANGE things, and that this is repeatable, which really triggers the bootstrapping process of learning. She has to discover that what she DOES affects what she SEES to even know that she exists. I suspect this may even be the root of the instinct to be creative. Babies are born scientists – they learn by experiment.

    As you say, in some sense you could get a “weather baby” to manipulate an environment made from a model of weather and learn that way (although to do that you would have to solve the very problem that your intelligent system is supposed to be solving, because you’d need an accurate model of weather!). But even then, just picking words from your own sentence, how does this system understand what “curled up” and “tight” mean? These are primary experiences that come from the real world. They become ANALOGIES for thinking about other things, and that’s the basis for generalisation, which is the basis of intelligence.

    Yes, I think you’re spot on when you say understanding is the ability to manipulate a mental model of something and make predictions from it. Somehow that mental model is produced by analogies that are formed from other analogies, all the way down until the lowest analogies are drawn from primary experience. The big question to me is, what form of representation does the brain use to build these higher concepts from lower ones? How does it construct its models? We can predict what a conservative politician might think about, say, global warming, because our concept of “conservative” is built upon simpler ideas and ultimately grounded in physical concepts like “resistant to change”. But the way these become linked and categorised so as to be powerful analogies is still a mystery.

    Incidentally, people are fond of talking about the great divide between art and science, but to me they’re both aspects of the same thing: Both reason using analogies. There’s a continuum of analogy, from loose metaphors at one end to formal mathematical models at the other. Artists tend to hover at one end of the spectrum and scientists at the other, but truly creative people can move freely up and down – they’re grounded enough to build real working models but free enough to explore fanciful metaphors and see the hidden truths.

  12. Matt Griffith says:

    Figure I better just start using my real name, but, this is Stark.

    Interesting! And I do agree that, though I am using concepts such as “tight” and “curled up”, these are analogies I am used to; but also, I do believe an agent in this simulated universe could make analogies based on things it perceives in its universe all the same. It may not specifically be called “curling up”, but an agent could generate a ‘syntax’ of patterns within the hierarchy of its understanding/recognition for the changes it perceives in its reality.

    I’ve always wondered if we manipulate the world, or if the world manipulates us — as a child, what is that first defining moment where we question what it is we’re doing, and why we’re doing it?
    As a baby, when our parents flip us on our backs, do we begin to realize that it is not the world spinning, but we ourselves? It seems like we correlate different sense inputs at each new moment to try to understand if they are, in fact, related to each other. My sense of gravity changes as my parents put me on my back. When I crawl, I can move forward, but I digress…

    Ah, it’s all word games, I know — I just wish I knew the key ingredient that sparks something like this, it’s just a very fascinating subject!
    I’m positive it may just be the simplest concept ever, too! Instinctive drive? Genetics? Caring Parents/Guardians?

    To me it seems like cognition and intelligence are entirely related to this drive to just push and collide into the boundaries of the laws of this universe. As we learn, we are always pushing the goal post further ahead, never believing that there is actually a wall to hit. 🙂

    As a child, our internal model of the universe doesn’t even include us, but as the world around us manipulates our sensory inputs, boundaries are pushed outward. As we grow up, these boundaries begin to stretch out in different dimensions as it were.

    The way I always envision it is like a string of interconnected nodes, like a neural network that slowly adds growing concepts to one another. Another way to look at it is like an automaton for a grammar parsing engine for a compiler or interpreter. All states lead back to the start statement, and end on “what we know so far about this universe”.

    It would be amazing to find out that our brain’s connections are actually, in fact, this internal model of the universe, spreading out and connecting what it knows about us. All nodes point back to “Self”. 🙂

    Ah, if only I could tangibly simulate all this with a dynamic and stimulating universe.

    Yes, and I also agree about what it means to be both a scientist and artist. I could easily consider myself both: I compose music in one sitting, and consider the universe in the next! 🙂

    I do enjoy this conversation immensely! Thank you for your time, Steve!

  13. stevegrand says:

    Hey Matt! Nice to know your name – I never feel comfortable calling people by their aliases (although maybe now’s the time to admit that Steve isn’t my real name – I’m called Stephanie, I’m nine years old and I live in Brisbane. Just kidding!)

    You may be right that an intelligence could infer the more general concepts that we call “curling up” etc. even from a limited environment like a weather model. When my son was tiny he amazed me by using the word “platform” to mean something that supports something else, because I’m pretty sure he’d only ever heard the word used to mean a thing you stand on at a railway station. Somehow he managed to see the wider meaning. I’m just not convinced that you could BECOME very intelligent in such an impoverished system, especially if none of it MEANS anything to you – your comfort and survival don’t depend on it. But I guess that’s for us to prove, one way or the other!

    > I’ve always wondered if we manipulate the world, or if the world manipulates us — as a child, what is that first defining moment where we question what it is we’re doing, and why we’re doing it?

    Ah, now that’s very deep! Some of us never question it at all, which is why we foolishly continue to believe in free will and gods. I mean, if god does something, was it for a reason? If not, then he’s just being random and that’s not very impressive. But if so, then he’s as trapped as the rest of us! Even his effects have lawful causes – he did it “because”…

      But at a less philosophical level it sounds like you’re a cyberneticist like me – interested in circular causality. It’s perfectly true to say that the general of an army is “commanded” by his troops – what they do determines what he has to do, just as much as his orders affect them. All this notion of linear causality and top-down control is a pathetic myth that we should have grown out of years ago. Everything affects everything else in a mass of loops. A friend of mine eloquently used to describe it as “the interconnectedness of all things” (or, in a different and rather more popular book, “the Whole Sort of General Mish-mash”)!

  14. Matt Griffith says:

    Yeah, generally I don’t use my real name online, but I’m just used to pre-social networking internet. 🙂
    Nice to meet you, Stephanie! Haha, anyway. 🙂

    The child mind is an amazing thing to observe. The manipulation of concepts, thoughts into words, words back into concepts, and so on is just such an amazing thing to watch!
    I will always say the child mind is the best first step in the creation of intelligence of some form.

    I don’t at all disagree that there is a great fundamental challenge for a “weather baby” to work without already having the data required for prediction! The more I asked myself about it, the more your point has made sense. You took the words right out of my mouth/brain. I easily take back my original statement! 🙂

    Here’s a bit of a rant:
    The more I think about the cyclic causality and the manipulator/manipulatee hypothesis, I realize that there is no one simple answer. But perhaps a possibility would be, “you are the environment, and the environment is you”. In other words, perhaps what I’m trying to get at is that, in order to live in say, a simulated environment, and to understand that environment, the environment needs to ‘take’ things from you, energy, matter, and sensitivity. There is a cyclic continuum of input and output.
    For a cognitive/conscious entity, it’s almost as if the universe boot-starts you by merely existing and manipulating you for energy.

    You begin to recognize patterns in how the universe manipulates you, but what makes you wish to stretch your body and manipulate the environment yourself? Is it just random electrical impulses in your growing nervous system when you as an infant were in the womb? Perhaps the kicking in the womb is these original boundary-learning manipulations for a new mind, and they grow onward ad-infinitum, I nudge you, you nudge me, I nudge the universe, it nudges me back. I could rant on for ages, sorry! If you hadn’t noticed, I just want to infer these things in ways that might spark your own imagination too. 🙂

    I definitely consider myself a cyberneticist. However, in many senses, I’m a very bottom-up person, all my concepts and ideas are composed from building blocks (Did you play with LEGOs as a kid? Haha)

    It goes the same for how I envision simulating a reality that can be manipulated — it must be built from building blocks itself. I do NOT wish to simulate our own reality, but a reality with the same kind of “mathematical soundness” and “dynamic” experiences ours can create. Only, far simpler, and manageable within the confines of our computers.

    The way I envision life — if I were to simulate a lifeform, all its parts *have* to work too. There should be a reason it has legs, and a heart, and so on. Which, to me, means that the lifeform should be no different in simulation than the rest of the environment.

    The likely path then would be through evolutionary means, and so I must have very tiny building blocks. Which, for our computers, does not seem reasonable at this stage. This is the depressing bit! :/

    I must stop myself to say that that isn’t to say that we cannot create higher-level intelligence without the bottom-up approach, it’s just one of my desires, to be the deist God. 😉

    “Everything affects everything else in a mass of loops”. Yes, your book, Creation, highlighted this point very effectively to me. It was quite a moment for me to realize I had been looking at this concept so many times on my computer screen with automata, digital logic, let alone in the real world! (The day I saw my first real tornado was both an amazing and frightening thing to behold, the beauty and elegance of all the interacting feedback loops)

  15. Daniel Mewes says:

    Although it’s about a different topic,
    New Scientist just published another news story under the title “Blindspot shows brain rewiring in an instant” http://www.newscientist.com/article/dn17464-blindspot-shows-brain-rewiring-in-an-instant.html

    I wonder if they are just hunting for headlines?
    This “news” seems to be complete nonsense to me. Or I just do not understand it…

    If I understand it right, it says that with one eye patched, the brain uses a different method to “fill up” the eye’s blind spot than it does if both eyes’ data is available.

    I wonder what the point is about this? Say I hold one cube with both of my hands, which is just too big to be wrapped by one hand alone. I will be able to recognize that it is a cube by its shape, because I’m “filling up” the missing information of my right hand with the information from my left hand. If I hold it in one hand only, I might still be able to recognize it as a cube or as a cuboid at least. This is because my brain “fills up” the missing data now by extrapolating on the edges.

    I think combining information from different sources and extrapolating when some information is not available is what our brain always does. The article states “Our brains can […] to compensate for a break in incoming data, suggesting they are even more flexible than previously thought”. Hmm, I wonder what they were previously thinking about how (un-)flexible our brains are?

    Later in the news it says “[…] that it must be due to the brain redirecting signals through pre-existing circuits rather than forging new connections”. Wasn’t this supposed to be about “rewiring”?

    The article also uses the formulation “[…] the neurons […] compensate[d] by stealing data from neighbouring neurons”. Yea, always those criminals in our heads… 😉

    Seriously, I don’t have the impression that the New Scientist is very scientific actually…

    But perhaps most articles are better, I don’t know.

  16. stevegrand says:

    Hey Daniel,

    Thanks for the link. Hmmm…

    I’ve tried to replicate the experiment but I can’t get it to work. They have a good blindspot demo on their website at http://web.mit.edu/bcs/nklab/media/blindSpotDemo.shtml

    It’s easy to get the face to disappear into my blindspot but I’ve tried presenting various shapes at various sizes near to the face and I don’t see any extension develop. However, I know roughly what they mean and I thought that was already a well-established fact. We certainly know that temporary scotomas “heal” themselves in the space of a few seconds and our visual field returns to looking seamless even though parts of it are actually blind.

    I think the journalist (and possibly the scientists too!) are using their words a bit too loosely and/or making the assumption that changing the receptive fields of neurons is equivalent to rewiring. I don’t think it is.

    It seems to be generally assumed that visual cortex is quite tightly wired – that each neuron is receptive to a tiny fraction of the visual field. But I don’t think this can be the case (I base my objection on some inferences about the distribution of orientation-selective cells). It seems to me that the visual information entering cortex is immediately and substantially “blurred” (convolved), so that each neuron receives signals from a wide angle on the visual field. The deeper you go, the more blurred the signals get, until soon each pixel in the input has some impact on almost every neuron.

    It’s very hard to visualise computation in such “convolution space” but I’ve a feeling that’s what the brain really works with, not nice neat, sharp images. I spent some time trying to come up with convolution computers – that’s what got me interested in holography – but I don’t have any clever solutions yet.

    So, neurons near the blindspot will receive inputs from a wide area around it, as well as from the opposite eye. Normally the valid signals from the opposite eye will hold sway, but if you prevent them filling in the missing image, signals that are ALREADY present from other parts of the visual field will soon start to have an influence.

    No rewiring needs to take place for this – just some retuning of synapses. And what it demonstrates, I think, is that the “wiring” of the cortex is “held in shape” very dynamically. It relies on receiving valid data from the outside world to hold it together. So as soon as some normal visual signals go missing the system readjusts itself to respond to the signals that were always there but had been suppressed by the “proper” ones.

    I guess that’s what’s really happening. The brain is very plastic, but not by “rewiring” itself, just by adjusting the volume of signals that were always there but had been focused out by lateral inhibition. It sounds like a clever self-repair mechanism but personally I’d say it is really a consequence of the way that normal visual stimulation “tunes” and “focuses” the brain to respond to some signals and not others. Remove those signals and others become revealed. But the wiring that they arrived along was there all the time.
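      If it helps, here’s the same point as a toy calculation. The numbers and the normalisation rule are invented for illustration and there’s nothing anatomical about them:

```python
import numpy as np

# A "neuron" receives weighted input from a centre source and from four
# surround sources that were always wired to it.  With everything active, the
# strong centre input dominates after normalisation (standing in for lateral
# inhibition); silence the centre and the surround signals take over, with no
# change to the weights or the wiring at all.

weights = np.array([1.0, 0.3, 0.3, 0.3, 0.3])      # centre + four surround connections

def response(inputs):
    drive = weights * inputs
    return drive / (drive.sum() + 1e-9)            # divisive normalisation

normal  = np.array([1.0, 1.0, 1.0, 1.0, 1.0])      # both eyes supplying valid data
patched = np.array([0.0, 1.0, 1.0, 1.0, 1.0])      # the "proper" centre signal removed

print(response(normal))    # the centre term dominates the response pattern
print(response(patched))   # same wiring, but the surround now carries the response
```

      Swap the weights for synaptic gains and the normalisation for lateral inhibition and that’s roughly the story I’m suggesting – retuning, not rewiring.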

    Does that make sense? It’s tough for New Scientist journalists – I’ve known quite a few of them. They have to cover a wide range of subjects and they have to accept what they’re told by researchers who are trying to simplify things on the phone, and often are themselves victims of unquestioned assumptions – especially about the brain!

    Incidentally, I have a scotoma (damage) in my left eye, although I can only detect it if I move something small into that spot. But sometimes I can see strobing patterns – a certain category of swirls – which I take to be my visual cortex (or maybe retina) “freewheeling” in the absence of visual data. Interestingly, I once wrote a neural network simulation based on loosely coupled “springs” that generated almost exactly the same patterns!

  17. Matt Griffith says:

    Hopefully I didn’t give you too much to think about! Sorry if so! Haha 🙂

  18. stevegrand says:

    Way too much! 🙂 And I’ve got other people telling me highly technical stuff about embryology, abiogenesis and cell signalling, kids to help with their robotics projects, friends to help with their websites and TV shows, an article to write for an encyclopedia… It’s great – I have every excuse in the world for why I’m not getting any code written! It’s not my fault at all. Honestly, it’s not…

    Still, excuses don’t pay the bills. I guess I’d better make some progress today.

  19. Pingback: Memristor : Le composant manquant ? | traffic-internet.net

    • stevegrand says:

      They do sound interesting, and arranged as a matrix they could potentially be used to simulate synaptic weights and dendritic migration, so when we finally get some clue about the computational architecture of brain circuits we may be able to use memristors to implement NNs. But it still seems to be stretching it to say they offer any solutions to AI or neuroscience in themselves, slime moulds notwithstanding. Thanks for the link!

  20. Ben says:

    Hi Steve – first off, since comments are disabled on your most recent post, I wanted to let you know that at least one person is still listening! I haven’t made a comment since your very first post (Harmony of the neurons), but I’ve enjoyed reading your blog in the intervening time, albeit silently.

    Anyhow, I saw an article on MLU (http://machineslikeus.com/news/cat-brain-step-toward-electronic-equivalent) that reminded me of this post, so I thought I’d see if you had any reactions to it. I won’t pretend to have read the source article, but based on what I read in the summary article, it sounds like a bunch of hooey to me. After all, it’s trivially easy to build a simulation that, for instance, shows spike-timing-dependent plasticity – it’s something my labmates and I do on a daily basis. Perhaps you’ll have insight into why having physical components that show this sort of behavior is superior to their lowly simulated counterparts, and why this therefore naturally leads to electronic systems as smart as cats.

    As far as I can see, even if I gave a researcher perfect electronic replicas of neurons, complete with every mechanism you could want (genetic and epigenetic behavior, all forms of LTP and LTD, and all the other intricately interwoven processes that operate from scales of milliseconds up to months or longer), it’s not as though a cat brain would emerge therefrom automatically. I suspect I won’t meet any disagreement here, but it seems like the organization is at least as important as the constituent parts. Anyhow, just wanted to see if you had any new thoughts on the matter, and remind you to post more updates!

    • stevegrand says:

      Hi Ben,

      Yes, sorry – there was a reason for the suppressed comments. It shouldn’t have to happen again. Thanks for staying tuned! Actually I’m about to start blogging about a design for an artificial brain for a game I’m just getting back to writing.

      Hmm, yes. At one level I’d be delighted to see physical implementations of neurons, using memristors or otherwise (I don’t see anything particularly special about memristors). Hardware is so much faster than software, and in principle there are spatial things that we might be able to do relatively easily in hardware that are very difficult in software (e.g. an electrical current can find the least-resistive path from A to B in a physical medium without effort, and a laser can perform a 2D Fourier Transform instantaneously, but in software these things take some computing). But so far we have so little clue what these neurons should look like that it’s rather a sideline task developing technologies to make them possible. And a brain made from simulated neurons has the same ontological status as a physical one, so it’s nonsense for anyone to imply that AI will only progress when we have the magic ingredient in the form of hardware neurons (as this guy did in an earlier article).
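      (To be concrete about the optics comparison, the software equivalent of that 2D Fourier transform is only a couple of lines of NumPy, but they cost real computation rather than a single pass of light – a throwaway illustration:)

```python
import numpy as np

# What a lens does "for free" in one pass of light, software has to grind
# through: a 2D FFT of an N x N image costs on the order of N^2 log N operations.
image = np.random.rand(1024, 1024)
spectrum = np.fft.fft2(image)   # the computed equivalent of the optical transform
print(spectrum.shape)
```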

      And like you say, just having the components means nothing whatsoever. It’s not the properties of the components that matters, but the properties of the right CONFIGURATION of those components. This seems to be a deeply ingrained problem inside the minds of physicists. Post-Newtonian reductionism and the mathematics that derived from it genuinely seem to lead them to believe that the properties of the whole must be immanent in the properties of one or more of the parts, which is just sheer nonsense. (I’m supposed to be writing a book on this very subject, because I think it has consequences that extend well beyond science, but I’m kinda stuck getting the words out). The trouble is, the general public (and DARPA, it seems) really do believe such things. In the case of AI researchers I put this fallacy down to desperation; in the case of physicists I put it down to arrogance! The general public knows no better.

      So yes, I totally agree. Thanks for the link. And thanks, too, for the prompt. I need to get back to blogging again and then maybe the words will start to flow once more!

  21. Carlos Acosta says:

    Hi Steve,

    I also encountered problems when I tried to enter comments on your last blog. I can’t speak of other readers, but I, for one, very much look forward to hearing more specifics about your artificial brain design and about your new book as well.

    All the best,

    Carlos

    • stevegrand says:

      Hi Carlos! OMG! When is/was Tucson??? Oh no, it’s over already! My life has been so complicated and rushed lately. I just couldn’t fit consciousness conferences in. I did want to meet up with you but it’ll have to happen another time. I’m so sorry. How did your poster go down?

      But as for the brain design, I promise I’ll start blogging about that right away. If I don’t start now I never will. It’s time I got back to my work again.

      I hope you had a good time in Tucson and feel invigorated.
