Morality and the prefrontal cortex

Phew! I should have had the good sense to write up yesterday’s post as an essay for Machines Like Us, because there it could have stimulated unmoderated debate, whereas on my own blog I’m always the one holding the talking stick. I’ll see if Norm can add this post to yesterday’s, which he already put on his site, and then if you have other comments I’d encourage you to post them there.

As it is, thank you all for the excellent thoughts. I’ve decided to highlight some of them in this separate post, because not everyone reads comments and I’d hate for my own first thoughts to seem like the end of the matter. I don’t know the answers. That’s why I put forward my own working hypothesis, to be debated.

Anyway, here are just a few of the things people have said that I’d like to come back on. If you’re not in a hurry, you should read their comments on yesterday’s post, partly because they’re interesting and partly to be sure I haven’t taken their words out of context.

Vegard: I think that the problem is not that moral philosophy not tied to religion is non-existent; I think the problem is that most people don’t _know_ that it exists, where to find it, that it is not usually taught in schools, etc.

Yes, I suppose it may be true that most moral philosophy over the past three hundred years (and of course much that occurred among the ancient Greeks) was done in a secular context. But it remains true that the biggest objection to atheism that I’ve heard from the Christian man-in-the-street is the implicit or explicit assumption that atheists have no morals, or that our morals are vague and suspect, whereas the morals they’ve been given by their religion are absolute and self-evidently correct. I think more people would be willing to let go of a belief in the supernatural if they weren’t so scared of being thought amoral, or if they had a clearer idea of a morality that isn’t based on ancient teachings and threats of damnation. Non-believers have gained a voice in the past few years but I don’t think we’re yet providing replacements for all of the functions of supernatural religions.

David: Reading this blog entry was almost like reading the entry of a Buddhist!

This is an interesting point, because Buddhism is to some extent a bottom-up, self-organising philosophy/religion that got going long before the Internet. Modern secularism seems to have similar emergent qualities and I’d expect its consequences in terms of morals, ethics and the like to be very much in the Open Source, collective mould. Speaking as one of many who think that supernatural explanations are counterproductive and often the cause of serious distress, I think we need some solid foundations – some good alternatives to the edifices we wish to replace. But at the same time we have to watch out that we don’t replace one top-down dogma with another. It’s a difficult path to tread – I was worried that my own suggestions for a basic moral principle would sound like preaching, which was not my intention. But the Internet, like Buddhism, gives us some good models for how to come to a consensus without leaders; an organisation with no organisers.

Terren: I do have a problem with one aspect of your point of view. You wonder whether the relative guilt of the drunk driver who kills, versus the one who doesn’t, isn’t in fact equal. You make a similarly counter-intuitive comparison involving the abused murderer and the driver who fails to use his turn signal. In both of these cases you dismiss the outcome of the act, focusing only on the intention of the act. … I think that would be okay if we could know what our true intent was in each moment. But most of the time we act and then justify our actions later in terms of a model of ourselves, grounded in some context, and that model may or may not fit with reality. Addicts are good examples where the model doesn’t fit.

This is a tough one because it bears on free will and responsibility for our own actions. My own position on free will is that there is no such thing in an absolute sense, but that we must believe there is and act accordingly, simply on the basis that any society that doesn’t will soon decay and dissolve. Of course I shouldn’t say we “must”, because I’ve just said we don’t have any choice! Either our society will find a happy medium between believing people to be culpable and forgiving them for doing what anyone would inevitably do under the exact same circumstances, or it won’t. If it does, we’ll prosper and if it doesn’t we’ll die out.

I think there are two distinct levels of description: at the physical level there is no such thing as free will – we’re all just atoms bumping into each other according to immutable laws. But at the level of description in which we consider ourselves, personally, as free agents in charge of our destinies, the concept of free choice does make sense and as a consequence we have to accept blame and correction when we “do wrong”. Unfortunately we use the same language for both these semantic levels and often confuse them. Plus we simply don’t HAVE any language for describing non-teleological things. It’s tricky and more than I can make sense of in a few paragraphs.

I’ve been tangling with some of the often quite distressing issues you raise for some time now – when should you hold someone responsible for their actions? I don’t have good answers and I’m not sure there are any. I think in the end it comes down to drawing a personal line (but as you say, recognising that it’s a very fuzzy one). If someone hurts you because they’re suffering from a temporary stress-induced psychosis, should you blame them? What if it’s due to lifelong schizophrenia and they have no clue they’re behaving oddly? What if it’s a serious personality disorder and therefore not at all the way they would wish to be, yet unfortunately is a part of their whole makeup? What if it’s nothing that a doctor would consider pathological at all, it’s just that they grew up as an unpleasant person? What if it’s because they’re under the long-term influence of drugs that they took to deal with some undeserved pain in their lives? What if it’s just the beer talking? At the level of physics we can say that ALL of these people are just acting as they inevitably would – as you yourself would if you’d been born with their genes, had their upbringing and found yourself in their circumstances. But at the level of description where we use the term “free will” we have to come to a decision. I really don’t know the answer, and believe me I’ve had to agonise over it! All I know is that societies which find the right balance will prosper while those that don’t won’t. I hope we’re in one of those that do. Debate is one way to encourage the emergence of the right balance, I think; dogma is less likely to succeed.

But I still think the focus should be on intention (and awareness). Focusing on the outcome seems to me to be arbitrary. All of us do minor “bad” things all the time, as you point out, and mostly we get away with it. But some percentage of us will be unlucky. I can’t see any logic that justifies punishing the unlucky ones and not the ones who got away with it. Clearly it would be stupid to punish everyone who fails to indicate before changing lanes with a stiff prison sentence, as if they’d killed someone. But the person who did kill someone did nothing that was more wrong – he just failed to indicate too. So it seems to me that the mistake lies in the harsh punishment, not the soft one. I offer this as a warning for us not to judge others too harshly. When it comes down to it, none of us can really help what we do; we just have to believe otherwise sometimes in order to function.

Overall, the best I can come up with (and I’m not happy with it because it implies some vestige of “real” free will) is that we should judge people on the basis of how EASILY they could have avoided calamity. By the time a government declares war, they almost always find that they had little choice. But a wise and diligent government would have seen it coming and tried not to go down that road. That’s their job and we should blame them for not acting at that early point (even whilst recognising that they had no choice even then!). I think it makes some kind of tortured sense to hold people to account for how easily (whatever that means) they could have altered the course of history. That’s why I blame the person who failed to indicate (an easy thing to correct, with an easily foreseeable possibility of severe consequences) more than the murderer (who we presume could have done little to stop the cancerous progress of this relationship earlier, and eventually found herself in a position where she felt she had little choice but to do something terrible). But it’s a pragmatic solution and I welcome new insights.

Vegard: Charles Fried argued that it is wrong to kill and lie because we suppress another person’s ability to make their own choices and live their own lives. He also writes that “our first moral duty is to do right and avoid wrong”

I haven’t looked it up yet, but isn’t that begging the question? Of course we have a moral duty to do right – it’s the definition of “moral duty”. But what does “right” mean? That which is moral? It seems a bit circular. I’m suggesting that the right thing to do is the thing that makes people happy or avoids causing them distress. It’s then our moral duty to do that. But I’m sure Fried’s argument is better than it seems, and I should look it up. As for it being wrong to “suppress another person’s ability…”, yes, that’s the argument for freedom. I always assumed that happiness implies freedom – denying someone’s right to make choices makes them unhappy. So maybe optimising happiness is enough of a guideline. But maybe freedom needs a specific emphasis? I was just trying to get to a minimalist statement of best intent, and I’d hate to have to start adding additional clauses. It was the fact that we don’t KNOW how to make people happy that I liked about my suggestion: it forces us to think about each case on an individual basis, instead of blindly following rules. However, your next point (below) is a biggie!

Vegard: The theoretical problem with utilitarianism is that it allows for doing bad things because they have the best outcome overall — the McCloskey example [1], simplified: A sheriff has the possibility of framing an innocent man whom the public believes to be guilty, in order to prevent a brewing mass riot (which would lead to many more victims than just the one innocent man).

I have to admit that this was the thought going through my head that made me write the post in the first place. I won’t go into details but I’ve had painful personal experience of facing such a quandary. I tried so hard to do what was right – what would make people happiest and minimise the distress, but it was a zero-sum problem and I had to choose, deliberately and knowingly, to hurt someone whichever option I took. And on reflection, three years on, I’m not at all sure that I did do the right thing. I may have caused people I love more distress than I would have if I’d made the other choice. But what can we do but look as far into the future as possible and try our best?

Anyway, back to your specific example: It seems like framing the innocent man is self-evidently wrong, because that’s what the example is set up to suggest. But is it? Perhaps it is the right thing to do? I just don’t know. I think in practice the sheriff wouldn’t know either – he wouldn’t be able to judge in advance whether doing a bad thing to this person would actually result in the best outcome. There are too many unknowns. And so on that basis it sounds like a risky (and hence morally shaky) idea to tell lies and ruin one man in the HOPE of saving many.

Taken on a longer timescale, lying and perverting justice like that are almost unquestionably bad things. If everyone did it then society would quickly become lawless, anarchic and the total sum of happiness would decline hugely. The sheriff would be setting a bad precedent and taking a serious risk in assuming that his action stands alone. He should consider the longer-term consequences and perhaps decide that honesty is still the best policy, even though it will result in more distress over the medium term.

I don’t have answers, but that’s kind of my point. Choosing to do something bad for the greater good is not in itself abhorrent – we all tell white lies sometimes, with the best intentions and often the best outcomes. The default, of course, is to do what you’re expected to do – stick to the job description, base it on loyalty, or palm the problem off on a superior. But these are just cop-outs, ONCE you realise that the choice is there. To pretend you hadn’t thought of it is mere cowardice. Once the idea is in your head you’re responsible for coming to a decision. And I think it is better if you think hard about that decision and stand up to be counted, instead of relying on dogmatic formulae to do your thinking for you. And what better basis than trying to make everyone happier? Isn’t that “doing what’s right”?

What would a religious person do? They’d pray. They’d wait for a little voice in their head to tell them what to do. I would suggest that this little voice is their subconscious, and I have a great deal of respect for the subconscious – it is so much better at juggling large numbers of uncertain variables than the formal logic of the prefrontal lobes. So maybe the little voice would have the best solution. But the problem is that they believe that voice belongs to God. They therefore absolve themselves of responsibility and trust “God’s word” implicitly. They don’t question it to make sure that they aren’t just acting emotionally or irrationally, because it’s “wrong” to question God’s will. Even if the solution doesn’t seem quite right, they’ll account for it by saying that “God moves in mysterious ways”; “It’s all part of his plan”. Now, as much as I have respect for the power of subconscious thought, I think it still needs to be checked against reason. And I think we need to recognise our own responsibility. When it comes down to it, the decision is ours, and there are no “right answers” that work for all circumstances, so we have to adopt a compromise; weigh everything up; make a call and hope we get it right. If we got it wrong we should learn from that and try to avoid making the same mistake in future. I think that minimising the number of formal “commandments” encourages that, and certainly it pays not to leave unquestioned a voice that can say anything it wants to.

Tim: As [our 2yr-old daughter] has become able to understand more of what we’re saying to her, the key to it all seems to be empathy – if we can get her to empathise with the child whose toy she just took then our job is done. … Of course what we’re really doing is harnessing the work of evolution, since it hard-wired us for empathy when we became social animals. Who are we to argue with the conclusions of evolution?

That’s a nice thought, which I think I’ll finish on, because empathy is the crux of the “do unto others as you would have them do unto you” philosophy credited to Moses and then Jesus, which in turn is consistent with the idea I highlighted, of trying your best to make people happy. All of these in turn are expressions of the rather more Hippie concept, “All you need is love”. And that ties several quite distinct aspects of our brains together – the ability to empathise and place yourself in someone else’s shoes, the ability to reason and see possible long-term implications of your actions, and our emotions, from which we get compassion, sorrow, guilt and all the personal rewards of making someone else happy.

So I think we all know what we have to do (whether religious or atheist); we just need to stand up and say it – not in the sense of “thou shalt”, but as in “I will”. We need to formulate it in a somewhat less “amygdaloid” and ineffectual way than the Hippies, but not nearly so “prefrontally” codified as Bentham’s attempt at an ethical calculus. We need to devote ALL our brains to trying to make each other happy, and we shouldn’t need a god to tell us to do it.

16 Responses to Morality and the prefrontal cortex

  1. Nicholas Lee says:

    Dear Steve,
    It might be an interesting exercise to have a simulation (computer game?) in which the artificial creatures have as part of their specifying genome a “social interaction strategy”. (This character trait could be part nature and part nurture if parents can indoctrinate their offspring).
    One random “social interaction strategy” might be “do unto others as they do to you, but take sanctions against those who don’t do this”. Other random strategies might be “give charity to anyone poorer than me” or “might is right, forcibly take resources from anyone weaker than me”.
    A population of these simulated creatures could then evolve and the social dynamics allowed to play out over many generations. Successful strategies would cause likeminded groups of creatures to gain the resources to support more offspring that would be indoctrinated in their group’s moral code.
    It would be very interesting to see whether you got an optimal ethical strategy, or whether you would just end up with endlessly warring religious factions that were devoted to their particular moral code but hated all out-groups.
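
    Just to make the idea concrete, here is a minimal sketch in Python of the kind of simulation loop I have in mind. All the strategy names, payoffs and parameters below are invented purely for illustration:

    import random
    from collections import Counter

    STRATEGIES = ["reciprocate", "charity", "might_is_right"]

    class Creature:
        def __init__(self, strategy):
            self.strategy = strategy   # inherited "social interaction strategy"
            self.resources = 10.0      # proxy for ability to support offspring

    def interact(a, b):
        # One encounter, resolved according to a's moral code.
        if a.strategy == "might_is_right" and a.resources > b.resources:
            taken = min(2.0, b.resources)   # forcibly take from the weaker
            a.resources += taken
            b.resources -= taken
        elif a.strategy == "charity" and a.resources > b.resources:
            a.resources -= 1.0              # give to anyone poorer than me
            b.resources += 1.0
        elif a.strategy == "reciprocate":
            a.resources += 0.5              # mutual trade benefits both a little
            b.resources += 0.5

    def next_generation(population, mutation_rate=0.05):
        for _ in range(len(population)):
            interact(*random.sample(population, 2))
        # Wealthier creatures leave more offspring, who inherit (with
        # occasional mutation, i.e. imperfect indoctrination) the moral code.
        weights = [max(c.resources, 0.01) for c in population]
        parents = random.choices(population, weights=weights, k=len(population))
        return [Creature(random.choice(STRATEGIES)
                         if random.random() < mutation_rate else p.strategy)
                for p in parents]

    population = [Creature(random.choice(STRATEGIES)) for _ in range(200)]
    for _ in range(500):
        population = next_generation(population)
    print(Counter(c.strategy for c in population))   # which moral code won?

    Whether the cooperators or the plunderers end up dominating presumably depends heavily on the payoff numbers and the mutation rate – which is rather the point of running it.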

    Kind Regards,
    Nick Lee

    PS: I predict a great schism of the unified reformed church of the latter-day Norns. 🙂

    • James Brooks says:

      A very, very simple version of what you said is found in the research on optimal play in the prisoner’s dilemma.

      Two players each choose to be greedy or not. If one is greedy, it gets 10 and the other gets 0. If both are greedy, they both get 0. If both are not, they each get 4. This repeats for 1000 games so you can learn how the other player works; the winner is the one with the highest total.

      The optimal strategy was often found to be tit-for-tat: you do unto the other as they have done to you.
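
      By the way, it only takes a few lines to try this out. Here is a quick sketch in Python using the payoffs above – the strategy implementations are just my own illustrations, not the actual tournament entries:

      # Payoffs as described: (my move, their move) -> my score.
      # True = cooperate (not greedy), False = greedy.
      PAYOFF = {(True, True): 4, (True, False): 0,
                (False, True): 10, (False, False): 0}

      def tit_for_tat(mine, theirs):
          return theirs[-1] if theirs else True   # start nice, then copy them

      def grim_trigger(mine, theirs):
          return all(theirs)   # cooperate until betrayed once, then never again

      def always_greedy(mine, theirs):
          return False

      def play(strat_a, strat_b, rounds=1000):
          hist_a, hist_b = [], []
          score_a = score_b = 0
          for _ in range(rounds):
              a = strat_a(hist_a, hist_b)
              b = strat_b(hist_b, hist_a)
              score_a += PAYOFF[(a, b)]
              score_b += PAYOFF[(b, a)]
              hist_a.append(a)
              hist_b.append(b)
          return score_a, score_b

      print(play(tit_for_tat, always_greedy))   # (0, 10): one betrayal, then deadlock
      print(play(tit_for_tat, grim_trigger))    # (4000, 4000): cooperation pays

      Tit-for-tat is never the first to defect, retaliates immediately and then forgives, which is roughly why it did so well in Axelrod’s tournaments.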

      • Nicholas Lee says:

        James,
        Yes, now that you mention it I did study the prisoner’s dilemma at university back in 1989. Each student wrote (in Modula-2!) code to implement a different strategy and then they were all run on a server which evaluated them against each other over a large number of games.
        I won the competition. My winning strategy was to trust the other person until they betrayed me once, and then I never trusted them again.

        I think that the same principle of competing strategies on a server could be scaled up to simulate some of the intricacies of the different paradigms found in human society.

        Nick Lee

  2. stevegrand says:

    Hi Nick,

    Yes, that’s a great idea! You’d have thought someone had already done it, but although I’ve come across vaguely similar game-theoretic agent-based sims, I haven’t seen anyone test ethical approaches like that. Shame I don’t have the time to code it. Any offers, anyone?

    Not sure what the fitness test would be. If maximising happiness was to be tested it might be wrong to abstract happiness into a simple survival measure or to model it purely in terms of wealth. For instance I feel happy hiking in the mountains, but a) I can do it at nobody else’s expense and it requires no transaction, and b) in the short term it’s painful and in the long term I doubt it has major survival value (not least because I’ll never have any more children). It does lower my stress levels and help my heart, but that’s not why I do it – I just want to be happy. Some people are happy smoking, and that actually reduces their reproductive success. I suppose in some ways happiness has been perverted from its original purpose.

    Dammit! You almost sucked me into writing it then!

    Oh, btw, the Norns have already given up on religion. Their god stopped answering their prayers way back in 1999 🙂

    • Mark Kotanchek says:

      Axelrod gave a keynote a few years ago (I think it was GECCO in Seattle) wherein he showed a fairly simple simulation demonstrating the benefits of an US-vs-THEM cooperation/competition strategy. Religion is a pretty good and easily developed "US" criterion.

      • stevegrand says:

        Yeah, there have definitely been various cooperation/competition models, and they must provide some insights into religion and factionalism. But I’ve not seen one that explores multiple strategies for ethical interplay, beyond the prisoner’s dilemma level: Do you try to maximise happiness for all? Do you maximise it for all who are alive today? Do you maximise it for those you feel loyalty with? Do you hold a set of absolute moral rules and then stick to them regardless? Etc.
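
        Just to make that concrete, here is the kind of thing I imagine – a toy Python sketch with entirely made-up happiness numbers, using Vegard’s sheriff example from the post:

        from dataclasses import dataclass

        @dataclass(eq=False)           # eq=False: each Person is a distinct individual
        class Person:
            tribe: str
            alive_now: bool = True

        @dataclass
        class Outcome:
            happiness_change: dict      # Person -> predicted change in happiness
            breaks_rule: bool = False   # violates an absolute commandment?

        # Each ethic scores a candidate action's predicted outcome; the agent
        # then simply picks the highest-scoring action.

        def maximise_all(me, o):        # utilitarian: everyone counts equally
            return sum(o.happiness_change.values())

        def maximise_living(me, o):     # only those alive today count
            return sum(d for p, d in o.happiness_change.items() if p.alive_now)

        def maximise_in_group(me, o):   # loyalty: only my own tribe counts
            return sum(d for p, d in o.happiness_change.items()
                       if p.tribe == me.tribe)

        def absolute_rules(me, o):      # dogma: forbidden acts are out, period
            return float("-inf") if o.breaks_rule else maximise_all(me, o)

        def choose(me, outcomes, ethic):
            return max(outcomes, key=lambda o: ethic(me, o))

        # The sheriff's dilemma, crudely encoded:
        sheriff, innocent = Person("town"), Person("town")
        mob = [Person("town") for _ in range(50)]
        frame = Outcome({innocent: -100, **{p: +5 for p in mob}}, breaks_rule=True)
        dont = Outcome({p: -10 for p in mob})
        print(choose(sheriff, [frame, dont], maximise_all) is frame)    # True
        print(choose(sheriff, [frame, dont], absolute_rules) is frame)  # False

        Trivial stuff, but even at this level the ethics genuinely disagree about what to do, and that’s exactly the space such a simulation could explore.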

  3. Ian says:

    Agh! Now *I* want to program that!

    • stevegrand says:

      Go to it then! (But I rather suspect you have some code you need to get finished, just like me)

      • Ian says:

        Hah, indeed. I’ve always got several projects going at any one time, and work (and the horrid hour and a half commute in either direction!) tends to suck up my free time. 😦

        I’ve always been interested in implementing a social simulation of some kind, though, especially of the “open world game” variety… there’s a lot of ideas to explore there.

      • stevegrand says:

        You have two hands – one to steer and one to code! 🙂

        It wouldn’t actually take long to write this by the look of it. There’s no visualization needed for a minimal system. But I’ll leave that to you! I daren’t allow myself to be distracted (he says, after unnecessarily writing long blog posts and delving into moral philosophy…)

      • Ian Morrison says:

        No visualization needed? Minimal system?

        I don’t think you know how my brain works. 😉

      • stevegrand says:

        🙂 Hmm, I think I’m getting the picture…

  4. spleeness says:

    Have you seen the hilarious youtube video “If atheists ruled the world”? I absolutely love it. 3 young men read comments taken directly from a Christian fundamentalist website. One quote: “I myself would have killed many many times if it weren’t for religion.”

    It’s sad that some people actually think they need religion in order to be moral.

    • stevegrand says:

      That’s wonderful! Thanks. I’ll spread that around.

      I rather liked: “Several million years for a monkey to turn into a man? Oh wait, that’s right – monkeys don’t live several million years…” Who can argue with logic like that?

  5. Nicholas Lee says:

    A graphical game for competing tribes of simulated alife creatures would be great.
    Teach your tribe how to behave and see if your world view is more successful at propagating than that of other players (via the internet). Each player gets to be the ‘moral code giver’ (Moses style) for their tribe of virtual followers.

    This could become the Creatures-3D game I’ve been waiting a decade for! I think that using the DirectX-10 graphics engine would be nice as the Norns’ fur would render beautifully. 🙂

    We might need an updated marketing tag-line for the game, as my original ‘Creatures’ game box proudly proclaims “Nature vs. Nurture at 60 Megahertz”. 🙂

    • stevegrand says:

      Hah! There you go again with your tempting and teasing… But no, I have bigger fish to fry. I think you’ll probably like it though. I promised to blog about it but instead of important things like computer games I got bogged down with trivia like moral philosophy. I guess I’d better get my act together on that pretty soon.

      > “Nature vs. Nurture at 60 Megahertz”

      Aw, how QUAINT!
