Ethical rules for robots?

Prof. Noel Sharkey was interviewed in the Independent yesterday, raising concerns about the dangers of children and the elderly being left in the care of robots for too long, thus lacking human contact. The article brings up Asimov’s laws, and although Noel doesn’t advocate programming such rules into the robots themselves, he does think we need an official set of guidelines for the use of robot carers.

I remember having a conversation with Noel about such things in my car once. I don’t think I agree.

It seems like needless scaremongering to me. Yes, one day robots might be useful enough to look after the kids for a while but we really haven’t reached that point yet, despite the hype. Anyone who uses a present-day robot in such a way is self-evidently irresponsible and culpable. Some people think that a packet of candy is capable of looking after their children while they go off partying. Some people think a nursing home can look after granny so that they needn’t be bothered with her any more. But we don’t blame the candy or the nursing home – we rightly blame the culprits. 

We can’t easily legislate for that, any more than we can set guidelines for the safe use of candy. In the case of guns I can see the logic – a gun is exceptionally dangerous, it can be misused very easily and at little risk to its owner, and it is specifically designed for killing. But stupid people will always do stupid things, and we can’t set guidelines for every single item that they might misuse in the process.

Asimov’s laws are a case in point. He had to extend them because he realized they were incomplete and contained dangerous loopholes. It wouldn’t be difficult to find loopholes even in his extended set. The harder you try to legislate, the more loopholes you create. I’m all for reasonable curbs on irresponsibility, but specifying the rules down to the letter just makes people feel that sticking to the letter of the law permits them to flout its spirit (see this astounding article for an example).

I accept that robot carers are DESIGNED to supplement human care, and hence could be misused without much thought. But I think the responsibility for this should be exercised individually, just as it is for other household tools. Any new technology requires acts of responsibility from its creators, its marketers and its consumers. But to call for guidelines now is surely going to cause more trouble than it prevents. Society is full of negative feedback loops – whenever something new comes along, the old adapts to it. Trying to legislate for the new on the assumption that nothing else will change can be a dangerous mistake.

Let’s at least wait until robots are REALLY being used and abused in this way, rather than adding to the already quite hysterical fear of intelligent technology. Otherwise we’ll never gain the benefits because people will be scared off.

More importantly, let us, as researchers, exercise our own intelligence and responsibility, rather than expecting a bunch of lawyers to do it for us. As a case in point, I remember seeing a robot designed to help someone with paraplegia feed himself. But the robot was facing the user, making him into a “patient” and the robot into his “carer”. Being fed by a robot has surely got to be demeaning. All the idiots had to do was turn the thing around so that the robot’s arm was positioned where the human’s arm would have been if he were able-bodied, and he would have been “using a tool to feed himself” instead – a far less insulting prospect. Researchers should think about these things, not rely on legislation to do it for them.

Military robots are quite another matter. When one army can attack another without any risk to its own men or those of its enemy (since we can presume that both sides will eventually have the same robots), then the only human casualties will be innocent bystanders. Somebody needs to be working hard on figuring out the consequences of all this for world peace. But a namby-pamby set of guidelines isn’t going to cut it in this arena.

Incidentally, my position on Asimov’s Laws is that any robot capable of keeping to them must, of necessity, be capable of breaking them. Therefore they are useless. Discuss.



6 Responses to Ethical rules for robots?

  1. Noel Sharkey says:

    Steve,

I fear that you are slipping out of date with your reading. My Science article is so short that it is more like a flier for the ethical problems. None of these issues is scaremongering about robots. Robots, to me, are just simple machines that are used as tools by humans, and so it is the humans that I am worried about.

The ethical problems that I am dealing with are all concerned with the protection of the vulnerable – the innocents who have no say in how the instruments are applied. I am surprised that you do not share my concerns about their protection. It is up to us – the adults and the robotics makers – to ensure that our products are not used to abuse this population.

As far as care is concerned, I have spent a lot of research time going through legislation and nanny codes of ethics. If you spend a little while on the internet you will find a number of parents talking about how much more work they can get done now that they have a robot child minder – “I don’t have to listen to the lonely cries of my child at night any more while I am working.” The only law in the UK that applies is “negligence”. This has not been tested in court yet, and I worry that robots like PaPeRo have so much safety built in that, when this is tested in court against tough corporate lawyers, it will not be considered negligent to leave children with a robot for long hours.

The use of robots in both elder care and childcare is far from being hype. Companies in both S. Korea and Japan are a long way down that road. PaPeRo has been tested on 27,000 children in Japan by NEC, and there has been extensive testing in Californian preschools.

As far as military robotics is concerned, that is well underway, with billions of dollars being spent by all the US forces on autonomous systems. Currently there is always a person in the loop, but not for much longer, and there is evidence of proliferation everywhere.

I have been working extensively with the military on both sides of the Atlantic and with international lawyers to find a way to stop this from going the way it is. But it is reminiscent of King Canute. I certainly never suggested having namby-pamby guidelines. We are actually looking at revising the Geneva Conventions with additional protocols. It seems inevitable that there will be massive use of robots in future wars because of the military advantage they create. Nonetheless, we have a moral duty to attempt to get international guidelines in place rather than sitting smugly sniping from the wings.

  2. Noel Sharkey says:

P.S. I just realised that you only read the report in the Independent, which is after all a newspaper. The press coverage was based on an article that I wrote for the journal Science: The Ethical Frontiers of Robotics. I would recommend reading this, as it has the proper references and details of what I am talking about.

    http://www.sciencemag.org/cgi/content/full/322/5909/1800

  3. stevegrand says:

    Hi Noel,

When it comes to military robotics I was pretty much agreeing with you. If you can really alter the Geneva Convention then good for you. I wasn’t expecting such a significant change and thought you had something less potent in mind. Of course the Convention didn’t prevent nuclear proliferation and not everyone in the world sticks to it (thus upping the ante) but it’s certainly a step in the right direction. Warfare has been changing ever since the first automated guidance systems were introduced in the 1940s, and that trend is gathering pace. I’m absolutely not smug and I’m not sniping from the wings. I try to contribute to the debate where I can (bear in mind I’m only an amateur scientist), but I don’t believe I’m competent to engage in military ethics and thus should leave it to people who understand the complex political and social dynamics of warfare. If anybody needs to examine their conscience I’d hope it was the significant proportion of robotics researchers who receive military funding and sometimes maybe turn a blind eye to the potential consequences of their work.

    As for robot carers, of COURSE I’m concerned for the safety of the vulnerable. Making people more aware of their moral responsibilities towards other thinking beings is a primary reason I study AI and artificial life.

I can’t find a source for your “lonely cries” quote (and can’t afford a subscription to Science) but anyone who genuinely believes such a callous thing and acts on it (notice their conscious awareness that the child’s cries are lonely ones) is clearly guilty of negligence and cruelty. Robot or no robot.

    For my part I can’t imagine ANY circumstances where leaving children in the care of a machine wouldn’t constitute negligence. I can’t really imagine why anyone would do research on the topic (which is a different field from certain kinds of elderly care) unless they don’t mean quite what they appear to mean. The Japanese have a very different mindset on such matters and it’s hard to tell how much it is justifiable to impose our Western cultural mores onto theirs, but PaPeRo is still a very long way from being a believable replacement for human carers. And it’s research, which is how we find out what works and what doesn’t.

    My overall point was that legislation is not always the best way to protect people, just as food aid is not always the best way to deal with famine. Well-meaning actions can have complex and unintentionally harmful knock-on effects. To my mind Negligence is precisely the right point in law for this sort of thing. We already have a solid legal system for deciding whether someone has acted negligently, on a case-by-case basis.

In my view, legislation should be used only as a last resort, when it has been shown that a significant segment of the general public is incapable of exercising personal judgment. Otherwise, by taking the responsibility out of their hands you actually decrease their willingness and ability to think for themselves. As an analogy, many years ago some children were washed off the cliffs at Land’s End and there was a huge outcry to protect the public by putting up railings. But that just leads people to assume that anything not protected by railings must be safe, which is presumably why a bunch of teachers thought that 10-foot waves and jagged cliffs were reasonable places for children to play in the first place.

    Assuming that you, too, were talking about negligence on the part of the parent, not the manufacturer, then corporate lawyers will be EAGER to show that the responsibility still rests with the parent and they’ve been negligent. Only the parent’s defence lawyer would try to argue that the blame lies with the robot or its creators, and I can’t imagine anyone would sell such a robot without a ream of disclaimers along the lines of “this is an inflatable toy not a lifesaver”. If they didn’t, and the product was found by a court to be marketed in such a way that a reasonable person would feel justified in relying on it, then it would be the manufacturer who was negligent. Either way, negligence seems like the right basis for a judgment.

> It is up to us – the adults and the robotics makers – to ensure that our products are not used to abuse this population.

    I totally agree. But this is not necessarily an argument for legislation or blanket guidelines. I think consumers and researchers should be encouraged to exercise personal judgment and responsibility first and foremost, and attempting to take this out of their hands by setting global guidelines or laws is to risk making the situation worse. In my post I cited an article in which people were ridiculously anxious to obey the letter of a religious law whilst flagrantly defying its spirit. People do that. The more you try to tie them down, the more they’ll feel justified in exploiting loopholes. Especially corporate lawyers…

    When it comes down to it, this is probably a question of deeper political philosophy, having little to do with robots. Some people favour central legislation and some prefer an emphasis to a greater or lesser degree on the freedoms and responsibilities of the individual. I doubt it’s simply a matter of one of us being right and the other wrong; capitalism, liberalism and socialism still stand unresolved as solutions in the wider sphere. I’m glad you’re saying what you think and acting as you feel is right. I’m trying to do the same. If we all did that there wouldn’t be any need for legislation at all.
    – Steve

  4. Noel Sharkey says:

I don’t think that we are much in disagreement, Steve, in all this.

  5. Noel Sharkey says:

    Hi Steve – I am back with more.

I haven’t seen you for a few years, and you can be forgiven for reading a newspaper article and thinking that I was just running off at the mouth. But not so. I started reading some of the US military plans on robotics about two years ago, and thousands of tedious pages later I was on the path of warning the public and policy makers. I could really do with the support of people like yourself, and have been writing a number of articles on it that are short, to the point and freely available (see below).

I will be getting a link for the Science paper that allows people a free download and will send you that when it comes. In the meantime here are some that are free:

Sharkey, N. (2008) Cassandra or the False Prophet of Doom: AI Robots and War, IEEE Intelligent Systems.
    http://www.computer.org/portal/cms_docs_intelligent/intelligent/homepage/2008/X4-08/x4his.pdf

    Sharkey, N. (2008) Grounds for Discrimination: Autonomous robot weapons, RUSI Defence Systems Journal.
    http://www.rusi.org/downloads/assets/23sharkey.pdf

Sharkey, N. (2007) Automated Killers and the Computer Professional, IEEE Computer.
    http://www.computer.org/portal/site/computer/menuitem.5d61c1d591162e4b0ef1bd108bcd45f3/index.jsp?&pName=computer_level1_article&TheCat=1015&path=computer/homepage/Nov07&file=profession.xml&xsl=article.xsl&;jsessionid=JVfL4XcJnychr1RX01hvQqrxWJLXmngRhvPfXrXkG1cTLrSJNLDD!-1582182879

Sharkey, N. (2007) Robot wars are a reality, Guardian newspaper.
    http://www.guardian.co.uk/commentisfree/2007/aug/18/comment.military

You can pick up on many of the legal aspects on the Armed Unmanned Systems forum that I run with US Navy Chief Engineer John Canning for the Association for Unmanned Vehicle Systems International (AUVSI).

This has a much higher priority for me than the care issue – here is the Hello Kitty quote, which I am sure you will find as outrageous as I do:

    Like a family member 12/02/2007 – by Rachel La Mar from Claremont, CA US
    Robo Kitty is amazing! My 2 yo Max just LOVES his kitty friend. My husband and I are *very* busy lawyers. We leave by 6 am every morning and don’t get home until sometimes 10 at night (and don’t get me started on all the travel!). We’ve been through 8 nannies; the first 4 were pretty stressful. After that we got Robo Kitty, and it changed our lives. Max is so attached to her, he barely noticed when we let the last 3 nannies go. The 7th nanny refused to take a pay cut, EVEN THOUGH Robo Kitty did most of the work! Our new nanny is much better; Consuella works for only $7 an hour! Robo Kitty is like another parent at our house. She talks so kindly to my little boy. He’s even starting to speak with her accent! It’s so cute. Robo Kitty puts Max to sleep, watches TV with him, watches him in the bath, listens to him read. It’s amazing, like a best friend, or as Max says “Kitty Mommy!” Now when I’m working from home I don’t have to worry about Max asking a bunch of questions or wanting to play or having to read to him. He hardly even talks to me at all! He no longer asks to go to the park or the zoo – being a parent has NEVER been so easy! Thank you Robo Kitty!

    • stevegrand says:

> I don’t think that we are much in disagreement, Steve, in all this.

      That’s usually the case once everyone has aired their different perspectives. We both want robotics to be a positive thing for humanity, which is what counts. It’s true, I didn’t know this was a major hobby horse for you – I live in the US now, so I haven’t had much direct contact with the community for a while.

Thanks for those links. I’ll follow them up. I’m sure other people will find them interesting too. Best of luck with your military efforts – I agree that this is a destabilizing moment in history. I hope it will ultimately lead to some good. After all, the idea of a war ultimately fought by robot against robot, with no humans in the loop, just points out how absurdly anachronistic the whole concept of warfare really is. Maybe a smart UAV would have the good sense to turn on its commanders. But in the meantime some really bad things could happen. If I can do anything useful, let me know.

      As for the Robo Kitty quote: you only had to mention they were both lawyers and I’d have understood! 😉 Presumably they in turn were brought up by the TV set, which explains their failed cognitive development…

      – Steve
