Ethical rules for robots?
December 20, 2008
Prof. Noel Sharkey was interviewed in the Independent yesterday, raising concerns about the dangers of leaving children and the elderly in the care of robots for too long, depriving them of human contact. The article brings up Asimov’s laws, and although Noel doesn’t advocate programming such rules into the robots themselves, he does think we need an official set of guidelines for the use of robot carers.
I remember having a conversation with Noel about such things in my car once. I don’t think I agree.
It seems like needless scaremongering to me. Yes, one day robots might be useful enough to look after the kids for a while but we really haven’t reached that point yet, despite the hype. Anyone who uses a present-day robot in such a way is self-evidently irresponsible and culpable. Some people think that a packet of candy is capable of looking after their children while they go off partying. Some people think a nursing home can look after granny so that they needn’t be bothered with her any more. But we don’t blame the candy or the nursing home – we rightly blame the culprits.
We can’t easily legislate for that, any more than we can set guidelines for the safe use of candy. In the case of guns I can see the logic – a gun is exceptionally dangerous, it can be misused very easily and at little risk to its owner, and it is specifically designed for killing. But stupid people will always do stupid things, and we can’t set guidelines for every single item that they might misuse in the process.
Asimov’s laws are a case in point. He had to extend them because he realized they were incomplete and contained dangerous loopholes. It wouldn’t be difficult to find loopholes even in his extended set. The harder you try to legislate, the more loopholes you create. I’m all for reasonable curbs on irresponsibility, but specifying the rules down to the letter just makes people feel that sticking to the letter of the law permits them to flout its spirit (see this astounding article for an example).
I accept that robot carers are DESIGNED to supplement human care, and hence could be misused without much thought. But I think the responsibility for this should be exercised individually, just as it is for other household tools. Any new technology requires acts of responsibility from its creators, its marketers and its consumers. But to call for guidelines now is surely going to cause more trouble than it prevents. Society is full of negative feedback loops – whenever something new comes along, the old adapts to it. Trying to legislate for the new on the assumption that nothing else will change can be a dangerous mistake.
Let’s at least wait until robots are REALLY being used and abused in this way, rather than adding to the already quite hysterical fear of intelligent technology. Otherwise we’ll never gain the benefits because people will be scared off.
More importantly, let us, as researchers, exercise our own intelligence and responsibility, rather than expecting a bunch of lawyers to do it for us. As a case in point, I remember seeing a robot designed to help someone with paraplegia feed himself. But the robot was facing the user, making him into a “patient” and the robot into his “carer”. Being fed by a robot has surely got to be demeaning. All the idiots had to do was turn the thing around so that the robot’s arm was positioned where the human’s arm would have been if he were able-bodied, and he would have been “using a tool to feed himself” instead – a far less insulting prospect. Researchers should think about these things, not rely on legislation to do it for them.
Military robots are quite another matter. When one army can attack another without any risk to its own men or those of its enemy (since we can presume that both sides will eventually have the same robots), then the only human casualties will be innocent bystanders. Somebody needs to be working hard on figuring out the consequences of all this for world peace. But a namby-pamby set of guidelines isn’t going to cut it in this arena.
Incidentally, my position on Asimov’s Laws is that any robot capable of keeping to them must, of necessity, be capable of breaking them. Therefore they are useless. Discuss.