December 26, 2008
In another post, Noel Sharkey and I have been debating whether robots designed, or presumed, to care for children can or should be controlled. I stand by my mistrust of legislation and top-down guidance on topics like this, preferring education and individual accountability. But when it comes to military robots, Noel and I are on pretty much the same wavelength, so I wanted to make a separate post to alert people to the topic and the issues he raises.
When it comes to the use of stupid but autonomous robots for military applications, I agree with Noel in large part. So I suggest you read his IEEE Intelligent Systems magazine article here.
I don’t pretend to know what we can do about this. It’s one of those “if we don’t do it, they will” situations, and those are very dangerous feedback loops that cause people to do things they know to be crazy.
Ironically, if we ever get genuinely intelligent robots, with intelligence on a par with humans, then I’m convinced they’ll qualify as moral beings themselves and the problems will get easier. The currently looming quandaries apply only to stupid automatic systems. We have had stupid automatic weapons for a long time – mines are an obvious example. We don’t even expect these to discriminate between military and innocent targets, but once their behavior becomes more conditional, and they are expected to make decisions and trusted to make them correctly, many new dangers arise.
I do hope people can keep these ideas separate from the general fear of robots, however, since that fear is misguided. We’re not talking about robotic warriors of the Hollywood variety here. In fact, it’s their complete stupidity that’s the problem.