Chris Csikszentmihályi warns about killer robots in yet another article, this time in Good magazine: “Engineering Politics: What killer robots say about the state of social activism”:
Among the many changes in U.S. policy after 9/11 was one that went unnoticed by everyone except a few geeks: The military quietly reversed its longstanding position on the role of robots in battlefields, and now embraces the idea of autonomous killing machines. There was no outcry from the academics who study robotics—indeed, with few exceptions they lined up to help, developing new technologies for intelligent navigation, locomotion, and coordination. At my own institute, an enormous space is being outfitted to coordinate robotic flying, swimming, and marching units in preparation for some future Normandy.
Yes, I'm fascinated by the speed with which the military robot has assumed a significant role in actually fighting wars, by its potential to soon play a revolutionary role, and by the relatively small amount of public discourse on that potential revolution. And that's why I'm glad to see Chris Csikszentmihályi writing these articles.
But I don't get the weirdly literal reference to robots turning against their creators or the out-of-left-field positioning of open source as the savior that will prevent us from creating killer robots in the first place.
Posted by jjwiseman at August 19, 2007 11:11 PM

I have a feeling that, like torture and tribunals, this will be re-examined with the next administration.
Posted by: Chris B. on August 20, 2007 08:32 AM

I wanna be a robot.
Posted by: cody on August 20, 2007 10:49 AM

I don't usually comment on this blog, but I like the rabbit art. I have to say that I'm into the idea of rabbits fighting back à la "Night of the Lepus" (the cutest horror movie ever made).
Posted by: jennie on August 21, 2007 06:41 PM

There was a related article by a professor of AI and robotics in the Guardian newspaper recently which you may like to read:
http://www.guardian.co.uk/armstrade/story/0,,2151357,00.html
Cheers,
Dave
Posted by: Dave P (UK) on August 25, 2007 04:28 PM

If the robot chooses its own targets then a malfunction could lead to it choosing friends as targets. Free Software ensures public review, which would hopefully make the software more reliable. And if the code can't rely on security-through-obscurity then hopefully it will use a more robust communications protocol. This will all make robots that are less likely to suffer from bugs, viruses or hijackers.
My own concern is that weapons platform robots will find their way into domestic use as quickly and with as little consideration as UAVs have.
Posted by: Rob Myers on September 5, 2007 07:37 AM