There's a lot of serious and legitimate concern lately about the development of autonomous weapons.
The Basic Idea
Humanity nearly has the technology to mass-produce machines that can target and choose to kill humans, all without human intervention. A factory worker turns on a machine, and it goes out into the world, finds terrorists or protesters, and kills them. The machine does all of this without any human controller making that final decision. In case you haven't been following the latest advances in robotics and machine learning: we are nearly there.
I agree with the Campaign to Stop Killer Robots that we need a ban on killer robots. But they also advocate banning research. Instead, we should ban only their manufacturing and activation, because banning research on killer robots would be not only ineffective but unwise.
This is Scarier than Nuclear Weapons
Nations have some control over nuclear weapons. The recent deal with Iran, for example, greatly restricts its ability to make nuclear weapons in exchange for relief for its economy. That's because obtaining nuclear materials, refining them, and building facilities to produce nuclear weapons requires a major industrial effort. Furthermore, nations that undertake it have trouble hiding it.
Automated weapons are especially frightening because none of these restrictions apply. In the video above, a mere hobbyist strapped a handgun to a quadcopter. All it's missing is a camera and the right software to be an automated weapon.
Why Ban Any Research?
We must ban some research - like trying to clone a half-human, half-animal. A pig-man would be unethical to create, even if it only ever existed in a lab. Does a pig-man have human rights? Would we be forcing it to suffer a lifetime of pain from its hybrid physiology? Answering these questions reveals that both the process and the product of such experiments are unethical.
However, the process and results of researching automated weapons in a lab are not so clearly unethical. No one is harmed by the research directly, and it could have many benevolent uses - non-lethal law enforcement, wild animal control, domestic robots, or even robots that disable other automated weapons. Similarly, there's a reason organizations like the Centers for Disease Control hold on to the worst strains of Ebola and anthrax: there is, or may be, some benevolent use for that kind of research.
We should never criminalize knowledge or the people seeking it. In the past, the subjects that were taboo were often the ones we most desperately needed to research: sex, astronomy, and human biology were all forbidden at one time.
The only reasonable case for criminalizing science comes from science fiction, with world-ending inventions like Ice-nine. With Ice-nine, a crazy person with no resources could end the world. But even with technology as frightening as automated killing machines, we are not nearly there yet.
Banning Research Won't Work Anyway
We can't ban general research into robotics, computer vision, robot tool manipulation, and so on, because these fields involve far more than killer robots. But breakthroughs across all of them combined will eventually give us the ability to make killer robots whether we research them specifically or not. The designers of handguns and quadcopters probably never had this combination in mind, but once both were invented, a hobbyist easily combined them. So if there were some magical ban on research, any interested nation would just direct its military to research each field individually.
Let's Ban The Manufacturing and Activation of Automated Weapons
This all comes down to human responsibility. Let's consider an example where a machine is designed, built, and activated, and then decides by itself to go to some location and kill. Who is responsible for the murder?
1. The researcher who designs the robot hand.
2. The factory worker who builds generic robot parts like servo motors.
3. The factory worker who can clearly see that the final product is an automated weapon.
4. The software engineer who copies the killer-robot software onto the robot.
5. The factory manager who delivers the machine to customers.
6. The owner or politician who activates the machine.
What do you think? I would say people 1 and 2 are entirely excused from the murder. As we progress through 3, 4, 5, and 6, the people become more and more responsible. So let's not ban research or the generic construction of robots (1 and 2). Instead, let's ban the production and activation of automated weapons.
If you're interested in reading more: Elon Musk (SpaceX, Tesla), Stephen Hawking, and thousands of AI researchers recently signed this open letter on the subject.