Musk, Hawking, and Wozniak Sign Letter To Ban Autonomous Weapons (Killer Robots)

“There are many ways AI can make battlefields safer for humans without creating new tools for killing people.”

(July 27, 2015) – Buenos Aires, Argentina

 

(Flickr: Elon Musk – The Summit 2013)

Elon Musk, Stephen Hawking, and Steve Wozniak have signed a letter calling on world governments to ban the development of “offensive autonomous weapons” (i.e. killer robots) in order to prevent a military AI arms race. The letter will be presented by the Future of Life Institute at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires tomorrow.

According to the Bulletin of the Atomic Scientists (US Killer Robot Policy: Full Speed Ahead), the United States already has an established policy for autonomous weapons (i.e. killer robots).  “In November 2012, United States Deputy Defense Secretary Ashton Carter signed directive 3000.09, establishing policy for the ‘design, development, acquisition, testing, fielding, and … application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.’  Without fanfare, the world had its first openly declared national policy for killer robots.”

The United Nations First Committee (Disarmament and International Security) met for the first time to discuss related issues on October 14, 2014. The meeting minutes (Development of New Warfare Technologies Like ‘Killer Robots’ Raises Concerns in First Committee About Machines Taking ‘Life-And-Death’ Decisions) shed light on concerns raised by member states.

An Open Letter from AI & Robotics Researchers

The full text of the open letter, published by the Future of Life Institute, is below.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

(Wikimedia: Gameplay in the online multiplayer mode of Modern Warfare 2)

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

 

Photo: Wikimedia

