A who’s who of CEOs, engineers and scientists from the technology industry have signed a global pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”.
Co-organised by Toby Walsh, Scientia Professor of Artificial Intelligence at the University of NSW, the pledge was signed by 150 companies and more than 2,400 individuals from 90 countries working in artificial intelligence (AI) and robotics.
The pledge was released in Stockholm at the 2018 International Joint Conference on Artificial Intelligence (IJCAI), the world’s leading AI research meeting with over 5,000 attendees.
Organisational signatories include Google DeepMind, University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI, and the Swedish AI Society.
Individual signatories include Jeff Dean, head of research at Google.ai; AI pioneers Stuart Russell, Yoshua Bengio, Anca Dragan and Toby Walsh; SpaceX and Tesla CEO Elon Musk; and British Labour MP Alex Sobel.
The pledge, led by the Future of Life Institute, challenges governments, academia and industry to follow their lead, saying: “We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organisations, as well as leaders, policymakers, and other individuals, join us in this pledge.”
Machines do not have ethics
Speaking in Stockholm, Toby Walsh said: “We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organisations to pledge to ensure that war does not become more terrible in this way.”
Max Tegmark, a physics professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, called on others to join the pledge.
“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” Tegmark said. “AI has huge potential to help the world – if we stigmatise and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way.”
Lethal autonomous weapons systems (LAWS) – also dubbed ‘killer robots’ – are weapons that can identify, target, and kill a person without a human ‘in the loop’. That is, no person makes the final decision to authorise lethal force: whether or not someone dies is left entirely to the autonomous weapons system. The definition does not include today’s drones, which are under human control, nor autonomous systems that merely defend against other weapons.
Increasing military role
The pledge begins with the statement: “Artificial intelligence is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”
Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, a strong opponent of lethal autonomous weapons, echoed the call: “Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful.
“Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force,” he added.
The next UN meeting on LAWS will be held in August.
Signatories hope their pledge will encourage lawmakers to commit to an international agreement between countries.