Although AI plays an integral role in our everyday lives, it can also be used to create weapons of mass destruction unlike anything the world has seen before.
A group of 4,500+ AI and robotics researchers and experts is calling for an immediate ban on lethal autonomous weapons. The campaign is accompanied by this video:
The video is a good summary of their arguments, organized under three headings:
What are lethal autonomous weapons?
Lethal autonomous weapons are weapons designed to kill humans without human intervention. They can locate, identify, and kill their targets without any input from a person. These weapons would react too fast for a human to maintain any meaningful control. They could be mass-produced and programmed to target innocent civilians of a specific ideology, and they could be made small enough to enable the assassination of any political leader.
Why should we ban lethal autonomous weapons?
If you want an AI that can play a game or recognize objects in images, we can build that. But nothing today gives machines the human-like judgment or ethical grounding essential to making decisions about life and death. Unlike nuclear, biological, or chemical weapons, lethal autonomous weapons can be developed cheaply with easy-to-find materials, and they can more easily be hacked or fall into the wrong hands. One programmer could do what previously required thousands of soldiers. It would be much easier for small actors, not necessarily governments, to use them in nefarious ways, with consequences for democracy and freedom of speech.
Would a ban negatively impact research?
On the contrary: failing to ban these weapons would hinder research. The technology would be stigmatized, and the good things AI could do would not happen, as the public would turn against its use in all aspects of our lives.
Conclusion
We need to ban lethal autonomous weapons immediately, before the arms race becomes irreversible. We must ensure AI is used to create a better world. Join the group of experts in their call for a ban at www.autonomousweapons.org.
Guest post by The Futures Agency content curator Petervan
Other Resources
Our recent podcast based on chapter 5 of Gerd's book Technology vs Humanity, "The Internet of Inhuman Things?", covering topics like responsible ecosystems and supply chains, and planning for externalities and unintended consequences.
More podcasts here
This topic is also discussed in chapter 8 of Gerd's book, “Precaution vs. Proaction”. Here is a quote from that chapter: “Imagine entering an arms race with AI-controlled weapons that can kill without human supervision.” A podcast on that chapter is coming soon as well.
A set of posts and articles on Digital Ethics
And here is an extract from one of Gerd's keynotes, specifically a section on “externalities”: