We propose to build directly upon our longstanding, prior R&D in AI/machine ethics in order to attempt to make real the bluesky idea of AI that can thwart mass shootings, by bringing to bear its ethical reasoning. The R&D in question is overtly and avowedly logicist in form, and since we are hardly the only ones who have established a firm foundation in the attempt to imbue AIs with their own ethical sensibility, the pursuit of our proposal by those in different methodological camps should, we believe, be considered as well. We seek herein to make our vision at least somewhat concrete by anchoring our exposition to two simulations, one in which the AI saves the lives of innocents by locking out a malevolent human’s gun, and a second in which this malevolent agent is allowed by the AI to be neutralized by law enforcement. Along the way, some objections are anticipated, and rebutted.
“…some objections are anticipated, and rebutted.” Uhhh… No.
Here are the objections they anticipated, paraphrasing:
- Why not legally correct AIs instead of ethically correct ones?
- What about “outlaw” manufacturers that make firearms without the AI?
- What about hackers bypassing the AI?
Their responses, paraphrasing in some cases:
- “There is no hard-and-fast breakage between legal obligations/prohibitions and moral ones; the underlying logic is seamless across the two spheres. Hence, any and all of our formalisms and technology can be used directly in a ‘law-only’ manner.”
- Even if the perpetrator(s) had “illegal firearms” in transit, other AIs in a sensor-rich environment “would have any number of actions available to it by which a violent future can be avoided in favor of life.”
- “This is an objection that we have long anticipated in our work devoted to installing ethical controls in such things as robots, and we see no reason why our approach there, which is to bring machine ethics down to an immutable hardware level cannot be pursued for weapons as well.”
The first objection and rebuttal doesn’t really require any response. It just doesn’t matter to me. Sure, whatever.
They dismiss the second objection with a presumption of unknowable knowledge. People smuggle massive quantities of drugs in vehicles even though the vehicles are searched by any number of sensors, dogs, and dedicated humans. What makes them think a single firearm can possibly be detected by semi-passive or even active sensors?
More fundamentally, they are avoiding the objection, and their own answer hands critics a ready response: if “any number of actions” are available without an AI controlling access to the firearm, then you don’t need the AI in the gun to begin with.
The third objection puts on full display their ignorance of firearms, and perhaps of mechanical devices in general. To demonstrate the absurdity of their claim, imagine someone saying they were going to put an ethical AI, at an “immutable hardware level”, on a knife so it could not be used to harm innocent life.
Such people should be, and would be, laughed off the stage into obscurity. The same should happen to those who seriously suggest it is possible to do this for firearms.—Joe]