When Peter George saw news of the racially motivated mass-shooting at the Tops supermarket in Buffalo last weekend, he had a thought he’s often had after such tragedies.
“Could our system have stopped it?” he said. “I don’t know. But I think we could democratize security so that someone planning on hurting people can’t easily go into an unsuspecting place.”
George is chief executive of Evolv Technology, which makes an AI-based screening system meant to flag weapons, "democratizing security" so that weapons can be kept out of public places without elaborate checkpoints.
Evolv machines use "active sensing," a light-emission technique that also underpins radar and lidar, to create images, which the system then examines with AI. Data scientists at the Waltham, Mass., company have created "signatures" (essentially, visual blueprints) of weapons and trained the AI to compare them against the scanner images.
This tool is worse than useless: it will create opportunities for more murders. That is, unless you are a tyrant intent on disarming your subjects.
First off, a mass shooter will start shooting before passing through the detector, taking out the guards before they even have a clue a threat is present. And since the detector creates a "funnel" of people waiting to pass through, there will be a crowd ready for "harvesting" by the perpetrator. The system also makes it difficult or impossible for people to defend themselves wherever it is deployed.
Hence, if your threat model is a mass shooter, the device actually makes things worse, not better. Many other threat models suffer a similar degradation of public security.
The one threat model that doesn't degrade is the tyrant's: you want your subjects dependent on you for security, and you want to make it difficult for them to threaten your position of power. In that case, this system is a useful asset for disarming your subjects.