“Gun violence in this country is an epidemic, and it’s an international embarrassment,” President Biden recently said. At least 45 mass shootings have occurred in America in the last month, according to reports. In the same period, news of police officers killing unarmed Black men and boys, including 20-year-old Daunte Wright in Brooklyn Center, Minnesota, and 13-year-old Adam Toledo in Chicago, sparked waves of protest around the country.
These all-too-common tragedies could be significantly reduced — and even eliminated — without any of the partisan rancor and gridlock typically associated with gun-related debates, says Selmer Bringsjord, an expert in artificial intelligence and reasoning and a professor of cognitive science at Rensselaer Polytechnic Institute.
“There is a solution,” Bringsjord, the director of the Rensselaer AI and Reasoning Laboratory, wrote in the Times Union. “A technological alternative to the fruitless shouting match between politicians: namely, AI — of the ethical sort. Guns that are at once intelligent and ethically correct can put an end to the mass-shooting carnage.”
Rather than endlessly debating whether the public should have more guns or fewer, Bringsjord offers a novel and, he says, plausible proposal: shift to “smart and virtuous guns, and intelligent restraining devices that operate in accord with ethics, and the law.”
Along with his coauthors, Bringsjord detailed his ideas in a recent paper, “AI Can Stop Mass Shootings, and More.” Anticipating some counterarguments, the authors urge readers “to at least contemplate whether we are right, and whether, if we are, such AI is worth seeking.”
Bringsjord and his collaborators have created simulations showing how, in only 2.3 seconds, ethical AI technology can perceive a human’s intent and environment and then, if necessary, prevent their gun from firing. Importantly, he notes, the same technology that could prevent a criminal from opening fire in a public area could also prevent a police officer from shooting a person who posed no threat.
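The decision loop described above — perceive the shooter’s intent and context, then permit or block firing — can be illustrated with a toy sketch. Everything here (the `Context` fields, the `may_fire` rules) is a hypothetical assumption for illustration, not a detail of Bringsjord’s actual simulations:

```python
# Hypothetical sketch of an ethical-AI firing check: the weapon consults
# an AI layer that evaluates intent and context before it may fire.
# All names and rules are illustrative assumptions, not Bringsjord's system.
from dataclasses import dataclass

@dataclass
class Context:
    target_is_threat: bool    # does the target pose an imminent threat?
    user_is_authorized: bool  # e.g., a vetted officer or licensed owner

def may_fire(ctx: Context) -> bool:
    """Return True only if firing is permissible under this toy rule set."""
    if not ctx.user_is_authorized:
        return False  # blocks an unauthorized shooter in a public area
    if not ctx.target_is_threat:
        return False  # blocks shooting a person who poses no threat
    return True

# The same check constrains both a would-be criminal and an officer:
print(may_fire(Context(target_is_threat=False, user_is_authorized=True)))   # False
print(may_fire(Context(target_is_threat=True, user_is_authorized=False)))   # False
```

Note that the two failure cases mirror the article’s point: one rule stops the unauthorized shooter, the other stops an authorized user from firing on a non-threatening person.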
“Ultimately research along this line should enable humans, in particular some human police, to simply be replaced by machines that, as a matter of ironclad logic, cannot do wrong,” Bringsjord said in a recent public radio segment.
The AI capabilities Bringsjord describes grew out of seven years of prior funding from the Office of Naval Research devoted to developing moral competence in robots.
Bringsjord has spoken about robots and logic at TEDxLimassol. He is the author of What Robots Can and Can’t Be and Superminds: People Harness Hypercomputation, and More. He is also the co-author of Artificial Intelligence and Literary Creativity: Inside the Mind of Brutus, a Storytelling Machine.
Bringsjord is available to speak about his recent proposals around AI-enabled guns, as well as other aspects of AI, human and machine reasoning, and formal logic.