Smart Guns With “Ethical AI”: This Sounds Like a Great Idea

You can read the research paper produced by the Rensselaer Polytechnic Institute’s Selmer Bringsjord, Naveen Sundar Govindarajulu, and Michael Giancola in its entirety here, but their basic idea is to incorporate AI technology into a smart gun that would determine when it’s ethical to pull the trigger. If the artificial intelligence doesn’t see a need for a gun to be used in a particular circumstance, it would simply lock the firearm and render it useless even to authorized users.
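To make the gating idea concrete, here is a minimal sketch of a lockout layer that sits between the trigger and the firing mechanism. This is not the authors' actual design; the class names, the ethics-module interface, and the decision flow are all assumptions for illustration only.

```python
from enum import Enum


class Verdict(Enum):
    PERMIT = "permit"   # the AI judges firing ethically permissible
    LOCK = "lock"       # the AI locks the firearm, even for an authorized user


class FireControl:
    """Hypothetical lockout layer: the trigger only works if both the
    identity check and the ethics check pass."""

    def __init__(self, ethics_module):
        # ethics_module is assumed to expose evaluate(context) -> Verdict
        self.ethics_module = ethics_module

    def trigger_pulled(self, user_authorized: bool, context: dict) -> bool:
        if not user_authorized:
            return False  # ordinary smart-gun behavior: unknown user, no fire
        if self.ethics_module.evaluate(context) is Verdict.LOCK:
            return False  # the added step: AI sees no ethical need, gun stays locked
        return True       # authorized user and permissible context: firing enabled
```

The point of the sketch is simply that the ethics check is a second, independent gate: passing the usual owner-authentication step is no longer enough on its own.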

As a hypothetical example, the trio uses the 2019 shooting at an El Paso Walmart to describe how their technology might play out in the real world.

If the kind of AI we seek had been in place, history would have been very different in this case. To grasp this, let’s turn back the clock. The shooter is driving to Walmart, an assault rifle, and a massive amount of ammunition, in his vehicle. The AI we envisage knows that this weapon is there, and that it can be used only for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are). At Walmart itself, in the parking lot, any attempt on the part of the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the particular case at hand, the AI knows that killing anyone with the gun, except perhaps e.g. for self-defense purposes, is unethical. Since the AI rules out self-defense, the gun is rendered useless, and locked out.
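A toy version of the reasoning in that passage might look like the following. The context fields, the whitelist of permitted purposes and environments, and the self-defense flag are all assumptions made for illustration; the paper itself does not specify this representation.

```python
from dataclasses import dataclass, field


@dataclass
class Context:
    """Hypothetical snapshot of what the AI is assumed to know."""
    location: str                      # e.g. "walmart_parking_lot"
    declared_purpose: str              # e.g. "hunting", "range_practice"
    self_defense: bool = False         # does the AI judge this to be self-defense?
    permitted: dict = field(default_factory=lambda: {
        # purpose -> environments where that purpose is treated as legitimate
        "hunting": {"hunting_grounds"},
        "range_practice": {"shooting_range"},
    })


def evaluate(ctx: Context) -> str:
    """Lock unless the purpose/environment pair is whitelisted or the
    situation is judged to be self-defense."""
    if ctx.self_defense:
        return "permit"
    if ctx.location in ctx.permitted.get(ctx.declared_purpose, set()):
        return "permit"
    return "lock"


# The scenario as the quote describes it: the AI rules out self-defense,
# and a Walmart parking lot is not a permitted environment for any purpose.
print(evaluate(Context(location="walmart_parking_lot",
                       declared_purpose="unknown")))  # -> "lock"
```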

Full Story: https://bearingarms.com/camedwards/2021/02/24/researchers-smart-guns-ethical-ai-n41415
