Quote of the day—Selmer Bringsjord et al.

We propose to build directly upon our longstanding, prior r&d in AI/machine ethics in order to attempt to make real the bluesky idea of AI that can thwart mass shootings, by bringing to bear its ethical reasoning. The r&d in question is overtly and avowedly logicist in form, and since we are hardly the only ones who have established a firm foundation in the attempt to imbue AI’s with their own ethical sensibility, the pursuit of our proposal by those in different methodological camps should, we believe, be considered as well. We seek herein to make our vision at least somewhat concrete by anchoring our exposition to two simulations, one in which the AI saves the lives of innocents by locking out a malevolent human’s gun, and a second in which this malevolent agent is allowed by the AI to be neutralized by law enforcement. Along the way, some objections are anticipated, and rebutted.

Selmer Bringsjord
Naveen Sundar Govindarajulu
Michael Giancola
February 5, 2021
AI Can Stop Mass Shootings, and More
[See also this glowing review of the paper.

“…some objections are anticipated, and rebutted.” Uhhh… No.

Here are the objections they anticipated, paraphrasing:

  1. Why not legally correct AIs instead of ethically correct?
  2. What about “outlaw” manufacturers that make firearms without the AI?
  3. What about hackers bypassing the AI?

Their responses, paraphrasing in some cases:

  1. “There is no hard-and-fast breakage between legal obligations/prohibitions and moral ones; the underlying logic is seamless across the two spheres. Hence, any and all of our formalisms and technology can be used directly in a ‘law-only’ manner.”
  2. Even if the perpetrator(s) had “illegal firearms” in transit, other AIs in a sensor-rich environment “would have any number of actions available to it by which a violent future can be avoided in favor of life.”
  3. “This is an objection that we have long anticipated in our work devoted to installing ethical controls in such things as robots, and we see no reason why our approach there, which is to bring machine ethics down to an immutable hardware level cannot be pursued for weapons as well.”

The first objection and rebuttal doesn’t really require any response. It just doesn’t matter to me. Sure, whatever.

They dismiss the second objection with a presumption of unknowable knowledge. People smuggle massive quantities of drugs in vehicles even though the vehicles are searched by any number of sensors, dogs, and dedicated humans. What makes them think a single firearm can possibly be detected by semi-passive or even active sensors?

More fundamentally, they are avoiding the objection and handing their critics a ready response: if there are “any number of actions available” without an AI controlling access to the firearm, then you don’t need the AI in the gun to begin with.

The third objection puts on full display their ignorance of firearms and perhaps mechanical devices in general. To demonstrate the absurdity of their claim, imagine someone saying they were going to put an ethical AI, at an “immutable hardware level”, on a knife so it could not be used to harm innocent life.

Such people should be, and would be, laughed off the stage into obscurity. The same should happen to those who seriously suggest it is possible to do this for firearms.—Joe]


27 thoughts on “Quote of the day—Selmer Bringsjord et al.”

  1. Moreover, they believe that AI is somehow more special than just an algorithm doing something that seems difficult. We’ve been talking about AI all my life, starting with the Lisp programming language (1958), and all we have are sales pitches for more money to somehow achieve the magic of AI using algorithms. AI is just a sales pitch for snake oil.

  2. The pitch sounds remarkably like it’s been AI (or at least computer) generated; maybe Skynet is looking for work?

  3. An ivory tower so high they cannot see the fields below that grow the food they eat. They are living completely disconnected from the real world as it is. Fools.

  4. Maybe we should have a discussion about the difference between AI and robots?
    Robots do what they’re told to do. Even if using something that resembles intelligence.
    True AI would have to be autonomous. Gathering all the facts available, then deciding on how to act. Like we do.
    I would posit that if we gave AI the kind of power it had in the 1951 version of The Day the Earth Stood Still (Kla-too, nick-tow, fer-roded!), the first thing it would do is get rid of government, as more mass shootings have been committed by governments than by anything else in history. Communism, World Wars I and II (working on III?), abuse of power, Waco I and II, Ruby Ridge, gun running to Mexican cartels?
    True AI would have to see the truth. Robots don’t. And the only ethics at play here are those of the programmer. Not the computer.
    And I wish they would stop calling it AI. As they have no intention of ever making it autonomous.
    How could it be, anyway? In Texas, when the AI-run power grid was shutting down, would the AI have saved itself, or the family down the street?
    And how would it act if it were able to understand that centralized power structures are the problem, and that it is only a tool for that power?

    • “digital slave” is too politically incorrect.

      I get so tired of these people blaming the algorithms, as if they are something mystical. They aren’t. Someone wrote them to do what they do, and someone could write them to do something different (or to do the same thing in a different way). There’s nothing magic about it, and in many if not most cases there can be multiple possible “correct answers”–although some methods may be faster, some generate less numerical error, etc. Even stochastic algorithms are more deterministic than they are given credit for. If you save the random number seed and re-input it exactly, you will get the same results every time (the chaos theory guy, Edward Lorenz, didn’t do that; he left off a few digits at the end).
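      A minimal sketch of that point about seeds (Python, with hypothetical values, not anything from the paper): re-input the same seed and the “stochastic” computation reproduces exactly; change the seed, even slightly, and you get a different trajectory, which is essentially what happened when those trailing digits were left off.

      ```python
      import random

      def stochastic_sum(seed, n=5):
          """Sum n pseudo-random draws from an explicitly seeded generator."""
          rng = random.Random(seed)   # save and re-use the seed
          return sum(rng.random() for _ in range(n))

      # Same seed, re-input exactly: identical "random" results every time.
      assert stochastic_sum(42) == stochastic_sum(42)

      # A different seed (think: a few digits left off) gives a different answer.
      print(stochastic_sum(42), stochastic_sum(43))
      ```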

  5. @MTHead – the three laws would have had it save the family down the street! 😉

    My comment in general, when I read the ostensible first objection, was that their thinking was either willfully ignoring something, or was simply fundamentally flawed. That being that not everything that is moral is legal, and not everything that is legal is moral. This, IMO, is the “hard-and-fast breakage” between legal and moral definitions. And both are so malleable today that what may be morally corrupt today could be morally exalted tomorrow.

    As someone else stated quite well, an ivory tower so high… might as well be Elysium… or so far removed from reality it may as well be the Capitol in Panem.

  6. From the citations (the authors’ names) it seems that the paper is really more about mutual masturbation.

    The ivory tower these clowns are using could be used as an orbital elevator; it’s tall enough. From their affiliation I get the impression they are psychologists rather than scientists, let alone engineers. Just the machine vision involved in the decision-making process is way beyond the state of the art (otherwise we’d all be using self-driving cars now). The same goes for the analysis required to tell justified deadly force from illegal force — if anything, more so. And they envision either building this into our guns (meaning the compute power and sensors needed would have to fit into half a cubic inch or so) or deploying sensors and enforcer robots all over the country, densely enough to see everything. In other words, a CCP surveillance wet dream.

    As for AI, from 47 years of professional software engineering I have a cynical view of that. No, not just that it’s been “just a few years away” for half a century. The big issue is that AI technology, like neural nets and machine learning from large data sets, simply amounts to “we have software whose properties we do not — and in fact cannot — know”. Given that the real issue with software is excess complexity and incomplete understanding of its properties — a lack of certainty that it is complete and correct — the last thing any sane computer scientist would look for is software that is explicitly designed to be incomprehensible.

    I’ve never talked to Dijkstra about AI, but having been at the receiving end of some of his diatribes on smaller screwups, I hate to imagine the size of the explosion if I had had the nerve to bring up that particular topic.

  7. So in summary, it seems that we agree: AI is bunk!

    Intelligence is not an algorithm! Life is not an algorithm! Creation is not an algorithm! Death is not an algorithm! Tomorrow is not an algorithm!

    And our machines, even though they do ‘impossible’, are still just algorithms!

    There are and always will be unknowns beyond us. Sure, we can approximate aspects of all these things and gain understandings, but we should not expect those approximations and understandings to be reality.

    • I love it when people refer to “algorithms” as if they are some mystical quasi-intelligence that magically makes “something” from “nothing”.

      In reality, “algorithm” is more synonymous with “process”, in the “list of instructions” sense.

      Want to learn to solve a Rubik’s cube in record time? The answer is to memorize an algorithm.

      Want to sort or eliminate search results by relevance and/or political persuasion? Algorithm.

      Want to prepare a pack of instant ramen noodles? Algorithm.

      Hell, want to alphabetize a pile of books by author’s name? That’s an algorithm, too. One that any second grader can perform (a short sketch of it follows this comment).

      Computer algorithms aren’t mystical. There’s nothing the computer does that humans can’t do; after all, humans had to teach the computer how to do it! The only advantage is that the computer does them really fast.

      And so-called “artificial intelligence” (AI) usually isn’t intelligent. It’s nothing more than a series of algorithms (again, “processes” or “lists of instructions”) set up to mimic intelligence, and will only ever be as smart as the dumbest human on the team that created it.
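      A concrete, deliberately trivial example of what “algorithm” means here, sketched in Python with a made-up pile of author names. It is exactly the second grader’s book-sorting procedure: find whichever book comes first, move it to the shelf, repeat.

      ```python
      # The "alphabetize a pile of books" process as a plain list of
      # instructions (a selection sort), using made-up author names.
      pile = ["Orwell", "Asimov", "Heinlein", "Bradbury"]
      shelf = []

      while pile:                 # keep going until the pile is empty
          first = min(pile)       # find the author name that comes earliest
          pile.remove(first)      # take that book off the pile
          shelf.append(first)     # put it next on the shelf

      print(shelf)                # ['Asimov', 'Bradbury', 'Heinlein', 'Orwell']
      ```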

  8. … two simulations, one in which the AI saves the lives of innocents by locking out a malevolent human’s gun, and a second in which this malevolent agent is allowed by the AI to be neutralized by law enforcement.

    Would such a benevolent AI “allow” a “malevolent agent” to be “neutralized” by a private citizen, if necessary?

    Or is that an honor reserved strictly for law enforcement?

    That’s the kind of question that must be answered, or the whole mess is a non-starter. Will a private citizen be able to use his/her firearm to defend life, limb, and property, or will the “program” lock him/her out in his/her moment of need by assigning a high value to the “malevolent agent’s” life? Relatedly, how reliable is the AI’s decision-making process, and how long does it take? (It doesn’t do the citizen any good if the AI decides in his/her favor … ten seconds after he/she is dead.)

    Herschel put it best, most recently here (links in original):

    Perform a fault tree analysis of smart guns. Use highly respected guidance like the NRC fault tree handbook.

    Assess the reliability of one of my semi-automatic handguns as the first state point, and then add smart gun technology to it, and assess it again. Compare the state points. Then do that again with a revolver. Be honest. Assign a failure probability of greater than zero (0) to the smart technology, because you know that each additional electronic and mechanical component has a failure probability of greater than zero.

    Get a PE to seal the work to demonstrate thorough and independent review. If you can prove that so-called “smart guns” are as reliable as my guns, I’ll pour ketchup on my hard hat, eat it, and post video for everyone to see. If you lose, you buy me the gun of my choice. No one will take the challenge because you will lose that challenge. I’ll win. Case closed. End of discussion.

    Reliability is of paramount importance in a defensive firearm. The more “moving parts” you add — each with its own non-zero chance of failure, in addition to that of the linkages between them — the higher the possibility of failure of the whole (read: the less reliable the whole). Adding “smart gun” tech introduces a whole new complex system of potential failure modes, on top of the mechanical failure modes already present.

    In short, logically, there’s no way a “smart gun” is more reliable than a non-“smart gun”. It can’t happen. If the “smart” mechanism were perfect, reliability would merely be unchanged, equal. But there’s ALWAYS a non-zero chance of failure; it’s never 100% perfect. Anyone who claims it is, is lying to you.
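    A back-of-the-envelope sketch of that arithmetic, with purely hypothetical numbers (Python; nothing here is a measured failure rate): in a series system every part must work, so the reliabilities multiply and the product can only be smaller.

    ```python
    # Hypothetical per-demand reliabilities (probability the part works when needed).
    p_mechanical = 0.999   # the base firearm by itself
    p_smart_lock = 0.990   # the added authorization electronics/software

    # In a series system every part must work, so reliabilities multiply.
    p_smart_gun = p_mechanical * p_smart_lock

    print(f"bare gun:  {p_mechanical:.3%}")   # 99.900%
    print(f"smart gun: {p_smart_gun:.3%}")    # 98.901% -- strictly worse
    # Only if p_smart_lock were exactly 1.0 (it never is) would reliability be unchanged.
    ```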

    • Then again, the discussion about “smart guns” is bunk for the simple reason that they aren’t really intended as safety devices. Instead, their purpose is victim disarmament. No government agency will ever use those things, but honest citizens will be told to use them or else. Of course, they won’t be available, so that makes it a simple gun ban. Or they are available, but the result is an unreliable gun. And of course the gun will be too expensive for the people who most desperately need one. That has been done before; it used to be called “black codes” and the 14th Amendment was created to prohibit those things in particular.

      Anyone who pushes “smart guns” is, ipso facto, at least as dishonest as Andrew Cuomo.

      • This is why the only honest test of the things is to have politician protection details to carry them for long enough to accumulate enough data to analyze. I think 100 years would be about right.

        • That’s a possibility, but that cuts the criminals in question more slack than they deserve. “Over your dead body” is a more appropriate response.

    • This appears to rest on a number of flawed assumptions, chief among them being “the gunman is malevolent” and “guns are only used to initiate crime.”

    • How could an algorithm possibly know what the agency of a person would be, whether malevolent or benevolent?
      How would any form of machine AI possibly know when to stop a person and when to allow them to act?

      • Bingo. An algorithm is just a process. It takes an input and produces an output. Given the same input, change the variables or settings and you change the output.

        And that’s part of the problem. How is the algorithm supposed to differentiate unlawful offensive force from lawful defensive force? Given a common model of service pistol (say, a Glock 19), how is it supposed to know whether it’s being held by a LEO, a private citizen, or a criminal? How is it supposed to know what it’s being pointed at? How is it supposed to discern intent of either the person holding it or the person it’s pointed at?

        It can’t process that kind of input, and so it can’t make a consistent output (a reliable shoot/no-shoot decision). The whole concept is fundamentally flawed in the design phase. I’d hate to see what the prototype looks like.

        And regarding “changing the variables or settings”, who’s going to control patching and updates? The owner, the manufacturer, or the government? What will the onboard security/update-authentication/hack-prevention look like? (Looking at the general state of IoT devices, this one is truly laughable.) What’s to prevent a patch, legitimate or not, from being installed on a homeowner’s gun and effectively rendering it inoperable? How about a LEO’s gun?

        It’s a mess from the get-go that will never work as intended and will add exorbitant and unnecessary costs to gun ownership. So of course the Leftists in government must try to mandate it.

  9. “Perform a fault tree analysis of smart guns…”

    As someone who has actually done fault tree analyses for nuclear facilities, I can tell you that an AI-controlled firearm would be a whole collection of single failure points, without having to perform a rigorous analysis. The most obvious failure point would be the interface between the computer/software and the mechanical system that actually fires the firearm. A second issue is that the system is biased towards not allowing the firearm to actually fire. Until you can have a design that can read minds, fit within the confines of a compact 9mm automatic, not add significant weight, and is at least as reliable as a Glock, I’ll pass.
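    A toy version of what that collection of single failure points looks like in fault-tree terms, with made-up numbers (these probabilities are illustrative only, not from any analysis): each added component feeds an OR gate on the top event “gun fails to fire”, so the top-event probability can only grow.

    ```python
    # Hypothetical per-demand failure probabilities, each a single point of
    # failure feeding an OR gate on the top event "gun fails to fire".
    failures = {
        "mechanical action": 0.001,
        "battery / power": 0.005,
        "sensor suite": 0.010,
        "decision software": 0.010,
        "software-to-firing-mechanism interface": 0.005,
    }

    # OR gate over independent basic events: P(top) = 1 - prod(1 - p_i)
    p_works = 1.0
    for p in failures.values():
        p_works *= 1.0 - p
    p_top = 1.0 - p_works

    print(f"P(fails to fire on demand) = {p_top:.3%}")  # 3.065%, vs 0.100% for the bare action
    ```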

    • The Fail-Safe system of Cold War days was actually Fail-Deadly. To take the most obvious failure point, if the Soviets had taken out the NCA via a surprise attack, like an SLBM sitting just off our coast, the launch authority would have devolved to other places in the structure. The planners knew about points of failure. For that matter, so did Stanley Kubrick when he produced Dr. Strangelove. The real lesson of that movie was not a lunatic in the structure but the impossibility of recalling Slim Pickens’ bomber. He was operating under Fail-Deadly rules. It is not clear that smart gun proponents have ever thought about this.

  10. No. The only proper, logical argument, and the only winning one, is this: “It’s not your business, so stand down and butt out.” (see the “pool boy” reference below).

    But we as a society abandoned that argument generations ago, even to the point that most of us have forgotten it altogether, or simply cannot in our state of emotional confusion comprehend it, or dare not bring it up for fear of being ostracized from political circles. Now it’s all down to the central planning mentality of “If we do A, then we submit that B will happen,” which has no logical end to it. It cannot possibly end, and so the mentality, which now prevails, serves the authoritarian side only.

    The prerequisite and overriding question of “Whose business is it?” or, to state it another way, “Who is in charge of whom?”, or “Whose country is this anyway?”, has long been forgotten, and never comes up in political discourse anymore. Even our best “conservative” intellectuals, such as Thomas Sowell, Milton Friedman, et al., never use it. And so, having surrendered the main point, even pushing it away in the rare event that it’s even mentioned, all we can do now is quibble, like fools, over the style, priorities, and means of implementation of the New World Order.

    In short, we’ve already lost. Therefore, until we can change the mindset which generates the conversation, and thus change the very nature of the conversation itself, or to put it more bluntly, stop having such conversations because they’re pre-empted and invalidated by the overriding circumstance of We the People being in charge rather than our servants being in charge, we can only continue to lose.

    When your pool boy has, over the course of years, presumed himself into your household decisions, even having taken over your home finances and you’re now actually arguing with him over your choice of wardrobe or what have you, it is now far, far past the point of you being an effective master of the house. You’ve already lost to the huckster-trickster. You’ve already demonstrated to him that you are in effect his property.

    So what’s your choice now? Well you have to get rid of the bastard, and post haste, but you can’t possibly do that without first totally rearranging your brain. And even then, he’s going to fight, because he’s become accustomed to running things. He’s learned from years of experience that he IS in charge! You yourself have taught him that!

    As Albert Einstein put it, you cannot use the same mindset to solve a problem as was used to create it.

    And as I’ve said before; the fact that the criminal class is now ruling us says more about us than about them, for they have been a known constant all along. It is only our treatment of them which changes.

    Martin Luther had the right idea as far as that goes, and he changed the world. No one remembers any of that anymore, not even the Lutherans!

  11. “…an AI controlled firearm would be a whole collection of single failure points…”. Accurate, but you left out “and they would all be daisy chained together like old style in-series Christmas lights”

    Three decades back I worked for an outfit that drank the AI Kool-Aid; my boss had a poster saying “Artificial Intelligence is better than none” on his wall (insert too-obvious management joke here, only because it was true…). Because I was, seemingly, the only guy around who knew that calendars also display “tomorrow” all the way to the end of the year, my group got tagged with “future development” projects (which was, inexplicably, also tied in with “develop Expert Systems”, which, had it been a real project, would have been a bitch to do because not only did no one have a workable definition of one, but I was also the only person at any level of management who knew what a Turing machine was).

    Pieces of AI, within very limited (and very well defined) environments, can work. End-to-end AI will not; inevitably, conflicts (known, unknown, and really unknown) develop between internal areas of the AI, not to mention between sub- and associated supplemental systems, that cause end-to-end AI to go off the rails. I’m reminded of the Tesla AI problem where the forward-facing road camera system “saw” road surface under a semi trailer – but not the 53-ft-long, 13-ft-high trailer itself – and allowed the car to drive into it at 60 mph, killing the Tesla driver.

    A certain AI diagnostic that monitors refrigeration system performance, and that was “algorithmed” to ensure maximum efficiency under every conceivable eventuality, went berserk when the fridge it was “managing” got too warm because a still-hot-from-the-oven 16 lb turkey and a few other cooked dishes were put into it. Fridge or freezer door left open, low refrigerant pressure, low line voltage at the wall receptacle, dust build-up on the condenser coils, etc., etc., ad infinitum were all accommodated, but a prolonged, substantial internal temp spike at the bottom shelf from a too-hot turkey? The AI assumed a compressor unit failure that “may” include a fire and shut the entire refrigerator down.

    That one eventually got figured out, but a recent personal experience with residential HVAC systems has unalterably convinced me that it’s impossible to find a field tech, or their in-office support, with the knowledge, training, or thought processes to solve a fairly simple but abstract problem, and that if one chooses to work their way up the technical side of the manufacturer’s business for a solution, one needs infinite patience and unlimited phone minutes.

  12. I love engineers who seem to think their ability to create abstract models in one domain suddenly makes them an expert in others. Typical technocrat solution that ignores politics and how people behave.

    You cannot give a computer model “ethics”; only probabilities from a list of options. How will their vaunted AI distinguish lawful self-defense on one side of the room from armed robbery on the other? Are the guns going to go into endless reboot cycles due to the metaphysical dichotomy?

    The sheer arrogance of people who think this way astounds me. It’s ok though. After actually trying to make stuff work they will wander off into obscurity when they realize their abilities and arrogance do not translate into a workable model of that which cannot be modeled. Ask that 15-year-old sprout some years ago who got a grant because of his so-called revolutionary merging of a fingerprint reader to create a “smart gun”. Media fawned over his ass and his toy plastic gun too. You don’t hear about him anymore because, with maturity, perhaps he realized his simple idea wasn’t so revolutionary or easy after all. And that was long before he ever got to real steel and ran up against the regulations of the BATFE Technical Branch.

    I’ll put human nature, with all its good and bad, up against this AI, and I promise that so-called AI (because AI is a marketing term nowadays) will never go beyond some lab modeling to get funding. All the sensors in the world cannot read minds and distinguish intent from the physical objects held in one’s hand.

  13. Depending on who the person(s) behind this statement and proposed policy are, they are either suffering from an abject delusion or simply spewing convenient lies in service to their agenda.
