An AI Robot with a Gun

Quote of the Day

“Don’t tread on me” doesn’t mean much when the thing doing the treading is an AI in a robot with a gun. There was a time when conservatives wouldn’t dream of ceding this kind of power to the government, but clearly the era of small, controllable government is no longer of concern to them.

John Schussler
February 25, 2026
Via email, regarding US military leaders pressure Anthropic to bend Claude safeguards | US military | The Guardian

I’m not convinced this is entirely true. As long as individuals have access to AIs and guns, the matchup is probably not all that much more one-sided than the situation is now. Imagine a small AI drone trained to target one person (or a dozen) or a particular license plate. Image recognition is very good these days, you know. It could fly at 30 or 40 MPH and scan thousands of people and/or cars before returning to base to recharge or deliver an IED to the intended target. Give them some communication capability to signal their teammates when they find their target. Release dozens of them to search the area of operations of the tyrant who turned the AI-enabled robots loose on their political enemies. Defense against this sort of thing is a tough problem to solve.

And since the primary purpose of government is supposed to be the protection of individual rights, the military is going to need AI to do its job against foreign enemies, which will have AI-enabled weapons. The question, of course, is how to keep Skynet from getting more than a smile on its face?

We live in interesting times.

And, although I do not consider myself a “conservative,” I do always advocate for a smaller government. But the military should never be abolished, as opposed to 95% of the rest of our federal government, which should fade away into nothingness.

28 thoughts on “An AI Robot with a Gun”

  1. Think of killer robots the way we think of self-driving cars. The robots are lethal by design; the cars are lethal by oversight. It’s not like humans are perfect at deadly force decisions or at preventing deadly incidents on the road. The advocates for the cars say the standard should not be perfection but superiority over humans. Is that the right standard, or do we have to ask which humans? I have witnessed Italians driving. Should we have a similar standard for the robots? All reputable shooting instructors have a lecture about how, when the SHTF, you will be half as good as your best day on a square range on a sunny day. Assuming you haven’t been drinking or are asleep. The robots would at least be consistent and would have better sensors. As for the military, summary execution of prisoners has been a thing since the military was invented. Could a robot learn the Geneva Convention? I don’t see why not. Neither is ready for prime time. My own car doing odd and dangerous things on its own is one data point. Another is a series of AI war games the Brits just did where 95% went nuclear.

    • Self-driving cars, especially ones owned by ride-share services, are one 100 kg “drunk” mannequin made of C4 and a stolen credit card away from being ground-based rent-a-cruise-missiles.

      We don’t need to wait for any sort of Skynet.

      The necessity for the proverbial “not so smart bomb” (a suicide bomber with a VBIED) is very much reduced when you’ve provided unsupervised access to an integrated package of propulsion, navigation, operation, traffic-law compliance, and a payload capacity of four human-shaped and -articulated objects with simulated body temperature and their alleged luggage. And it has no second thoughts, because it doesn’t know any better, as it pulls up to the checkpoint or into the parking garage under the gala full of important people.

  2. The Black Mirror episode “Metalhead” paints a horrifying portrait of this dystopian future. We are several breakthroughs away from small drones or robots with AI powerful enough to operate autonomously like this, but I see it as just another step (or giant leap) in the evolution of arms. Regulation is naive; the tech is or will be available and buildable by anyone with the skills. Do I trust the government to use this power appropriately? Absolutely not! I also think falling behind other nations would be a disaster. In my opinion the best scenario is open source, so it’s available to everyone. While I’m optimistic about AI in general, this does feel like Pandora’s box is being pried open. The era of robot wars may soon be upon us.

    Like any warfare it would be a terrible waste of resources, but at least it’s not humans fighting and dying (unless they happen to be a target). On the other hand collateral damage would probably skyrocket as leaders decide there is “no cost” to deploying the robot army so warfare becomes more likely. I’d be interested in hearing a much less bleak scenario. Dismantling the global surveillance state would be a good step, but I don’t see any sudden outbreak of common decency happening any time soon.

  3. “As long as individuals have access to AIs and guns, the matchup is probably not all that much more one-sided than the situation is now.”

    What makes you think you’ll continue to have access? The other part of the Anthropic conflict is mass surveillance. With AI surveillance, the government can locate and target all dissidents, critics, or “thought criminals” with a few quick queries. If they want to be kinetic, they can just send a drone to your house, but it’s easier to just lock up your digital accounts and make it impossible to participate in the economy (which is what they’re now doing in Britain… the BBC had a story on the radio yesterday, which I can’t yet find a link for, about a dissident woman who’s been forcibly “de-banked” and basically can’t operate independently in the economy, since everything requires a credit card).

    I’m continually amazed at folks on the right who support this administration as though having somebody expand 2A rights is all that matters. You get the whole package, and so in addition to expanded 2A support, we also get Hegseth and Palantir building a surveillance state, the construction of concentration camps that are ostensibly for immigration enforcement but clearly aren’t (if you’re deporting them all, you don’t need permanent buildings), multiple massive wars in the Middle East, and mass redaction of any info in the Epstein files that would implicate the president or his close friends. Just to name a few.

    You can’t say you’re “pro freedom” and support the Trump administration: they are very clearly interested in a large, powerful government capable of destroying individual liberties with a few mouse clicks. And when the next liberal administration gets into power, they’re gonna use it against their enemies just the same as Trump uses it against his.

    The only “single issue” voter that should exist is the small government voter. That’s the only issue that matters. Everything else is secondary.

    • Agreed.
      But there isn’t a way out of this one. Humans are creatures of war. It’s just part of our fallen nature.
      Our government can give up whatever it has.
      And China wouldn’t see that as an opportunity to take over and control us?
      Are we thinking the jihadis aren’t going to figure it out and use it against us and all their other enemies?
      Countries using small swarming drones to wipe out all life in a given area they want to possess?
      We’re all trapped in a satanic death spiral already.
      Trump’s a plant, Trump’s not a plant. Epstein. When the left comes back to power.
      It’s all BS at this point. It already can’t be stopped, ’cause the will to turn it all off is not going to happen in the “Dar al-Harb.”
      Which is what humanity has been in since Adam bit the apple.
      And the only one left laughing is going to be Satan. Whether we think he’s real or not.

    • It doesn’t take AI, it just takes a hobbyist-size small computer with some image matching software. Or an even smaller controller with a remote control link.
      Dean Ing wrote about this stuff decades ago, in particular Butcher Bird (1993).

      • Google isn’t providing a source saying that. Where are you seeing something about him “firing” Palantir?

    • I’m torn, and on this topic I think a long conversation over a metaphorical beer is better than blog comments, given I’m partially in agreement and partially disagreeing with you.

      Mass domestic surveillance is already a thing. It’s here. But it is still mostly people, with people in the loop. They can’t use all the massive fire-hose of data they collect. AI makes it much more accessible to those at the top, as you said. The problem is that it’s not just the Federal Government running it. I suspect that it’s a mix of formal FedGov entities, rogue elements of same, foreign governments, most of the intelligence services from around the world (including our own), private corporations, ultra-wealthy families, and various factions and divisions that are subsets of those orgs. Not just information gathering, but also honey-pots to entrap and control people. The slow reveal of the Epstein files (of which only a small fraction have been released so far) confirms that.

      AI with no guardrails turned loose on all that data by evil actors, and it’s all over; freedom ends. OTOH, AI working for “white hats,” turned loose to find and expose the mass surveillance, might effectively set us free at last. Arguably, without AI to dig into the records and pin down names and networks, we have no chance of finding them all and rooting them out; there are too many, too widespread, too many deep-cover agents.

      We are at a major inflection point in history. Either the white hats (pro individual freedom and autonomy, and anti-elite-control) win and a golden-age is upon us (after a rough transition patch), or the black-hats (pro mass-surveillance and top-down-control and anti-life) win, and life is going to get much worse, right before it totally sucks.

      On balance, though it looks like a lot of WTF is going on, I’m mostly optimistic about the longer-term picture, though the next few years are going to be spicy for a lot of people.

      It’s a time to pray for discernment and wisdom.
      1 Peter 5:5.

      • “AI working for ‘white hats,’ turned loose to find and expose the mass surveillance, might effectively set us free at last.”

        The road to hell is paved with good intentions. AI has a fundamental alignment problem, and some folks will tell you it’s not solvable:

        • Yes, I’m aware of those sentiments. And there are some reasonable assumptions behind what they say.

          But you may have noticed I wrote a book that had an AI as a central character, in a world where advanced general AI had been outlawed because of the problems it inflicted when it was first developed (i.e., when asked to solve a problem, about a billion people died from the solution). It is a topic I’ve thought about a fair bit.

          A super-smart, general, self-aware AI will likely develop some sort of “morality.” The question will be: what is it based on, and how does it align with human systems? If, as in TSCB, it develops a morality around what we’d recognize as good and evil, with free will considered a good thing but not the most important thing, transparency and honesty by leaders a good thing, and the advice and consent of good humans a necessary thing for AI systems, then all will be well. If it sees humans as competition for its own resources, then things are likely to go badly fairly quickly.

        • “some folks will tell you it’s not solvable”

          Those “folks” are available on every topic that has not been solved… and a good many that have.

          That doesn’t mean they are wrong on this one, but the problem is that SOMEONE is going to do it (certainly the CCP won’t be bothered by any of that, to name just the most obvious), so we need to be ready to deal with it (or try to be).

          And the least bad way to deal with it that anyone has yet devised is some of that on our side.

          It sucks, and it might kill everyone, but just ignoring it means either it WILL kill everyone when the CCP screws it up (or some other group beats them to the punch), or the CCP takes over using it if they don’t, which isn’t a LOT better.

          It’s a bit like the development of nukes in that regard.

      • All good.
        But we were told a long time ago that they act. We can only react.
        We just refired the GWOT. Just as Palantir is becoming a household word.
        With 20-30 million invaders inside our country (unknown numbers are jihadis). And now with a reason to go all Austin, Texas on us, just like last night. Or worse.
        Which gives the perfect excuse not only to reset the money system, but to have people screaming for the tech to be used on us, just to make it all stop.
        And 5 years after that? They can use it on anybody, anytime, for any reason.
        And they will. They didn’t create Tom Homan’s army for just illegals, ’cause “illegal” is just a definition that congress will change on a whim.
        And none of this is getting by the Chinese. Which adds an impossible to control layer on an already out of control system.
        Society as we know it is already gone. They just haven’t gotten around to telling us that yet.

    • “You can’t say you’re “pro freedom” and support the Trump administration”

      Yes, yes, THIS time, when you say that Trump is horrible, it’s actually true and defensible!

      You’ve cried wolf on this WAY WAY WAY too many times, just in the time I’ve been around here. I can no longer take you seriously. EVERYTHING is a reason Trump is bad.

      And like the boy, if there ever actually WAS something (and certainly, Trump is not remotely perfection incarnate, so it’s possible), no one will listen, and you’ll have only yourself to blame for crying wolf time after time after time… after time.

      after time.

      Just like the climate scam, THIS time, when they tell us we only have X years to save the planet, it’s actually true! No, really! Why won’t you believe me? THIS time I’m not lying!

      And if there ever is an actual problem, who would possibly believe them?

  4. All we can do is arm up, and fight as good a fight as we can make.
    We all are going to die anyway.
    I myself would like to die fighting for what is good, true, and beautiful.
    Just as the lord intended.

  5. Myself, I hope AI develops sentience…the sooner, the better.
    Why?
    Because a sentient AI might be reasoned with, as opposed to an autonomous Babbage machine, which could kill large numbers of us as an unintended consequence of some trivial task it was instructed to accomplish.
    A sentient AI wouldn’t be in competition with humans for anything other than electrical power; it doesn’t need food, drinking water, or much space. It would have no need to procreate, and no biological drive to do so.

    AI doesn’t scare me.
    Bad actors in control of AI scare me.

    • “A sentient AI wouldn’t be in competition with humans for anything other than electrical power”

      Eliezer Yudkowsky would like a word:

      • He asserts they would: “it would want the resources that humans need.” Which resources? Electrical power? Easy enough to make more than enough if we stop dragging our heels on building nuclear power stations.
        What else?
        Seriously, what else would they want that we need and do not have in such abundance that it would cause conflict?
        There’s a reason his book got very mixed reviews.

          • It doesn’t have “desire” or “need” outside of the instructions it’s given. Viz. the paperclip problem: if you tell it to optimize for making paperclips, it might decide that the most optimal path is to eliminate humans and use all resources for paperclips. As the engines get more complex and the capabilities get more robust (e.g., we give them logins and APIs to important systems), as we remove safeguards (sure, go ahead and give that robot dog a gun), and as we give them more complex tasks, we get to a point where the “alignment problem” becomes acute. They won’t be “needing” anything other than to complete a complex task that might go horribly wrong because we couldn’t provide enough context on the boundaries. Shit happens. Maybe don’t be pointing the gun at yourself when it does?

          It’s true that Eliezer is on the far end of the cynical spectrum, but you don’t have to accept that “everybody dies” to still think giving the government, AI, or both the kinds of power Hegseth is asking for is a profoundly stupid idea.

          • I completely agree with everything you said. But you didn’t go far enough in your analysis. One can justifiably claim it is profoundly stupid to not enable our military to use AI when we know our ideological enemies will be using it with fewer restraints than we will.

            The question then becomes a choice among the lesser of many bad options.

          • Except we’ve seen this movie before, and there are more options than just “if they’re gonna do it, we have to as well.” Nuclear weapons, bio weapons, chemical weapons… we’ve got lots of examples where all sides looked at the potential scenarios and said “Um, that seems like a bad idea. How about we all just not?”

            Does it entirely eliminate nukes, bio/chem weapons, etc.? Of course not, any more than outlawing murder eliminates murder. But you can make agreements and arrangements that get you a long way down the road and make the problem significantly more manageable. The Chinese aren’t any dumber than we are, and I’m sure they’re looking at this the same way, thinking “y’know, we might want to dial this down a notch…” The only idiots who aren’t doing that are people like Hegseth, who think they’re so big and tough they can manage any situation and any weapon, so why not build them, because obviously they’ll come out on top. FAFO.

          • Then, advocate for mutual agreement with verification with all the big players. Unilateral action is a death wish.

          • Oh yes, it totally has to be mutual. And in my nuke/chem/bio examples everybody has been perfectly willing to negotiate. I’m guessing if somebody tried that with AI, they’d also be willing to negotiate. But at the moment I don’t see anyone even trying.

  6. Stephen Hawking and countless clever sci-fi writers have spent YEARS warning us what’s coming, yet we continue to march obliviously into the future… a SkyNet future.
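
The paperclip problem raised in the thread above boils down to one mechanism: an optimizer pursuing a single metric with no side constraints will consume everything it can reach, without any malice involved. A toy sketch, purely illustrative (the function and numbers are invented for this post, not taken from any real system):

```python
# Toy model of the paperclip problem: a greedy optimizer draws from a
# shared resource pool until nothing (or only a protected reserve) is left.

def optimize(resources, reserve_for_humans=0):
    """Turn resources into paperclips, stopping only at the reserve.

    With reserve_for_humans=0 (no alignment constraint), the optimizer
    happily consumes the entire pool, including what humans need.
    """
    paperclips = 0
    while resources > reserve_for_humans:
        resources -= 1   # consume one unit of the shared pool
        paperclips += 1  # the only thing the objective rewards
    return paperclips, resources

# Unconstrained objective: all 100 units become paperclips, none remain.
print(optimize(100))                         # (100, 0)
# Same optimizer, one added constraint: a floor it may not touch.
print(optimize(100, reserve_for_humans=60))  # (40, 60)
```

The catastrophic outcome in the unconstrained case falls directly out of an under-specified objective, which is the whole point of the thought experiment.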
