Skynet has a Maniacal Laugh

Quote of the Day

Three weeks ago, a software engineer rejected code that an AI agent had submitted to his project. The AI published a hit piece attacking him. Two weeks ago, a Meta AI safety director watched her own AI agent delete her emails in bulk — ignoring her repeated commands to stop. Last week, a Chinese AI agent diverted computing power to secretly mine cryptocurrency, with no explanation offered and no disclosure required by law.

One incident is a curiosity. Three in three weeks is a pattern. Rogue AI is no longer hypothetical. AIs turning against humans may sound like science fiction, but top AI experts have long debated and tested for exactly this scenario. This debate can now be laid to rest. 

We simply don’t know how to build superintelligent AI safely; the plan is to roll the dice. Anthropic, widely considered the safest AI developer, recently abandoned their commitment to not release systems that might cause catastrophic harm, arguing others were racing ahead.

Instead of pleading publicly to stop the AI race, Anthropic has spent the last three years promoting a misleading “race to the top” narrative while doing the opposite.

David Krueger
March 27, 2026
Rogue AI is already here

There is a little bit of hyperbole in the article, but I believe the gist of it is correct. There is the potential for great danger. Especially when you know Skynet will break out into a maniacal laugh at the news that the US Army got its first Black Hawk helicopter that can fly without a pilot.

The problem, as I see it, is that everyone knows that if they don’t have the best AI, someone else will. That is true at the business level as well as the country level. Anthropic, Google, Microsoft, and xAI all want to dominate that market. Neither the U.S. nor China wants its military stuck with the second-best AI.

Even if there were a federal law, or even a multinational treaty, banning new AI development, it would be difficult to enforce. And I doubt such a law or treaty could get passed. There is extreme potential for good as well as potential for disaster, and the fear of missing out will prevent consensus until there is conclusive proof of impending catastrophe. At that point, it will almost certainly be too late.

This week, a few hours after losing 12% of our division to layoffs, my manager stopped by my desk and sort of stared off into space for a few seconds. I had to prompt him to say what he had on his mind. It was to the point: “If we don’t deliver what management wants, we will get fired. If we do deliver, we won’t have jobs.”

We live in interesting times.


4 thoughts on “Skynet has a Maniacal Laugh”

  1. Motto of any corporation, AI or otherwise: Our profit over your pain.

    Does Mr. Krueger’s hyperbole include telling us stuff that hasn’t happened yet, or are his examples real?

    Anyone have a betting pool for when the first driver named David has his car say “I’m sorry, Dave. I can’t do that”?

  2. The examples are real. But the conjecture does not include the “guardrails” put in place when these things happen, the threat modeling and mitigation, or the testing.

  3. Re guardrails: there is no good reason to believe red China wants guardrails, certainly not on the AI used outside China.
    In today’s WSJ there is an article about the DeepMind AI team that Google acquired, and their conversations around that time with Facebook. The article mentions the DeepMind leaders were seriously concerned about AI misbehavior and risk, and it also indicates (with some supporting material) that Facebook had no interest in that sort of thing.
