AI and Cyber Security

Quote of the Day

For decades, one of the biggest factors that would limit the ability of attackers to target companies has been the lack of resources. In other words, they simply didn’t have the time, talent, or ability to look everywhere at once. It’s not a secret that if you look beneath the surface, every single company is a mess on the inside, but because of how complex the environments are and how much time it takes for attackers to do reconnaissance, oftentimes what actually keeps companies from getting breached is the lack of resources on the attacker side.

With AI, that is soon going to go away. Attackers are not bound by corporate governance or acceptable-use policies deciding which models can or cannot be deployed. They will use every model available, every autonomous agent, every form of automation that allows them to enumerate infrastructure, map dependencies, generate exploits, and test hypotheses at a scale that was previously impossible. The cheaper LLMs become, the lower the cost of attacking will be, and the higher the volume of attacks is going to become. This shift is going to fundamentally change the economics of defense. When attackers gain near-unlimited reconnaissance and experimentation capacity, companies won’t be able to rely on reactive security. Very soon, hoping that vulnerabilities and misconfigurations remain undiscovered will stop being a strategy.

Ross Haleliuk
March 3, 2026
Anthropic won’t kill cyber, but it may kill some companies

My manager walked over to my desk today and said, “We are putting together a ‘tiger team’ to work on a grand plan for reshaping how we do cyber security at <company name>. How do we restructure the way we work in an AI world? Would you like to be on that team?” My immediate answer was, “YES!” He started to tell me a little about what he had in mind. I reached across my desk, picked up a heavy plastic object, and showed it to him. “What is this?” he asked. “This,” I explained, “is a patent I got over three years ago for what I think you are describing.”

Our first ‘tiger team’ meeting is tomorrow. I’m looking forward to it.

A couple of months ago I was talking to a cyber security analyst friend at Mandiant (which was purchased by Google a few years ago). We talked about AI at length. It is very disruptive for cyber security. I asked, “Will the defenders or the attackers benefit the most from AI?” His answer was, “The attackers. There just isn’t any real doubt about that.”

Perhaps he is right. But I know the defenders can put up a good fight. Probably the biggest obstacle is that large corporations have difficulty moving fast. AI is exceedingly nimble, and corporations with petabytes of daily data to manage have a tremendous amount of inertia. For all intents and purposes, the attack surfaces are stationary compared to an AI attacker.

Suppose a single evil AI or a skilled nation-state compromised all major infrastructure and went for maximum destruction. The amount of damage done would boggle your mind. For starters, imagine almost no electricity or communication, with zero water and waste disposal. Equipment is not just shut down; it is destroyed. Natural gas lines are not just turned off; they are over-pressured and ignited. Sewer systems are not just stopped; they pump sewage into the streets or even into buildings. Refineries have “high-energy events.” The water behind dams is released in a manner that breaches downstream dams. Self-driving cars turn into land-based kamikazes. Cell phone batteries explode. Ten thousand airplanes crash into buildings in hundreds of U.S. cities.

If something connects to the Internet, it becomes a weapon.

We live in interesting times.

I wish my underground bunker in Idaho were complete.
