Mutating Cyber Event

This could be “interesting”:

Cyber Apocalypse 2023: Is The World Heading For A ‘Catastrophic’ Event?

As the 2023 annual meeting of the World Economic Forum wrapped up in Davos, Switzerland, it ended with a disturbing prediction from one of the leading voices. Delivering a presentation on the 2023 Global Cybersecurity Outlook report, forum Managing Director Jeremy Jurgens revealed that 93 percent of those surveyed believe that a “catastrophic” cyber security event is likely in the next two years.

According to the WEF report, one of the biggest threats is a “mutating” threat. This could take the form of an AI-enabled virus that transforms as it infects various systems and organizations to evade defense systems or even detection. Prime Minister of Albania, Edi Rama, whose country suffered an attack that brought down critical infrastructure in 2022, spoke about what he had learned since:

“It’s about viruses that can not only block our way of living but can control it and deviate it. So it can use our systems like, God forbid, our air transport systems to hit us back. Imagine if there is a cyberattack on our air transport systems that turns a huge number of airplanes that are flying into bombs.”

Skynet smiles.

I need an underground bunker in Idaho.


9 thoughts on “Mutating Cyber Event”

  1. Generally speaking, if something bad is possible, it will eventually happen. Often that is because someone somewhere wants it to, so they help it along. There are just enough perverse, psychopathic people in the world to ensure that someone is always out there seeking to make everyone else miserable. Modern technology and the internet simply make it easier for them to accomplish their goals.

    • At some point, no one knows when, the proverbial woodpecker will come along and destroy civilization.

  2. I have no doubt the prediction is technically correct even if temporally inaccurate (it may occur sooner than two years, or later, but it will occur). How big a role AI will play in it is, at this point, anyone’s guess, but it will not be zero.

    I would suggest initiating bunker construction with all deliberate speed might be worthwhile, and offer that “deeper is better and much deeper is much more betterer.” All of which assumes that bunkerization is a successful long term proposition. Opinions vary.

  3. The idea that our various networks can be used against us is not a new one.
    Many of our networks are hopelessly unprotected, the most crucial being the electrical grid. When the lights go out, pretty much everything else follows.

    The idea that Skynet could be used to crash airplanes into buildings, on the other hand, is pretty much organic fertilizer so long as there is a crew aboard. Air traffic control is typically supported by emergency generators, and ATC is compartmented to the point where, even if an airliner cannot land in Seattle, there are other airports nearby.

    • I keep worrying about lots of networks that are connected to the bad Internet for no conceivable reason. An example, from a couple of years ago, was the various computers in some car (forgot the brand; Jeep?). A remote hack was perpetrated on its entertainment system, which isn’t too surprising given that this takes streaming content from the Internet. But the disturbing thing is that this in turn allowed the attackers to control the autopilot system (steering and speed control). The obvious observation is that under no circumstances whatsoever can any connection between safety-critical systems and exposed non-critical systems be permitted (with the possible exception of controlled update of signed software with properly managed keys).
      I suspect that these sorts of failures are all too common. If you don’t bother engaging your brain first, it’s easy to just connect everything; maintaining “air gaps” and managing them correctly requires more thought.
      Something related: people sometimes talk about the problems of obsolete software, and it is true that finding maintenance expertise for it is harder. But “obsolete” computer systems have the wonderful advantage that they tend to be far more hack-resistant: partly because non-PC systems are, as a rule, built with security-minded operating systems, and partly because script-kiddie malware attacks against COBOL mainframes aren’t going to get you anywhere. (Come to think of it, this is likely one of the reasons why VMS systems are still alive and well.)
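      The “controlled update of signed software” exception mentioned above can be sketched in a few lines. This is a minimal illustration, not any real vendor’s update mechanism: it uses an HMAC with a hypothetical shared key to keep the example standard-library-only, whereas a real signed-update scheme would use an asymmetric signature (e.g. Ed25519) so the device stores only a public key.

```python
import hashlib
import hmac

# Hypothetical key for illustration only; a real scheme would keep the
# signing key off the device entirely and verify with a public key.
SIGNING_KEY = b"example-key-for-illustration-only"

def sign_update(firmware: bytes) -> bytes:
    """Producer side: compute a tag over the update blob."""
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).digest()

def apply_update(firmware: bytes, tag: bytes) -> bool:
    """Device side: refuse any blob whose tag does not verify."""
    expected = hmac.new(SIGNING_KEY, firmware, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # reject tampered or unsigned update
    # ... only now would the device flash the firmware ...
    return True

good = b"v2.1 firmware image"
tag = sign_update(good)
assert apply_update(good, tag) is True          # genuine update accepted
assert apply_update(b"malicious image", tag) is False  # tampered blob rejected
```

      The point of the sketch is the shape of the boundary: the only data allowed to cross from the exposed side to the safety-critical side is a blob that has passed verification, and nothing else.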

  4. Well, the odds of movie “Dress-Up Make-Believe” coming true are slim to none. It is amusing to see how some think it can happen.

    Automating systems carries many risks if done incorrectly. The improper application of safety methods is one. It took industry a number of years to learn that safety is not an added layer but an all-encompassing failsafe zone into which a piece of automation is placed. Failsafe is the key. Part of the failsafe design is that the automation does not control the failsafe system itself: the automation can shut a system down, but human intervention is required to restart it.

    The idea of an AI system finding a method to avoid its own demise can only happen if you first build in that method and teach it to the automation; it cannot envision one on its own. That is “Dress-Up Make-Believe” movie thinking.

    Teaching a system to first learn something and then letting it use that lesson as a base for learning other things is very dangerous. Humans do this too, but we take more time, allowing more opportunities for intervention when things go bad, and that can work sometimes. A computer’s ability to cascade bad decisions very fast is where the issue seems to manifest itself. It still needs to be instructed to do it.

    IMHO AI is the tech salesman’s version of the “New and Improved” line used by dish-soap salesmen.

    Machines can be many things; intelligent is not one of them. We can write code that makes them look smart, but they still have to follow the code. Can code be written to make a system very dangerous? Yes. My fear is that someone will write code automating something that should not be automated, because they think they are in charge and get to.

    IMHO the term “AI” is an attempt to remove liability from the writer of the code by pretending the machine did it. “Dress-Up Make-Believe.” If too many believe it, it could work.

    Best regards,
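    The failsafe pattern described above, where automation may trip a system but only a human may restart it, can be sketched as a latching interlock. This is a toy illustration of the design principle, not an implementation of any particular safety system; the class and method names are invented for the example.

```python
class SafetyInterlock:
    """Latching trip: automation may set the trip, but only a deliberate
    human action (think a physical key switch) can clear it."""

    def __init__(self) -> None:
        self.tripped = False

    def automation_trip(self) -> None:
        # The automation is allowed to shut the process down...
        self.tripped = True

    def automation_restart(self) -> None:
        # ...but is NOT allowed to restart it; the latch stays set.
        raise PermissionError("restart requires human intervention")

    def human_reset(self) -> None:
        # Only a human action clears the latch and permits a restart.
        self.tripped = False

    def can_run(self) -> bool:
        return not self.tripped

interlock = SafetyInterlock()
interlock.automation_trip()
assert interlock.can_run() is False   # process is down
interlock.human_reset()
assert interlock.can_run() is True    # only the human reset re-enables it
```

    The design choice is that the restart path simply does not exist for the automation: it is not a permission check the software might get wrong, but an interface it is never given.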


  5. Guaranteed. When one needs to reset the debt-based economic cycle / cover up the theft?
    They generally start a world war. But with Russia and China set to win the conflict, clown world will be in full swing finding some way around that.
    Enter AI, and the excuse thereof. It would only take a week or two or three of grid-less, internet-less chaos to make the Western world mentally ready and willing to give it all up.
    Whatever you want. Just turn us back on. And give us some order. Pizza and beer again. No anchovies? I’ll have the extra-spicy cockroaches, please.
    It would even give you cover to go do to the rural areas some of the crap you’ve been wanting to do for a long time (since you will control all the information): “We haven’t been able to get the grid back up in that area.”
    And when it comes back up, everything will need to be controlled. So it can’t happen again, right?
    When the WEF is talking about it already? It’s not speculation, it’s a plan.
