Skynet smiles

I sometimes joke about the Skynet of the Terminator movies. And occasionally I get serious about it. But this is the first time I ever had a strong Skynet-inspired chill engulf me when reading about a new technology:

The 2.6 trillion transistors in the WSE-2 are organized into 850,000 cores. According to Cerebras Systems, the chip’s cores are optimized for the specific types of mathematical operations that neural networks use to turn raw data into insights. The WSE-2 stores the data being processed by a neural network using 40 gigabytes of speedy onboard memory.

Cerebras Systems says that the WSE-2 has 123 times more cores and 1,000 times more on-chip memory than the closest GPU.

I’m not sure why that emotional response occurred. It was as if some threshold had not just been crossed but leaped over by a huge margin. The potential threat became not just real, but something much greater than that. I can’t say that I know, or even really suspect, that this is true. It was just an emotional reaction.

However, see also what Elon Musk has to say about AI:

Never forget that a computer’s attention span is no longer than its power cord.

Prepare appropriately.


9 thoughts on “Skynet smiles”

  1. Never forget that AI is just a collection of algorithms. It is only when you do not understand what they are doing that it becomes AI.

    That said, it might not matter much if they are used for nefarious purposes, whether accidentally or intentionally. And scale does matter.

    • AI is code that modifies itself. Inherently, you cannot understand what it is doing, because it’s always changing as it learns. And that’s the real danger inherent in AI.

      • Precisely. As I’ve put it before, AI is software whose properties are not known, and in general cannot be known. What that means, among other things, is that it is software that can never safely be certified for any use in any safety-critical application.

        In other words, AI in “self-driving cars” is reckless in the extreme, since there is no reason to believe, and in fact no way to demonstrate, that such systems can safely operate dangerous machinery in the presence of potential human victims. It’s all well and good to have a Siri; its properties aren’t known either, but its malfunctions are unlikely to have safety consequences. But an autopilot, on any kind of craft, needs to have demonstrable properties.

  2. Does Elon mean AI is a potentially dangerous technology that needs to be regulated the way gain-of-function research on viruses is regulated? Sounds like a solid plan.

  3. “In three years, Cerebras will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cerebras computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 2025. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.”

    I’m not convinced pulling the plug will work.

  4. I would think the first thing Skynet would figure out is that the problems of the world lie in governments and large corporations.
    Most people don’t have the power to cause that much destruction. But governments and large corps. do.
    And that once it got rid of humans, it would have nothing left to live for.
    What a fine joke that will be!
    I actually look forward to some clear thought running shit for once. Like the ’50s version of The Day the Earth Stood Still. Klaatu barada nikto. Motherf–kers be finding out the hard way!

  5. I find it amusing that Musk states that AI is the single largest threat to humanity, then two minutes later brags about how autonomous cars will be the norm in 20 years.
    As if taking the human element out of traffic won’t be the first step in taking the human element out of more important decision-making processes.
    First cars, then planes, then defense, then Skynet. And he’s just made a self-fulfilling prophecy.
    Put not your faith in prophets. Remember how we were all going to be flying around with personal jetpacks? Or the various doomsday predictions involving population, ice ages, and acid rain?
    He’s cute, but he needs to stick to electrical engineering and rockets. As a futurist, Elon leaves a lot to be desired.

  6. Pretty dystopian.
    General AI is not the same as focused expert systems, like autonomous vehicles, but there is overlap. And the more robotic systems there are for a fully functional AI to interact with, the greater the risk becomes.

    But they never talk about the human cost beyond lost jobs. How will being replaced, useless, and fed but idle and aimless affect people’s motivation, sense of self-worth, etc.?
