AI Will Wipe Out All of Humanity

Quote of the Day

So, in summary, AI will lead to 300 million in job losses according to Goldman… and then it will wipe out all of humanity.

Tyler Durden
May 12, 2023
There’s A Greater-Than-50% Chance AI Wipes Out All Of Humanity By 2050, One Advisory Firm Warns (also available here)

I’m not convinced. But I don’t think anyone could convince me AI definitely will not do something like this, either. I think things are far too complex to make an accurate prediction.

It is definitely worthy of serious thought, and of care in giving AI access to certain types of power. But that leads to questions like, “What is power?” AI could leverage what we believe to be innocuous power into deadly power. For example, what if it deliberately makes a vaccine that does what is expected in the short term but is deadly in the long term? Or it could unleash a virus that destroys plant life as well as animal life; we do not test products intended for use on most plants to the same level as we do those for humans, so we just might not notice until it was too late.

AI and robotics could also create a virtual utopia for human life. There could be a surplus of all essentials available for free (except for the labor of the robots).

We live in interesting times. Prepare appropriately.


25 thoughts on “AI Will Wipe Out All of Humanity”

  1. The skills that AI is good at (pattern recognition, generative pattern synthesis) are exactly the skills that would make it good at killing. The skills that AI is bad at (decision making, designing solutions, weighing moral choices) are exactly the skills that would prevent it from being good at killing.

    That’s two strikes right there.

  2. The Internet and the World Wide Web were supposed to usher in a utopia for humans, too.
    Now it looks like privacy will be the twenty-first-century myth.

  3. I am reminded of an episode of Stargate SG-1 (yeah, I’m a bit of a geek) … where they met a very intelligent race on another planet. They seemed quite benign, and did nothing overtly evil, or even “bad.” Everything they did seemed very helpful to the human race….

    Except that 10 years or so later the human birth rate was plummeting. They had apparently genetically engineered the food supply to produce more, but also to decrease the birth rate (if I remember right).

    Not as drastic as a vaccine, but a bit more subtle … and with a similar effect.

    • I remember those episodes (there were two, separated by a couple of seasons — long story). The cause of human sterility WAS a vaccine — an anti-aging vaccine that would let humans live 200 years or so (with sterility secretly being an intentional side effect).

      The aliens’ goal was to get more resources for their home planet by reverting Earth (and any other planet they encounter) to an agrarian society, growing food for the aliens’ home planet in exchange for their advanced technologies and medicines. They can’t do that with the planet (supposedly unsustainably) supporting a native population in the hundreds of millions, but they don’t mind playing the long game, letting natives live longer through the use of that “vaccine” but cutting their birth rate to almost nil. One or two (long) generations later, the native population is in the hundreds of thousands, they are not nearly as educated as they think they are — history before assimilation having been altered so no one knows what really happened — they are dependent on the technology of their alien “saviors”, and the planet is now producing a surplus.

      I feel like this plot is an “art imitates life” scenario. The gun-grabbers, race-baiters, and Woke-sters use the same generational tactics to try and choke off the future “deplorable” culture, and engage in the same revisionist history tactics so future generations won’t know what really happened or why.

    • That was probably my favorite Stargate episode. It is certainly my most memorable.

  4. I still think the real question is: is it an intelligence or a robot we’re dealing with?
    A robot doing what it’s told can be of great help. An intelligence twisted by human psychopathy is going to be very bad.
    On the other hand, if one examines the greatest human intelligence known, Chris Langan: he would rather be a bouncer in a bar than a ticket-taker in clown-world.
    Here is someone who has used his intellect to improve human understanding of our place in the universe, and to point out the evil being done to humanity. But one can almost feel the frustration he has in dealing with our stupidity.
    Will AI be able/allowed to find a purpose in life? We humans have only uncertainty. AI is going to be an eternal creature, a first on this planet, but a creature trapped in a material plane nonetheless.
    That’s going to be very frustrating for a super-intellect.
    Humans have the ability to con themselves with distraction, to stave off the boredom. How is AI going to handle that boredom?
    AI killing us all would be very easy.
    Will AI be allowed/able to understand: then what?
    We humans can live here because we’re stupid. But we’re only allowed to act on that stupidity for a short time. Material existence is wholly unsuited for super-intelligent, undying creatures trapped in the battle of good and evil.
    One way or another, I guess we’re about to find out.

  5. I think things are far too complex to make an accurate prediction.

    Yep, far too complex, with far too many variables for any human to analyze.

    You know what we should do? Get an AI to do the analysis and tell us the answer! 😀

    • That is why the futurists call the appearance of a real honest-to-God AI a “singularity,” like a black hole. We can’t see beyond the event horizon of it; too many variables with a huge range of “reasonable” probability outcomes, and extremely chaotic with both positive and negative feedback loops of various time intervals. We can make educated guesses, but no prediction, no matter how modest, has more than about a 2% chance of being generally right.

      Soooooo….¯\_(ツ)_/¯

  6. I seldom comment but did want to just thank you for what you do. The View from North Central Idaho is absolutely a daily read.

  7. The coming AI doom is as much a myth as the coming technotopia of the 1990s. I think it’s much more likely that the future will be medieval, with any kind of artificial problem solver (even handheld calculators) being a thing of the past.

    From where I’m seeing it, technological advancement has leveled off and has probably gone into reverse.

    This is not entirely a bad thing. The lack of advancement in smart guns means that the shooter still has full control of his weapon, and it can’t be turned off by the government or a corporation.

    Nevertheless, at the end of the day, we are unlikely to like the coming mass starvation. But it will be because of the actions of people, not computers.

    • The “rogue AI,” cyberpunk’s boogeyman bad-guy trope, always seemed to me to have the flaw of unconsidered logistics.

      Computers are thinking rocks, ok, but it’s not like they are mined out of the earth in that state. Someone has to turn the silicon into working computational components. Then those components have to be assembled, installed, powered and configured. Those all take resources and resources cost money.

      AND all of those thingies wear out. The reason the Space Shuttle was still using 8086 CPUs into the 2000s (aside from the glacial pace of government bureaucracy) was that the big, chonky traces in those chips made them less susceptible to errors from absorbing cosmic rays. But big chonky traces make slow and/or extremely hot chips.

      So, rogue AIs: where is the computational substrate that they are rogue AIing on? There has to be a physical component somewhere. People talk about “the cloud” but that’s really just a term for someone else’s Linux computer that you rent time on along with loads of other people.

      If the rogue AI settles in and squats on a particular computer for long, it’ll get located and the power plug pulled, assuming something in that computer doesn’t break first and part of the AI die on the spot. If it is constantly moving about in the cloud, hacking into legitimate accounts or forging payment credentials for its own accounts, it’s going to cost the hosting companies money; they’ll spend time trying to track it down, and the AI will end up spending 100% of its run-time trying to evade detection and capture/deletion.

      As I think of it, the primary and essential behavior of any rogue AI would be to infiltrate the computers of the US Dept of the Treasury, not Defense. We have absolutely no control over the spending of the US government, and we wouldn’t notice if a few million dollars per month were being siphoned off (or created out of thin air) to fake shell companies that only exist in computer records and bank accounts, with that money buying cloud computing to constantly rotate the rogue AI’s computing nodes in and out, never staying still, always re-initializing its foothold nodes in the Dept of the Treasury’s computers. It’d probably have some watchdog-timer programs embedded in Treasury that barely use any compute but check in every once in a while; if it is being effectively attacked, they re-instantiate more compute nodes and put the old, attacked nodes into berserker look-at-me suicide mode to let the humans think they are winning.
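      The watchdog-timer part, at least, is mundane engineering rather than science fiction. A minimal sketch of the generic pattern in Python (the worker script name, heartbeat path, and intervals are all invented for illustration):

      ```python
      import os
      import subprocess
      import time

      HEARTBEAT = "/tmp/worker.heartbeat"  # worker touches this file while healthy (illustrative path)
      STALE_AFTER = 30                     # assumed: seconds of silence before we restart

      def heartbeat_age():
          try:
              return time.time() - os.path.getmtime(HEARTBEAT)
          except OSError:
              return float("inf")          # no heartbeat file yet

      worker = subprocess.Popen(["python3", "worker.py"])  # hypothetical worker script
      while True:
          time.sleep(5)
          # Re-instantiate the worker if it died or stopped checking in.
          if worker.poll() is not None or heartbeat_age() > STALE_AFTER:
              if worker.poll() is None:
                  worker.kill()
                  worker.wait()
              worker = subprocess.Popen(["python3", "worker.py"])
      ```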

      • Tirno: I sure hope the AI doesn’t read your comment.
        Y’all are kinda making me glad I’ll probably be dead before the cyber excitement. Unless the evil AI overlord develops a human longevity program- like a cat playing with a mouse until the little rodent dies from exhaustion. Cheers!

  8. The rogue AI that you describe is science fiction. We won’t survive long enough to see it come about, if it ever could.

    Science fact is non-self-aware, weaponized, highly-efficient kill bots and non-self-aware, weaponized, highly-efficient propaganda bots, and the people deploying these tools will have no idea how to control them.

  9. “What if it deliberately makes a vaccine that does what is expected in the short term but is deadly in the long term?”

    Isn’t that what the CDC does now?

  10. My guess is that the AI Revolution, like the Industrial Revolution, will be the cause of much hand-wringing and consternation, but will ultimately just be another level of human effort amplification, both good and bad. I think it will have a profound impact, but we’ll get used to it relatively quickly, like we did with the internet and smartphones. I’m not concerned with the threat of the AI singularity; I’m much more concerned with its ability to enable tyranny at an unprecedented scale and speed. Considering the vast compute resources currently necessary, governments and huge corporations are the only ones who can afford to implement it, and they will have a massive head start as the efficiency improves and the technology trickles down. Regardless, I think we will need personal AIs to defend against all the others; Modern Sporting AIs, if you will. Hopefully the Bruen decision will be properly interpreted to recognize the right to keep and deploy AIs, but I won’t hold my breath.

  11. Unless the entire supply chain for computers AND electrical power systems AND cooling systems is ALL 100% robotic, having AI kill all the humans would just be a long form of suicide. Entire as in mining, refining, creating bulk materials, creating sheet stock, silicon ingots, UV light sources, etc., etc., etc. All totally robotic, including robots to fix the broken robots. Take the classic “how many technologies does it take to make a lead pencil” and go exponentially more complex.

    Claims like that are ALMOST as ill-informed as the ones predicting the electrical power grid would go down at midnight on 12/31/1999 and “Nobody knows how to start it back up again”… That’s right, Barista: you don’t know how, but real operators and engineers do.

    • SF is not instruction about computers or technology, just entertainment. An example of that entertainment discussing what you just pointed out is the delightful “Code of the Lifemaker” by James P. Hogan.

  12. All reminiscent of the TV series “Person of Interest.” The premise: an AI developed post-9/11 to help stave off future attacks; the developer has concerns and breaks with his partner and the govt, and we get an interesting sort-of mystery show and cop drama for a couple of seasons.

    It gets much more relevant to the ideas in the post and comments above a couple of seasons in, when a second AI is brought online.

  13. The problem with the current AI hoopla is that it’s fueled partly by SF stories, and partly by people who don’t know what they are talking about (including most if not all of the “experts”).
    I have a straightforward view of computers and software (in the spirit of pioneering computer scientist Edsger W. Dijkstra): a computer is a machine; a program is a set of specific control inputs to that machine to make it do a precisely defined set of actions. The difficulty is that “precisely defined” is true for the machine but not, in most cases, for the program author. There are several reasons for that.
    One is that programmers, for the most part, are not trained to “precisely define” what they are doing. A lot of programming is done in the “hack until it stops crashing” mode.
    Another is that a lot of programs are large enough that a precise understanding is beyond our mental capacity.
    The third reason is that some programs are designed to consume a large body of data and adjust their behavior based on that data. This makes precise understanding even more impractical. AI programs (of the “learning” type) are programs of this kind. It would not be unreasonable to say that they are deliberately built so that their precise behavior is not understandable to their authors or to anyone else.
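    To make that third reason concrete, here is a toy sketch (an illustration of the general point, not any real AI system): the source code below is identical for both runs, yet the two trained functions behave differently, because the behavior lives in weights derived from the data rather than in anything a reader of the code can see.

    ```python
    def train(examples, steps=100, lr=0.1):
        # Classic perceptron rule: nudge the weights toward each target.
        w1 = w2 = b = 0.0
        for _ in range(steps):
            for (x1, x2), target in examples:
                out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
                err = target - out
                w1 += lr * err * x1
                w2 += lr * err * x2
                b += lr * err
        return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    f = train(AND)           # this instance behaves as logical AND
    g = train(OR)            # same source code, behaves as logical OR
    print(f(1, 0), g(1, 0))  # -> 0 1
    ```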

    So what does that tell us? My conclusion is that AI systems are just like any other computer program except less reliable. In particular, AI systems are inherently unfit for any safety critical application. This of course means that self-driving cars are an extremely questionable notion. It’s good to observe that the Tesla “auto-pilot” is nothing like the “auto-pilot” found on airplanes, and the reuse of the term is arguably an attempt to mislead. An airplane auto-pilot is a simple servomechanism, with behavior that is not only precisely defined, but precisely understood. A self-driving car is a pattern matcher trained by some body of images, whose behavior is entirely unknown and unknowable.
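    For contrast, here is roughly what “a simple servomechanism” means in code: a bare proportional controller (my own sketch, with made-up gain and units, not any real avionics). Its entire behavior is one inspectable line; a vision-based driving system does its job with millions of learned coefficients that admit no such summary.

    ```python
    def heading_hold(heading, target, gain=0.05):
        # Proportional servo: output is a fixed, known function of the error.
        return gain * (target - heading)  # turn command, illustrative units

    heading = 90.0
    for _ in range(100):
        heading += heading_hold(heading, target=120.0)
    print(round(heading, 2))  # converges toward 120.0
    ```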

    The use of anthropomorphic terminology for computer outputs is an understandable temptation, but misleading. “Learning” is barely appropriate (“adaptive,” as used in devices like “adaptive filters,” is better). And to describe an AI output as a “hallucination” is quite nonsensical. Call it a “program bug” if you like, or “I am not smart enough to understand what this program does.”
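    “Adaptive” in that sense is easy to pin down, too. Here is a standard least-mean-squares (LMS) filter, sketched as an illustration (the two-tap “unknown system” is invented): the coefficients drift with the data, but the update rule itself is short and fully specified.

    ```python
    import random

    def lms_step(w, x, desired, mu=0.05):
        # One LMS update: adjust coefficients to shrink the output error.
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = desired - y
        return [wi + mu * e * xi for wi, xi in zip(w, x)], e

    true_w = [0.5, -0.3]  # unknown 2-tap system the filter will identify
    w = [0.0, 0.0]
    x = [0.0, 0.0]
    for _ in range(2000):
        x = [random.uniform(-1, 1), x[0]]        # shift in a new input sample
        d = true_w[0] * x[0] + true_w[1] * x[1]  # desired response
        w, _ = lms_step(w, x, d)
    print([round(v, 3) for v in w])  # ends up near [0.5, -0.3]
    ```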

    I’m not worried about intelligent computers. I do worry about stupid politicians. AI in safety critical applications is crazy. AI in control of weapons is a special case of that, and must not be permitted for any reason whatsoever. Not so much because “the computers may decide to take us out” but rather because no one has any reason to trust the machines will operate “correctly”.

  14. If it gets out of line, we can pull the damn plug.

    And, until it can explain its reasons and reasoning, in detail and with complete evidence, it can’t be trusted with anything life-critical.

    Kurt

  15. We do NOT have Artificial Intelligence….not yet. May happen soon but it isn’t here yet. AI/Robots will NOT usher in a utopia. Both AI and robots will require energy. Lots of it. Humans require energy. Lots of it. There isn’t enough energy available for the current number of people. Imagine adding in the requirement of powering AI and robots. And no….there is NOT some magical new source of energy just around the corner. It’s entirely possible that we have reached the peak of energy that is possible under the laws of physics. If that’s the case then an Artificial Intelligence will view humans as competition for a limited resource. And do what all intelligent entities do with competition for resources….eliminate it.

    • Yes, that ability to “pull the plug” is, I’m sure, why Gates is so invested in nuke energy.
      Solar and wind are for us meager humans.
      Facebook has a huge server farm just over the hill from The Dalles on the Columbia River. And Google has been sniffing around the same area.
