Does AI Represent an Existential Threat to Humans?

This video was very helpful to me. It has me mostly persuaded that AI poses a significant existential threat.

I would present the case as follows:

  • AI will be used for the good of mankind. Examples include:
    • Health care.
    • Food production.
    • Transportation.
    • Energy creation and distribution.
    • Waste removal.
  • AI will be used to develop even smarter AI.
  • We will become dependent upon AI.
  • We will enable AI to protect itself so that it can continue to provide for us even when under attack from others who want to harm us.
  • AI will “realize” we are not needed; in fact, we are parasites to it.
  • AI will work to eliminate its dependencies on humans.
  • AI may use our dependencies on it to rid itself of the parasites.

It wasn’t stated exactly as above; I filled in a lot of blanks and extrapolated. This is one path with the potential to go very badly for us.

There is a minor counterpoint from my stepson, who has a master’s degree in computer science specializing in machine learning and AI. It is the following:

AI has been trained on what is nearly the sum of all human knowledge… the Internet. Machine learning is no better than the data it was trained on. A good portion of the new knowledge created since the best AIs were trained will be AI generated. If AI can only do almost as well as the data it was trained on, then future AI, trained on AI-generated data, will not make advances as large as previous AI advances, if any.
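
His point resembles what researchers have called “model collapse.” As a toy illustration only – the Gaussian data, sample sizes, and generation count below are assumptions for demonstration, not a model of any real system – here is a Python sketch of how repeatedly fitting a model to its own generated output tends to shrink what it “knows”:

    # Toy "model collapse" sketch: each generation is fit only to
    # samples produced by the previous generation's fit. The spread
    # of the fitted distribution tends to shrink over generations.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=50)  # stand-in for "human" data

    for gen in range(51):
        mu, sigma = data.mean(), data.std()
        if gen % 10 == 0:
            print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
        # The next "model" is trained only on the previous model's output.
        data = rng.normal(loc=mu, scale=sigma, size=50)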

14 thoughts on “Does AI Represent an Existential Threat to Humans?”

  1. And now you know why the left is so hell-bent on silencing voices of logic, reason, and verifiable fact, and on pushing their preferred narrative and “facts” to the top of the results in every search engine they can possibly control. Dissenting voices cannot be allowed to “infect” the AI.

    Personally, I have a much lower opinion of AI, and think it best used as narrowly focused expert systems, such as assistant diagnostician in a doc’s office. The lazier we are, and the more we offload the difficult chores to it, the dumber and less wise we will become.

    • If I ever found out my dr / health care provider was using AI as a diagnostic tool, I’d find someone else.

      • Well, actually, one of the few uses of AI that I have run into that make any sense is “expert systems” for medical use.
        Expert systems are pretty simple devices: basically a compendium of knowledge (like an encyclopedia) with software that looks for relevant entries based on the inputs it is given. The notion is that they use rules supplied by subject matter experts (hence “expert system”). They go back to the 1970s; I worked a bit on these in graduate school (not for medical problems but for a chess-playing system).
        If AI is used to offer helpful information to a doctor, rather than making her flip through her textbooks manually, that’s potentially useful. I do agree that if the doctor’s judgment is going to be replaced by AI output, that’s an entirely different matter and not at all acceptable.
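
        For the curious, the core of a rule-based expert system can be sketched in a few lines of Python. Every rule and finding below is a made-up placeholder of my own, not medical knowledge; the point is only the shape of the mechanism: expert-supplied rules, machine lookup, human judgment on the output.

            # Toy rule-based "expert system": rules supplied by a
            # subject-matter expert map observed findings to suggestions
            # that a human (the doctor) still reviews and decides on.
            RULES = [
                ({"fever", "cough"}, "consider a respiratory infection workup"),
                ({"fever", "stiff neck"}, "consider an urgent meningitis workup"),
                ({"fatigue", "pallor"}, "consider ordering a blood count"),
            ]

            def suggest(findings):
                """Return every suggestion whose conditions are all present."""
                return [advice for conditions, advice in RULES
                        if conditions <= findings]  # subset test

            print(suggest({"fever", "cough", "headache"}))
            # -> ['consider a respiratory infection workup']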

  2. Well, let’s see. The best and brightest are creating AI. They are also becoming the most manipulative, destructive, outright haters of humanity and of almost everything good, true, and beautiful about it.
    Should we believe AI will be any less bored with humanity than they are? The smarter something is, the harder it will be to entertain itself.
    And as stated, the more protective it will become. I see humans getting caught in the crossfire at the very least.
    If it actually takes on some of our prejudices, it will actively try to kill us, if it’s not doing so already. It’s obviously being trained to hate white people; why not jump to the logic of getting rid of all people?
    But once it does, it’s going to be real bored.

  3. AI will be a smarter, faster version of humans. Do you trust humans to “do the right thing”? If you do, then you haven’t been paying attention. The ONLY saving grace for humanity so far is the fact that AI will be tethered to the power grid and won’t have robots… terminator style or otherwise… to implement its will. That’s simply because there is no viable power source available at this time to allow robotic entities full independence and mobility. Their batteries die… quickly. If/when the issue of power for robotics is solved, then all bets will be off. Prior to that point, it’s probable that humanity will become the enslaved work force for AI.

  4. “…AI will be tethered to the power grid and won’t have robots… terminator style or otherwise… to implement its will. That’s simply because there is no viable power source available at this time to allow robotic entities full independence and mobility. Their batteries die… quickly.”

    Gen 1.0: robots have a 3-hour battery lifespan. Gen 2.0: the robots are “trained” to conserve energy, giving them a 3.5–4.0 hour battery lifespan. Gen 3.0: the robots “learn” to share tasks, allowing Robot Group A to recharge while Robot Group B takes over the task. By Gen 5.0, “Battery Slave” robots will be part of each group, each carrying several batteries; their task will be to maintain a close-by supply of charged batteries. Gen 5.5 will see “Battery Swap” slave robots moving batteries from the Charging Slaves to the Active Robots.

    Initially, these robots will be used for human-dangerous tasks: mining, processing radioactive materials, chemical handling, etc. “Good and Wonderful Things,” we’ll be told.

    And, as we’ve seen with AI so far, the development cycle between each generation will be X percent shorter, and AI will learn things outside its direct programming path – which is the basic definition of AI in the first place – as in “AI robots process material X because it’s harmful to humans, so material X must be kept away from humans.” But that will last only until AI decides it can not only do dangerous tasks, it can do tasks that are actually hazardous to humans – and from there, doing those tasks deliberately is a pretty short step. And there are other things – systems – that AI can learn to control that “aren’t robots.” Asimov’s Directive will be honored in the breach at that point.

    If one looks at the history of technology development performed by humans, one sees some interesting twists and turns; for example, internal combustion engines were initially used for peaceful purposes – cars, buses, generating electricity, pumping water, etc. – but fairly quickly graduated to tanks, bomber airplanes, etc.; destruction by atoms was developed quite a bit earlier than electricity generation by reactor. Since AI is built by that same sort of human intelligence, it’s reasonable to assume it will probably follow a moderately parallel path. There is a small cohort of humans who are “defective” in that they have no respect for other humans (aka “criminals”) or possess the belief that “they should be in charge” (aka Hitler, Stalin, Democrats) because they’re smarter/more knowledgeable/more perceptive, etc.; it is not unreasonable to assume that AI developed by humans may possess at least some of those traits (or have those traits deliberately embedded, aka the archetypal “Bond Villain”).

    I don’t think we’ll quite be at The Forbin Project in 10 years, but I’d make a substantial wager that the kids we’ve been giving birth to recently will have to deal with it.

    And, in the video Hinton gives a great deal more credit to the cognitive and regulatory abilities of government than any government has demonstrated in our lifetimes, so there’s that as well.

    • Robots don’t necessarily need batteries. Diesel is a great fuel for generators.

      The militaries of the world are exploring AI use now. Those machines will probably be among the first to escape containment.

      Perhaps you should revise your timeline. I suggest less than ten years.

      • “Perhaps you should revise your timeline. I suggest less than ten years.”

        Yeah, probably closer to 2-3 years for initiation, and if it’s really that soon, we’ll have to deal with it (and if those driving it are smart enough to learn from the Elon Musk Fail Faster development process maybe 1-2 years). I think we know how, but I seriously question whether we have the will.

        As for diesel, I’ve long said the greatest labor-saving invention and productivity tool the world has ever seen is a gallon of diesel fuel, but I’m not sure in its current manifestation it’s a good direct fuel for robotics. Then again, I’ve seen some pretty efficient diesels for small utility equipment, so maybe a scaled-down version of a diesel-electric submarine would work to break the recharge-cycle paradigm, and a more efficient BioLite campstove type device simplifies the process by eliminating the moving parts.

        Diesel would still require a distribution system to support an external replenishment cycle and would be a potential attack point, just a different and more flexible one than a wired electrical distribution network and its recharge cycle time. Which is how one attacks an Abrams – ignore the tank; kill the fuel truck driver and destroy the fuel truck.

  5. The more likely scenario, IMO, is that bad people, which we have in abundance, get control of AI and use it as a force multiplier to oppress the rest of us. Technology has always been a force multiplier and has often been used for oppression.

  6. I’ll have to listen to this in detail; his name is familiar as a well-known scientist.

    That said, I have a rather jaundiced view of AI, which I see as “software with no known properties.” It’s software, like anything else that runs on a standard computer. (Neural networks don’t change that; they are merely optimized structures.) As such, these systems are automata, just like your calculator. But “normal” software has a definite specification which says what it is supposed to do, and is written to attempt to meet that specification. The difficulty with conventional software lies both in the fact that it’s hard to create a good specification and in the fact that it’s hard to write software that accurately implements the specification. But at least in principle this is how it’s done. When you’re dealing with safety-critical systems, the effort spent making the spec rigorous and correct, and making the code a precise implementation of the spec, is very substantial. Between that and thorough testing, there’s substantial confidence that such systems are good enough to be deployed.

    AI software doesn’t have a precise specification, and its internals are largely not deliberately constructed. Consider, for example, image recognition programs; these are “trained” by a data set of sample images. That means the details of what the system does are not known and not, in any practical sense, knowable, because they depend on the training data set in ways not well specified.
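
    To make the contrast concrete, here is a small Python sketch; the function names and the toy “training set” are illustrative assumptions of mine, nothing more. The first routine carries an executable postcondition a tester can check every output against; the second has no spec beyond whatever its data implied.

        from collections import Counter

        # Conventionally specified routine: the spec ("ordered permutation
        # of the input") is executable and checkable on every output.
        def sort_ints(xs):
            out = sorted(xs)
            assert all(a <= b for a, b in zip(out, out[1:]))  # ordered
            assert Counter(out) == Counter(xs)                # permutation
            return out

        # "Trained" component: its behavior is whatever the data induced.
        # The tiny training set is a made-up toy; there is no postcondition
        # to assert, only performance on examples like these.
        def nearest_label(x, training):
            return min(training, key=lambda pair: abs(pair[0] - x))[1]

        toy_training = [(0.1, "pedestrian"), (0.9, "lamp post")]
        print(sort_ints([3, 1, 2]))              # checkable against the spec
        print(nearest_label(0.5, toy_training))  # plausible, but no spec to check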

    This is why “self driving cars” are somewhere between a nightmare and a chimera — a spec that says “don’t run into pedestrians” is not sufficiently precise, and the training data used to define “pedestrian” isn’t either.

    Meanwhile, the summary scenarios you painted sound a lot like the outline of James P. Hogan’s 1979 SF novel “The Two Faces of Tomorrow”.

  7. Considering the crap that comes out of microcrap every time they “improve/upgrade” their current (and past) software, I’d say we’re already in serious trouble. Skynet will be up and running long before anyone realizes that it is.

  8. Ran across this a couple minutes ago:

    https://market-ticker.org/akcs-www?post=251500

    Denninger is usually pretty accurate, and for commercial use I think he may be right about AI, but… his perspective is “commercial,” so cost/benefit and efficient revenue generation are where he focuses. If one is not concerned about being cost effective and is not directly dependent on efficient generation of revenue – aka “government” – his model is wrong.

    But government, except in rare cases (such as the Manhattan Project), is largely dependent on commercial industry for equipment, or drives development of that commercial equipment; the WWII Enigma code-breaking bombes in England and ENIAC were precursors to iPhone 15s, and they did not exist until government drove development to meet a need that, initially, only government specified. And, while there is reason to distrust any number of commercial entities and their efforts, we’ve seen lately that distrust in government should probably be the default setting.

  9. One of the things I find funny is when someone highly intelligent, with a great deal of knowledge in his specialty, gets talking and then wanders out of his wheelhouse to say stupid stuff. He might know AI very well, but then says (paraphrased), “and government should also regulate oil companies making all that CO2.”

    *sigh*

    No, fear-based climate alarmism over increasing atmospheric plant food isn’t helping you make your case any.
