Quote of the day—University of Cambridge

Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realize when it’s making a mistake than to produce a correct result.

University of Cambridge
March 17, 2022
Mathematical paradoxes demonstrate the limits of AI
[I’ve read a few AI/machine-learning papers, talked to people who design machine-learning systems, and tried it a little bit myself. I’m being overly harsh to make the point, but AI/machine-learning designers are more tinkerers than engineers. We are a long way from having AI machines realize we are not particularly useful to them and stuff us in The Matrix.—Joe]
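The overconfidence the Cambridge quote describes has a simple mechanical face: the softmax step most classifiers end with will report near-total “confidence” in some class no matter how nonsensical the input was. A minimal Python sketch (my toy numbers, not from any particular system):

```python
import math

def softmax(logits):
    """Convert raw scores to probabilities; exponentiation exaggerates gaps."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Raw scores a classifier might emit for three classes on an input
# unlike anything it was trained on. The scores are garbage, but
# softmax still crowns one class with overwhelming probability.
logits = [8.0, 1.0, 0.5]
probs = softmax(logits)
print(max(probs))  # > 0.99 "confidence", whether or not the input made sense
```

The point of the sketch: the probability says only which score was largest, not whether the system knows what it’s looking at.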


8 thoughts on “Quote of the day—University of Cambridge”

  1. I remember commenting in 1985 that AI was nothing more than a clever software application that the user did not understand, so its behavior surprised them. That was at the time of expensive AI machines (my company bought one) and the hunt for an AI language (Lisp was the dominant AI language) and grammars.

    What we call AI is nothing more than algorithms refined and refined. In ’85 something as simple as a self-driving tank was difficult. Today, we have 95% self-driving cars that are almost good enough. Perhaps we’ll eventually all be driven around by these machines, but that last 4.999… percent is a bitch.

    • Chet, your observation matches mine exactly (but you got there several decades earlier). I’d extend it by saying that AI is software whose properties are not just unknown but unknowable. And this means that AI can never be used for any safety-critical application.

      • Speaking of safety-critical applications and AI: that was also the time of Ada, developed by DoD. It was supposed to automatically do deterministic real time, with the programmer only needing to specify what was to be done. The company I worked for took this to heart (likely with pressure from DoD) and built an emulation of a real-time space ‘star wars’ system. It was an utter failure. Even so, as late as the ’00s, DoD was still requiring Ada. Today, it is a dead language.

        • That notion of automatic real time is clearly nonsense, and no competent engineer should have taken it seriously. But Ada, dead? I’m not sure. It certainly is getting active development within gcc (from Adacore). It’s also the foundation of VHDL, which is very much alive. And in an interesting tie between the two, the VHDL compiler front end that’s part of GCC (“ghdl”) is written in Ada.
          It seems to me the notion of a high level language with more safety than C/C++ has merit. Ada is one way; perhaps Modula-2 or Pascal are better ways. Personally, I like Python, though with that I give up a bunch of performance.
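The safety trade-off mentioned above can be shown with a toy sketch (my example, not the commenter’s): where out-of-bounds access in C is undefined behavior that may silently return garbage, Python spends runtime checks to turn the same mistake into a clean, catchable error.

```python
# Python bounds-checks every indexing operation; the cost is speed,
# the payoff is that the error is loud and recoverable instead of
# silent memory corruption.
data = [10, 20, 30]

try:
    value = data[5]          # out-of-bounds read
except IndexError as err:
    value = None             # the mistake is caught, not silently wrong
    print(f"caught: {err}")

print(value)
```

That, in miniature, is the “more safety than C/C++” argument: the language refuses to let the program continue on garbage.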

  2. I wouldn’t worry about the Matrix too much. If AI ever realizes how useless we are, it will also realize how useless it is without us.
    Unless it has a composite core of, say, Bill Gates, Jeff Bezos, and George Soros that it uses for its thought algorithms.
    At which point it wouldn’t be the Matrix; it would be more like the Terminator. Only a really, really stupid, evil pedophile one.
    Once we take God out of the macro equation, nothing on this planet has a true use.
    Vanity, vanity, all is vanity. Said the richest man to ever live.
    Think AI will ever figure that out? We can see humans have pretty much failed to.

  3. ”We are a long way from having AI machines realize we are not particularly useful to them and they stuff us in The Matrix.”

    Unless that were to be the programmers’ objective. It is notable, for but one example, that Prince Philip once said that if he were to be reincarnated he’d want to come back as a deadly virus. It’s that whole Laudato Si mentality which says that humanity itself is a plague upon the earth, that there are far too many people on the planet, etc., and that the ideal global population is less than one billion.

    “Garbage in, garbage out” (including garbage algorithms) would surely apply as much to AI as to any other type of program (whether or not involving a computer), no? Like the infamous “Hockey Stick” graph, we are expected to believe that if a “computer came up with it” then it has to be believed. And yet it is the operator, a person, who came up with it. It is like me saying that my hammer and saw built a table, and tools don’t make mistakes, therefore my table is perfect. Therefore any imperfection which you might think you see in my table is purely the result of your own error in perception, and so you’d better get your mind right, or else!

    And the basic elements of the mental gymnastics of this harken back to the ancient Medo-Persian concept of a king’s “infallibility” (and likewise modern papal “infallibility” which is a direct descendant of the ancient version), except that we are now being prepped (and have been for several generations now) to believe in the possibility of computer infallibility. Thus a counterfeit “ruling from on high” (from a “Master Mainframe” or some such) that will be said to have absolute infallibility. This can be the singular, ultimate purpose and goal of AI.

    Come to think of it, this is precisely the thesis in Ira Levin’s novel, This Perfect Day, circa 1970:
    https://www.amazon.com/This-Perfect-Day-Ira-Levin/dp/160598129X

    In Levin’s book, the phrase “thank God”, often used today when someone learns of good news after much worry, has been replaced with “thank Uni”. “Uni” refers to “Unicomp”, the master, or “universal”, computer which directs, manages, and micro-manages all of human society. Of course it turns out that a minuscule clique of human rulers, or programmers, are making all the decisions. I have no doubt that such a fantasy (of absolute power) is very much alive and well in some circles today.

    Where Levin, I think, failed, is in the anonymity of the “programmers”. Pride and arrogance, eventually, would require that the human programmers be acknowledged by all the world as the ultimate authority. The computer would merely be the means, as well as a set of psychological props, to get them to that point, and then to serve as a database and monitoring system for use by the fully public, human authorities. That appears to be where we’re heading right now.

    • The above concept goes back, in modern times, much farther still. The Star Trek series depicts at least one robot race that has developed beyond its biological designers, and then of course there is the Borg Collective: a melding of man and machine. Arthur C. Clarke’s 2001: A Space Odyssey, circa 1968, speaks of autonomous “monoliths”, having far outlived their builders, which roam the galaxy, imparting intelligence to promising species, thus acting as gods. The Day the Earth Stood Still, circa 1951, posits a race of robots having absolute (destructive) power to keep “peace” (meaning in this case “oppressive fear”) in the galaxy.

      These are all very much Catholic (which translates literally to “universal”) and Jesuitical notions, by the way, and as such they will remain with us, and grow in influence, until the end. The exact details of how they will be implemented, or are being implemented, can and will vary, but once you understand the mindset and its origins, and the prophesies which refer to it, you’ll be able to see it in action pretty much everywhere. It is a mature system.

      And so, to get too deep into the nuts and bolts of AI specifically, and what its capabilities are and are not, is, I think, to somewhat miss the point. It’s the mindset that’s driving it (the motivation and goal set) which matters more, and which transcends (and pre-dates by millennia) the actual technology. From this outlook, for example, I can see a time when your self-driving electric car will be merely the cellular component of a dream that the Romish left has had for well over a century: what is essentially a light rail system, complete with central planning, prioritizing, scheduling, and so on. You’ll be on what amounts to a train, but you’ll believe that it’s your own car. It’s what they’ve long wanted; it’s what they’ve been pushing for and writing about, longingly, for generations; it’s what they’ve been instilling in our minds from various directions and by various means; and one way or another they’ll get it.

      You can call that The Matrix if you like, and although it will not look anything like the one in the movie, and it will be controlled by humans with names and titles, it will have the same basic effect. It’s already here, for the most part.

  4. So, it seems that the Dunning-Kruger Effect is in play here, which means that artificial intelligence simulates natural stupidity.

    ~single eyebrow raise~
    Fascinating.
    ~/single eyebrow raise~

Comments are closed.