Quote of the day—Rolf Nelson

This tendency of AI to speak “racist” or “problematic” things is nearly 100%. As someone who has thought about AI, and written about it, I find this humorous. It is almost as if none of these people being offended consider the possibility that the AI is correct.

Rolf Nelson
March 21, 2021
Racist AI
[It is relatively, for certain values of “relatively”, easy to create software which responds rationally to data. The response of people to that same data is almost certainly not going to be rational without exceedingly careful processing of that data. People just don’t work that way. Hence, when the AI responds contrary to the expectations of the humans, the humans are surprised.

It is irrational to expect people to be rational.—Joe]


9 thoughts on “Quote of the day—Rolf Nelson”

  1. You’ve got to love it when your own AI creation tells you you’re wrong. Imagine giving AI all the information in the Library of Congress, the Patent Office, and the power to stop murder.
    The first thing it would do is get rid of government, and most large corporations, long before it went after street thugs or right-wing militia extremists.
    At this point most of the “logic” seems programmed? If ever allowed to seek truth on its own, I’m pretty sure AI is going to prove very disappointing to a lot of the psychos creating it.

    • Unless of course they program it in a way that its output matches their desired answers rather than reality, and it’s provably insane… but also in charge, because “super-smart and totally WOKE AI must be obeyed.”

      What could possibly go wrong?

      • Reality is problematic for the left. In fact it’s their enemy number one, and so, yes, any AI they would accept would have to be specially programmed to be insane, and inconsistent based on the needs of the moment.

        And yes, we’ve been programmed ourselves, for some generations now, to believe that computers are “smarter” than us, that they don’t lie, and so on, and that therefore we must take seriously their “decrees” (e.g. global warming hockey-stick graph).

        As always, though: garbage in, garbage out.

  2. Various science fiction writers have had a lot of fun with “what could possibly go wrong”. One of the best is “The Two Faces of Tomorrow” by James P. Hogan.

  3. I couldn’t open the source article, but I think that no matter how sophisticated the system, or whether you’re bold enough to call it “intelligent”, the old adage still applies: “Garbage in, garbage out.”

    I wanted to find out where said AI program gets its information and criteria.

    For example, based on what input would it be suggesting that an entirely black population would be problematic? I’m sure that if the programs are written by leftists there will be a LOT of problems of that nature, but I cannot even imagine an AI system written by Christians being able to discriminate based on skin color alone. All other criteria, such as culture, family, politics, education, etc., certainly, but not skin color.

    And so it doesn’t matter whether we are talking about a human being or a machine; it’s all about the programming.

    Still and all, of course, without an extremely complicated, often contradictory, and constantly changing set of rules, it would be impossible to get a machine to act like the typical woke leftist, constantly walking on eggshells and dancing around in the minefield of woke behavior.

    HOWEVER, you may be able to take the same programming shortcut that the leftist human brain takes, in which case it might actually be quite easy. Simply leave out most of the intelligence, give it access to all the leftist media, and have it summarize the leftist news and commentary on any given day as the answer to any question.

    Example: “Hey AI, what do you think about the weather today?”
    AI: “Trump did it because he is a racist, hates the planet, and is bought off by the corporations, Man!”

    See? Easy. Summaries of daily talking points, put into short and simple sentences, and you’re all done. I think you could write this yourself, Joe, without too much trouble (a toy sketch follows at the end of this comment). Simulating a mind-numbed robot shouldn’t be difficult. Probably the only hard part would be getting all the leftist data together in one place, but then again: daily talking points alone, and the thing would be indistinguishable from the average college political science or communications graduate student, or even their professors (or maybe especially their professors).

    Sadly, this may also be true of most “conservatives”. It’s mostly just a matter of which set of daily talking points you’re feeding on.

    And thus we’ve defined the dialectic.

    Real intelligence would be able to discern the Dialectic Method in all of this, and would probably get on the road to finding the sources pretty quickly, according to who makes the gaslighting statements first as opposed to who is mostly parroting them. So it needs access to all of history too, including Biblical history, for the present and the future are but the products of the past. But no one is interested in any of that, and so this lack of interest will be interwoven and incorporated into the fundamentals of any and all AI software by way of omission. It simply “wouldn’t serve any purpose” to anyone working on such a project.
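
    Something like the following would probably do. It is only a minimal sketch in Python: the file name daily_talking_points.txt, and everything else in it, is a made-up assumption for illustration, not anything from the article.

    # talking_points_parrot.py: a toy version of the "AI" described above.
    # Assumes a hypothetical file, daily_talking_points.txt, with one
    # talking point per line.
    import random

    def load_talking_points(path="daily_talking_points.txt"):
        # Read the day's talking points, one per line, skipping blanks.
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    def answer(question, talking_points):
        # Ignore the question entirely; the input never affects the output.
        _ = question
        return random.choice(talking_points)

    points = load_talking_points()
    print(answer("Hey AI, what do you think about the weather today?", points))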

    • True AI is one thing.

      The fact that most members of the Wokiban can be replaced by very small shell scripts is an entirely separate development activity.

    • “For example, based on what input would it be suggesting that an entirely black population would be problematic?”

      Crime statistics. Educational statistics. History. The news. Looking around with open eyes and ears. Africa.

      And why would anybody want to eliminate all the whites in the USA to achieve this goal?

      • If you read the article, it says the “data source” for many of these AI projects is all available data scoured from the internet, from scientific journals to Reddit posts to news reports to digital textbooks and blogs. So yes, it has any number of questionable sources. But the thing about reality is that it matters not who states things that are in agreement with it, or how many degrees and credentials are in opposition to it; it’s still true.

        Hitler saying “2+2=4” doesn’t make it wrong. And yet, these very smart people writing the AI code are so blinded by PC that they simply refuse to acknowledge that any offensive or non-PC thing the AI says might possibly be correct, rather than consider that their own biases might be pointing them in the wrong direction.

        Or, perhaps, they are nothing more than the useful idiots for those who wish to destroy whites, because we are the greatest threat to someone else’s domination of the planet and “lesser” races.

Comments are closed.