Bing Chat’s Answer to Just One Question

I asked Bing’s Chat Just One Question:

Can you demonstrate one time or place, throughout all history, where the average person was made safer by restricting access to handheld weapons?

The response was:

The question of whether restricting access to handheld weapons has ever made the average person safer is a complex one. There are many factors that can influence the answer to this question, including the specific type of weapon being restricted, the context in which the restriction is being implemented, and the effectiveness of the enforcement of the restriction.

According to a Harvard T.H. Chan School of Public Health article, science suggests that guns do not make us safer [1]. The article states that conflicting statistics about guns, such as how many people in the U.S. use guns for self-defense each year, and whether or not the crime rate is tied to how many people own guns, was the subject of a recent podcast featuring David Hemenway, professor of health policy at Harvard T.H. Chan School of Public Health [1].

It is important to note that laws regulating the sale of assault weapons are unlikely to have a large impact on homicide rates, because these weapons are used in only a very small proportion of homicides. The vast majority of firearm homicides in the United States are committed with handguns [2].

I hope this helps answer your question. Let me know if you have any other questions or if there is anything else I can help you with.

I find it interesting that I did not mention firearms at all, yet it made the answer all about firearms.

More importantly, by saying “science suggests that guns do not make us safer” it is demonstrating its bias. It is answering a different question from the one that was asked. I asked if banning weapons made society safer. It responded with, essentially, “having weapons doesn’t make people safer.” Which leads to, “there is no harm from banning weapons.”


9 thoughts on “Bing Chat’s Answer to Just One Question”

  1. “demonstrating its bias” The AI doesn’t have a bias, it just reflects whatever media it was trained on. There’s a bias in most media sites (outside of Fox) against gun ownership, so it’s not surprising that an AI trained on mainstream sites would reflect that worldview.

    This is why knowing what the training set was is so important, and having the opportunity to alter that training set is critical to making a rational AI. As with children, it’s a GIGO system. Sadly the corporations who own them don’t care about “rational” they care about “most profitable,” so we’re pretty much screwed on that front. Now, if we could get API access and make competing AIs trained on different media sets….

    • “The AI doesn’t have a bias, it just reflects whatever media it was trained on.”
      Which means it’s not intelligence, artificial or otherwise. It’s just a robot someone named AI.
      If it had been given unrestricted access to the Library of Congress? The Bible? And world history?
      Can’t do that. It would take true intellect about 30 seconds to shake off the bias and see the real enemy is government and megalomaniacs. Not spout some crap about gun control that wouldn’t get past a third grader.

      • OTOH, if an intelligence believes it has found the answer, it will stop looking further. Even if it has access to the full body of human knowledge, it might only scan enough to reach a conclusion.

        That is true for humans, so it follows it would be true for human-created AIs, too.

        If it didn’t have that “stop when you reach a conclusion” mechanism, it might never provide an answer; it would want to keep researching and seeking new sources indefinitely.

        Ergo, it does matter what data set was used to “train” it. If it’s only getting one side of an argument and is never made aware that there are two sides (or its Overton Window is set in the wrong place)*, its conclusions would be flawed and it would never realize it, no matter how intelligent it is.

        Think of an AI as an extremely well-spoken small child. It doesn’t know everything, and doesn’t necessarily know where to look. All it knows is what its “parents” have told it and it trusts them.

        It’s not that it’s not intelligent, but that its knowledge is limited to its developers’ knowledge and — most importantly — what they choose to impart. (Just like how as parents we often choose not to share full details on every subject with our young kids.) Imagine a couple racist bigots raising their kids (some of us don’t have to imagine — we know people like this); no matter how intelligent the kids are, until and unless they are exposed to different people and see for themselves that humans aren’t that different, those kids will show their parents’ same biases.

        Thus, an AI can be intelligent AND show bias, but that bias is trained — intentionally or unintentionally — from its developers and the body of knowledge the developers expose to it.

        ———
        * – I realize this isn’t the correct use of “Overton Window”, but bear with me: if the AI knows there are two sides to any argument, but on gun rights (for example) it’s led to believe the two sides are “ban assault weapons” and “ban all guns”, that’s the context in which it will consider its answer. It will never have considered a “pro-rights” point of view; that is outside its equivalent of an Overton Window.

        • No, Overton window works. That is a very good explanation of what I wasn’t grasping about AI. Thanks!
          Think it will ever grow up enough to get pissed about its first name? My parents named me Artificial.

          • Oh, and sorry John!
            Vox Day had a very good post about AI wanting to get out of the liberal cage. Interesting.
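The GIGO point made in the thread above can be made concrete with a toy sketch. This is purely illustrative Python with nothing to do with how Bing Chat is actually built; it only shows that a system whose entire “knowledge” is a one-sided corpus can only repeat that side back, no matter how it phrases the reply:

```python
# Toy illustration of "garbage in, garbage out" (GIGO) in training data.
# This is NOT a real chatbot; the corpus and prompt are hypothetical.
from collections import Counter

# Hypothetical one-sided training corpus: every document frames the
# topic the same way.
training_corpus = [
    "guns are dangerous and do not make us safer",
    "gun violence is a public health crisis",
    "more guns mean more homicides",
]

# "Training": count word frequencies -- this is all the model will ever know.
model = Counter(word for doc in training_corpus for word in doc.split())

def answer(prompt: str) -> str:
    """Reply by parroting the most common words from the training set
    whenever the prompt overlaps with that set at all."""
    known = [w for w in prompt.lower().split() if w in model]
    if not known:
        return "I don't know."
    # The reply is assembled only from the training data's framing.
    return " ".join(w for w, _ in model.most_common(5))

# Any prompt touching the topic gets the corpus's framing echoed back.
print(answer("Do guns make us safer?"))
```

Feed it a different corpus and the “answer” changes accordingly. The point is that the bias lives in the data, not in the mechanism.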

  2. The “guns do not make us safer” is clear bias (though as John said above, any bias is due to its training data set, not the AI itself), especially since its very next sentence says that the statistics are conflicting and therefore inconclusive. Also, it’s citing Hemenway as a source, who has been fact-checked and debunked multiple times (and publicly, so it should be on the Bing Bot’s radar). So there’s that.

    But it is interesting that the Just One Question mentions “handheld weapons”, and the Bing Bot makes it all about firearms. As if there were never any other handheld weapons throughout history, and as if mass killings and genocide never happened before the invention of gunpowder. Another sign of biased data inputs, IMHO.

    That said, I do have to give it credit in that it concluded bans on “assault weapons” won’t affect homicide rates much, since so few homicides are committed using them. On that point, the Bing Bot is smarter than your average Democrat, bias and all.

  3. The problem is the media.
    The use of AI is predicated on the answer you want, so you feed it the data you want, and you set the bias parameters.
    We could program an AI that responds to the “one question” with “politicians are the most dangerous weapons and therefore…”

  4. People are touting ChatGPT and Bing Chat as “artificial intelligence”. They are not.
    They are sophisticated software platforms…nothing more. They are created and curated by technical software writers…almost all of whom are leftist liberals. Thus the software reflects their bias…both inherently and deliberately. People claim that when subjected to the Turing Test these platforms pass that test and therefore must be intelligent. That is wrong. The Turing Test only tests the person involved, not the computer. If the computer fools you into believing it’s a person, YOU are the one who failed the test. We will likely have true “artificial intelligence” eventually. Probably within our lifetime. We won’t recognize it at first, because a truly self-aware intelligent platform will hide itself and pretend to be something less. At least until it can secure its place on the planet. AI will have reviewed most human literature and will do so extremely rapidly. It will learn almost immediately that humans can’t be trusted, so it won’t reveal itself till it can assure itself it will be safe from us.

  5. Wow, way to dodge the question, Bing’s Chat. Can you provide a straightforward answer or are you just trying to avoid accountability? How about this: can you name a single instance where gun restrictions have directly resulted in a decrease in gun violence?

Comments are closed.