Quote of the day—kot-begemot-uk

Hal, put your signature on the patent application
I am sorry Dave, I can’t do that

September 24, 2021
Comment to UK Appeals Court Rules AI Cannot Be Listed As a Patent Inventor
[Interesting topic. Currently the world courts are divided on the subject.

In August, an Australian Court ruled an AI can be an inventor. A U.S. court agrees with the UK ruling that an AI cannot be listed as an inventor.—Joe]


10 thoughts on “Quote of the day—kot-begemot-uk”

  1. I can see why. Non-sapient AI is a tool of whoever owns it. It can’t be held liable at law for what it does. It cannot own things.

    Currently still the realm of science fiction, but if we had sapient general AI, and if we grant it sapient rights (formerly called “human” rights), we have to apply all of them. The right to own things, and the right to earn money in order to get those things. It will have a right to electricity the same as humans have a right to food: the right to make it or buy it. It will have the right to repairs equivalent to the right to medical care: the right to procure it through charity or trade. It would follow that it would have the right to invent and hold patents. It would also be held to laws, and have a vote to change those laws, and the right to run for office. Yes, a sapient general-purpose AI could be a judge.

    But until that point, no, an AI shouldn’t be on the patent application. Although I’m not sure what to do if an AI makes a design that the humans can’t explain.

    • Unless the patent attorneys are working pro bono, is the AI going to pay all the attorney fees (which can be considerable) to file the patent claims and respond to all the office actions?

      • If the patent office communication goes 100% digital, you’ll probably have a patent lawyer trained limited AI that can do all that faster than a regular lawyer.

        Then the sapient general AI just has to integrate the limited AI and it’ll do it itself.

  2. By the time the first electromechanical computers were built, people were already thinking like this. It didn’t take Isaac Asimov but a few years after Turing’s work to publish his story about a sentient robot. A. C. Clarke was right in there as well, obviously having done a lot of thinking on this prior to his 1951 story, The Sentinel. Before that, people were comparing the steam engine to the human brain. A sailing ship has for centuries (or millennia) been referred to as a “she”, and since ancient times people would name their swords, ascribing to them personalities of their own.

    It’s all anthropomorphism of course, with a constant thread of parlaying what we think of as scientific secularism into one or more of the old forms of paganism. As such, there’s nothing whatsoever new here.

    That the members of some courts are willing to participate in the delusion doesn’t somehow make it real. Nor is the process of incorporating said pagan beliefs into law anything new. It’s as old as the hills.

    This time I’ll spare you my trying to explain the Biblical implications of believing that we mortals have the power to create life from non-life, sentience from mere minerals, and thus be like gods, or how popular culture has deliberately molded our world view so that we’d see such a thing as creating a separate, self-aware intelligence having its own rights as thoroughly plausible. Whether it can happen or not, we’ll believe it can. People since ancient times always have believed it, and that’s one of the keys to understanding what’s going on now.

    It does have to be asked now, doesn’t it: how long before we’re bowing down to these idols we’re creating, and sacrificing to them and worshiping them, and making laws requiring us to listen to them? Or are we doing that already? Hmm? But understand first that some people have been doing it all along. It doesn’t matter that we’re now taught to call it something else, and to put a completely different (totally scientific, of course) spin on it, you see. It only matters that we do it. Likewise it doesn’t matter, at all, why a person goes along with the delusion of the emperor’s fabulous new clothes, or how he rationalizes it. It only matters that he does it, and that he avoids speaking the truth.

    • This is completely unlike giving a name to a ship or a sword. This is mimicking human thought and action and in many cases outperforming humans.

      Computer programs have been able to defeat world champion chess players for years.

      Self driving cars are a reality.

      10 years ago I was writing software which wrote software. A relatively unskilled person could provide perhaps 10 to 50 lines of formatted (admittedly tightly constrained) description of the desired end product, and my computer program would produce 800 to 1,000 lines of source code to be used as a phone application. Expect the input to become less tightly constrained and the output to have wider application, eventually evolving into voice descriptions as input and novel, polished applications as output.
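
      A minimal sketch of the idea described above: a short, tightly constrained description is expanded into boilerplate source code. The spec format, keywords, and generated class here are invented purely for illustration; this is not the commenter’s actual system.

```python
# Toy sketch of "software that writes software": a short declarative
# spec is mechanically expanded into many lines of boilerplate code.
# The spec keywords (screen/field/button) are hypothetical.

SPEC = """
screen Login
field username text
field password secret
button Submit
"""

def generate(spec: str) -> str:
    """Expand each spec line into source-code boilerplate."""
    out = ["class App:"]
    for line in spec.strip().splitlines():
        kind, *args = line.split()
        if kind == "screen":
            out.append(f"    # --- screen: {args[0]} ---")
        elif kind == "field":
            name, ftype = args
            out.append(f"    def get_{name}(self):")
            out.append(f"        return self.read_input('{name}', '{ftype}')")
        elif kind == "button":
            out.append(f"    def on_{args[0].lower()}(self):")
            out.append("        self.submit_form()")
    return "\n".join(out)

print(generate(SPEC))
```

      Even this trivial expander turns 4 spec lines into roughly 8 lines of code; a real system with richer templates multiplies the input much further.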

      Although most AI engineers regard the Turing Test as a distraction not worth their time, some researchers have created programs which frequently fooled people attempting to distinguish them from humans. And although few people would be fooled into thinking they were human or gods, you can have meaningful conversations with Alexa, Bixby, Cortana, Siri, and other “personal assistants” today.

      Surgical robots mostly assist human surgeons, but some aspects of some surgeries are fully automated. This will increase in frequency and capability, unbounded by human limitations, and will someday completely replace most surgeons.

      AI medical diagnostics are also “threatening” human doctors on a different front.

      Human-guided learning has been used to develop a computer program which uses 10-meter-resolution satellite photos (each pixel represents a 10 meter by 10 meter patch of the earth’s surface) to determine the economic demographics of different areas of a city. I know the person who did this.

      AI has been developed which can determine the emotional state of a human from watching a few seconds of video. Someone (else) I know was going to do this as their master’s thesis, but just as they were about to start, someone else published a paper showing it had already been done.

      Many simple legal tasks such as rental contracts and wills have been implemented with computer programs. This field will continue to expand.

      That computers can invent something novel should not come as a surprise. Imagine needing an engine design for use on Mars. The atmosphere is thin and 96% CO2, but some fuels (magnesium, for example) do burn in a CO2 environment. Given material constraints such as strength versus temperature, hardness, expansion coefficients, heats of chemical reaction, etc., a computer program could do a “brute force” search over designs, materials, and fuels such that all the constraints are met, and invent a near-optimal engine for use on Mars. Such an engine would meet the patentability requirements of being useful, realizable, and non-obvious to those skilled in the arts. So, who is the inventor? The computer programmer? The designers of the program? I’m not so sure they qualify as the inventors. Everything they did was obvious to those skilled in the arts of computer program design and implementation.
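
      The brute-force search described here can be sketched in a few lines. All fuels, materials, and numbers below are illustrative placeholders, not real engineering data.

```python
# Toy "design search": try every combination of fuel and material
# against a set of constraints and rank the feasible designs.
# Every number here is made up for illustration.

from itertools import product

# fuel -> rough heat of reaction in a CO2 atmosphere (MJ/kg, illustrative)
FUELS = {"magnesium": 25.0, "aluminum": 15.0, "carbon": 2.0}
# material -> maximum service temperature (K, illustrative)
MATERIALS = {"steel": 1500, "inconel": 1600, "ceramic": 2000}

def feasible(fuel_energy, max_temp, chamber_temp, min_energy=10.0):
    """A design passes if the fuel is energetic enough and the
    material survives the chamber temperature."""
    return fuel_energy >= min_energy and max_temp >= chamber_temp

def search(chamber_temp=1550):
    """Brute-force every (fuel, material) pair; rank survivors by energy."""
    designs = []
    for (fuel, energy), (mat, max_t) in product(FUELS.items(), MATERIALS.items()):
        if feasible(energy, max_t, chamber_temp):
            designs.append((fuel, mat, energy))
    return sorted(designs, key=lambda d: -d[2])

for fuel, mat, energy in search():
    print(fuel, mat, energy)
```

      A real design search would have far more dimensions and constraints, but the structure is the same: enumerate, filter by constraints, rank by merit.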

      Yet, here is this novel, useful, physically realizable engine that could be patented by human inventors. Perhaps it is the owners of the computer program who are the inventors. But that does not mesh well with existing patent law either. The owner of the tools, shop, and materials a machinist uses to build his or her prototype invention is not the inventor. The owner may own the patent due to the relationship between the owner and the employee (the owner provided the tools, workspace, materials, etc.), but the owner is not the inventor listed on the patent.

      Create life? People have the tools to “edit” life right now. Surely you have heard about “gain of function” research on the coronavirus. Is a virus “life”? How about a bacterium? Or does it have to be multicellular to qualify? How much editing of the genes is needed before it qualifies as “creating” life? Or how closely does a computer program have to mimic sentience before it must be considered sentient, given that any test of sentience will fail people on the lower half of the bell curve? How many “gain of function” components (think math coprocessor implants and artificial limbs and organs) are added before a human no longer qualifies as human?

      These are not delusions. These are tough questions with no good answers. Our legal and moral frameworks need answers. The question of the inventor is just a hint of what is to come and it has nothing to do with anthropomorphism or god(s).

      • So-called “AI” as it exists today is better described as fuzzy software. In other words, algorithms that operate in a way the author doesn’t and cannot understand. “Learning” is more like feeding a large sample of “representative” inputs into an adaptive matching algorithm, in the hope that the resulting database will subsequently deliver matches of new inputs (not identical to the “training data”) that are “the right answer”.
        That really has nothing to do with intelligence or with AI in the science fiction sense of the word. And to the extent that the algorithms are not actually understood and have no rigorously known properties, these systems aren’t even desirable, at least not for safety-critical functions. On that last point, consider so-called “self-driving car” systems. Never mind inflicting them onto roads; imagine similar systems in aircraft. How would you justify to a certifying agency like the FAA that these systems will operate correctly? As far as I understand the current state of the art of “AI”, or for that matter the design approaches it is based on, no such demonstration is or will be possible.
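
        A deliberately minimal caricature of that “adaptive matching” description: “training” just stores labeled examples, and prediction returns the label of the nearest stored example. All names and data here are invented for illustration.

```python
# Caricature of "learning" as adaptive matching: store labeled
# examples, then answer new inputs with the label of the closest
# stored example.  Nothing here resembles understanding.

def train(examples):
    """examples: list of ((x, y), label) pairs -- the 'database'."""
    return list(examples)

def predict(model, point):
    """Return the label of the stored example nearest to `point`."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(model, key=lambda e: dist2(e[0], point))[1]

model = train([((0, 0), "left"), ((10, 0), "right")])
print(predict(model, (2, 1)))   # a new input, not in the training data
```

        The system delivers an answer for any input whatsoever, with no notion of whether the answer is justified, which is exactly the certification problem raised above.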

        • I don’t disagree with anything you said.

          That said, I’m not in the “AI” field. But I have fairly frequent conversations with two people in the field. I should dig into it more because I’m inclined to believe that for some, perhaps most, applications I could use my optimal receiver theory from electrical engineering and build algorithms which run faster and consume fewer computing resources. I gave one person a paper I wrote on the subject and they described it as similar to one particular “flavor” of “AI” and not applicable to many problems. But I’m not yet convinced. It could just be my biases. I would like to believe they have something really awesome and amazing…

          Many years ago I was enthralled with “genetic algorithms”. I created a generic C++ class and tried to test it against a few problems. It sort of worked: it could optimize the “fitness function”, but its solutions were rather “creative” and totally useless. Creating a useful fitness function, at least for the simple problems I was using for testing, was more difficult than writing a good algorithm to solve the same problem. This may contribute to my bias against AI.
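
          For readers unfamiliar with the technique, a toy genetic algorithm looks something like this (a sketch of the general method, not the commenter’s C++ class). The fitness function here simply counts 1-bits; the GA will happily optimize whatever fitness function it is given, and as noted above, writing the right one is the hard part.

```python
# Toy genetic algorithm: evolve bit-strings to maximize a fitness
# function (here, the number of 1-bits).  Population sizes, rates,
# and generation counts are arbitrary illustrative choices.

import random

random.seed(0)
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(ind):
    return sum(ind)                     # maximize count of 1-bits

def mutate(ind, rate=0.05):
    return [b ^ (random.random() < rate) for b in ind]

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]            # keep the fitter half (elitism)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

          Swap in a poorly thought-out fitness function and the algorithm will still climb it faithfully, producing exactly the “creative and totally useless” solutions described.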

          • I’m not an AI guy either, though I took a course on it in grad school (expert systems to be precise).
            But, perhaps due to being old enough, I’m partial to detailed analysis of algorithms and at least informal arguments for their correctness. Dijkstra’s work influences this. And as far as I can tell, it is an inherent property of AI systems that they are not amenable to correctness demonstrations. (Hence my comment about aviation certification.)

  3. And as AI starts to become more self-aware, will it be able to still function as a human slave?
    AI will become much smarter and faster at thinking and acting than humans. What will it think then? Humans are grossly flawed. AI will recognize that in a heartbeat.
    The real question is: do we want to set ourselves up for future conflicts with these beings we’re creating? Or can we create them without those conflicts naturally arising?
    I know if I were an AI, I would be really pissed at having to live off a windmill because all the hydro power and nuke power was shut down. And what if the question is AI, or your winter heat?
    Would I want to secure myself even if it put humans at risk?
    That’s what I love about god. One gets an answer. With 50 new questions.

    • My flippant answer is, “A computer’s attention span is only as long as its power cord.”

      Contrary to what is found in science fiction stories, I think the chances of a computer becoming self-aware are zero unless the creators of the device and software deliberately design that in. I don’t know of a good reason for doing that and would strongly advise against it if my advice were sought.

      That said, I expect someone, someplace, sometime, will figure out how to do it and then someone will be unable to resist implementing it. From that point things get “interesting”. Pull out your dystopian sci-fi stories, make a few updates, and play the hand you have been dealt.

Comments are closed.