
Can an A.I. System Be Considered An Inventor?


Alex_Krycek


9 hours ago, Alex_Krycek said:

There's been a series of lawsuits recently

You are asking a legal question, not a metaphysical one. If some court rules a washing machine a person, then legally a washing machine is a person. Modern washing machines have chips and "make decisions." That's very different from asking whether an "AI" (in quotes, since there is no true AI) can be sentient, be creative, or invent things. We already have programs that write other programs, for example compilers and 4GL languages. That doesn't mean anything.

The program in question runs on conventional hardware and is in principle no different than any other conventional program. Machine learning and neural net approaches are a clever way to organize datamining, but all they can do is identify patterns based on what's already happened. By definition they can't create. 

But again, those are arguments for the metaphysical position that contemporary computers are not sentient/creative/self-aware/whatever. You are asking a legal question though, and the answer to that is, "Whatever a court decides is the law."

 

9 hours ago, iNow said:

I think of it a bit like a parent being eligible to control earnings of a child. In the same way, the creator is seeking eligibility to control earnings from the inventions of its AI. 

Are you suggesting that my hammer owns the chair I built with it? Such a view is nonsensical. Does your computer own the words you typed into this forum? A program doesn't invent anything, it just flips bits according to an algorithm. An algorithm written by a human. A computer program is a tool. 

Edited by wtf

41 minutes ago, wtf said:

You are asking a legal question, not a metaphysical one.

I agree completely with what wtf says here. I tried highlighting the same in my very first replies, which were brushed aside.

41 minutes ago, wtf said:

Are you suggesting that my hammer owns the chair I built with it?

Lol. No, and besides, a hammer isn’t the correct tool for chair building. Maybe a saw and a spokeshave, or even a grinder?

41 minutes ago, wtf said:

Such a view is nonsensical.

Here yet again we agree. 

41 minutes ago, wtf said:

A program doesn't invent anything, it just flips bits according to an algorithm. An algorithm written by a human. A computer program is a tool. 

Here, however, we do not.

This view of modern AI is simplistic to the point of uselessness, especially when considered in the context of machine learning, where most of the algorithms and programs running didn't come from human programmers.

However, let’s just ignore that for the sake of argument and accept your stance that the program is just flipping bits according to an algorithm… how is that any different from how the human mind works?

After all, we're just flipping sodium and potassium channels on a biological substrate. We're just an algorithm running on a meat computer.

Your stance could equally apply to us, and that would be absurd. I’m suggesting it’s absurd also to apply it to modern (and rapidly developing) AI. 

Edited by iNow

1 hour ago, wtf said:

You are asking a legal question, not a metaphysical one.

1.  The main philosophical question here is: can an A.I. system be considered an inventor?  *which was the question posed to the courts*

Dismissing the question as "merely legal" and "whatever the courts say" is avoiding the issue.

Not every legal question is strictly just that; even the judge acknowledged the underlying philosophical implications in this case:

 "I need to grapple with the underlying idea, recognising the evolving nature of patentable inventions and their creators. We are both created and create. Why cannot our own creations also create?" - Justice J. Beach

2.   We are also discussing some related questions, such as the capabilities of the DABUS system Thaler has created, and whether or not it lives up to his extraordinary claims.

His claims are the catalyst for this debate, since he is indeed positing that the DABUS system is capable of conceptualizing new ideas on its own in an autonomous fashion, which would qualify it as a kind of sentient consciousness.  

 


On 8/29/2021 at 5:33 PM, iNow said:

I agree completely with what wtf says here. I tried highlighting the same in my very first replies, which were brushed aside.

I am happy to find some agreement on an online forum. That's so rare! The original post was all about legal issues and patent decisions and so forth. It's perhaps unclear to some that courts and legal and political processes are not subject to philosophy or cognitive science or computer science or even rationality. The law is whatever some court says it is. If the OP wanted to ask, "Can a machine be sentient?" or some variation on that theme, that's what they should have asked. 

 

 

On 8/29/2021 at 5:33 PM, iNow said:

Lol. No, and besides, a hammer isn’t the correct tool for chair building. Maybe a saw and a spokeshave, or even a grinder?

Point being that tools are tools, and just because a washing machine has a chip does not mean that it's sentient. 

 

On 8/29/2021 at 5:33 PM, iNow said:

Here, however, we do not.

This view of modern AI is simplistic to the point of uselessness, especially when considered in the context of machine learning, where most of the algorithms and programs running didn't come from human programmers.

Ok, this is a common point made by the proponents of machine learning (ML) and neural network (NN) approaches to weak AI. Weak meaning specialized for a purpose, chess or driving or whatever, and not the mythical general or human-like AI, which we are no closer to now than we were in the 1960s when the AI hype train got started.

Let me make some brief points.

The algorithms don't come from human programmers? Well that's what 4GL languages have been doing since the 1970's. The classic example is relational databases. You ask the computer to "Show me the names of yellow fruits," and it outputs, "Lemons, bananas, papayas." You did not tell it how to traverse the database. In fact you gave it the "what" and the database system itself figured out the "how," based on optimizing the query. Programs that write algorithms are very common and nothing at all new or revolutionary. 
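That "what, not how" split is easy to demonstrate with Python's built-in sqlite3 module; the fruits table and its contents here are made up purely for illustration:

```python
import sqlite3

# In-memory database with an illustrative fruits table
# (table and column names are invented for this sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruits (name TEXT, color TEXT)")
conn.executemany(
    "INSERT INTO fruits VALUES (?, ?)",
    [("lemon", "yellow"), ("banana", "yellow"),
     ("papaya", "yellow"), ("apple", "red")],
)

# Declarative: we state WHAT we want; the query planner
# decides HOW to traverse the data (scan order, index use, etc.).
rows = conn.execute(
    "SELECT name FROM fruits WHERE color = 'yellow'"
).fetchall()
yellow = sorted(name for (name,) in rows)
print(yellow)  # ['banana', 'lemon', 'papaya']
```

The SELECT statement contains no traversal logic at all; the "algorithm" that answers the question is produced by the database engine, not the human.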

It's true that the NN systems assign weights to nodes, and get "trained" on sample data and so forth. I know how this works. But it's not true that the algorithms are not created by humans. In fact they are. We could in principle trace the algorithms using pencil and paper. In fact the ML students find out that no matter how gee-whiz the field first sounds, what they actually end up doing is learning to optimize matrix multiplications. ML and NN's are algorithms created by humans.
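To make the matrix-multiplication point concrete, here is a minimal one-hidden-layer forward pass in numpy. The weights are arbitrary placeholders, not a trained model; the point is that once they're fixed, inference is plain traceable arithmetic:

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity: clamp negatives to zero.
    return np.maximum(0.0, x)

# Placeholder weights; in practice these come from training,
# but afterward they are just fixed numbers.
W1 = np.array([[0.2, -0.5],
               [0.8,  0.1],
               [-0.3, 0.4]])   # 3 inputs -> 2 hidden units
W2 = np.array([[1.0],
               [-1.0]])        # 2 hidden units -> 1 output

def forward(x):
    # The entire "neural" computation: multiply, threshold, multiply.
    return relu(x @ W1) @ W2

# Deterministic; you could reproduce this with pencil and paper.
y = forward(np.array([1.0, 2.0, 3.0]))
```

Stacking more layers adds more multiply-threshold steps, but never anything beyond ordinary arithmetic on a conventional machine.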

It may be true that "we can't easily tell what the program is doing." But in fact that's true of every nontrivial program anyone ever wrote. You write a fifty line program and it doesn't do what you think you told it to do, and you spend the rest of the day figuring out why. Or some maintenance programmer gets hired by a bank and has to dive into the hundreds of millions of lines of code that date back to the 1960's. You try to fix things and not break other things. Very few actual professional programmers have the faintest idea how the systems they maintain actually work. 

Most working programmers work with APIs and inside frameworks or application servers and have no idea even of the local operating system facilities. Let alone the system and hardware layers. The average working programmer has very little idea of how their programs are working, especially in the modern age. 

I could belabor this point but I hope you catch the sense of my response. The most advanced "deep" neural net in the world is a conventional program running on conventional hardware. The nodes and paths are data structures; you train the network on sample data (the "corpus," as they call it), and it cranks out impressive-looking feats of weak AI. But it's in principle a physical implementation of a Turing machine. You could run the code with pencil and paper. It runs on standard, conventional hardware. And frankly, a prosaic, everyday system like the global supply chain is more complicated than a chess engine. Nobody knows how the global supply chain works. It's used for grubby commerce so nobody puts it on a pedestal, but it's probably a billion or more lines of code distributed over thousands of individual enterprises. Nobody has any idea how it works.

But to your main point: Even an optimizing compiler outputs algorithms that the human didn't write. It improves them, sometimes in creative ways. Programs have been writing other programs for a very long time. 

 

 

 

 

On 8/29/2021 at 5:33 PM, iNow said:

However, let’s just ignore that for the sake of argument and accept your stance that the program is just flipping bits according to an algorithm… how is that any different from how the human mind works?

We have no idea. We don't know how minds work. Show me the part of the brain responsible for your self-awareness and subjective sense impressions. You can't do that, because nobody in the world can. And surely we don't have bits and registers. The mind does NOT work anything like a digital computer. Only in the popular AI hype-sphere is this regarded as even a remotely sensible debating point. There's just no evidence for the thesis. Brains don't flip bits.

And even if you push the neural network analogy, it fails dramatically. NNs do pattern matching; they haven't the common sense or awareness of the world of a newborn human.

 

 

On 8/29/2021 at 5:33 PM, iNow said:

After all, we're just flipping sodium and potassium channels on a biological substrate. We're just an algorithm running on a meat computer.

That's a claim for which there's no evidence. It's just not true. 

 

On 8/29/2021 at 5:33 PM, iNow said:

Your stance could equally apply to us, and that would be absurd. I’m suggesting it’s absurd also to apply it to modern (and rapidly developing) AI. 

It could, if evidence for your claims showed up someday. Since our best cognitive scientists haven't the foggiest idea how the mind works, you haven't the evidence to support your claim. Rapidly developing AI? Rapidly developing weak AI, yes. Playing chess and driving cars. Strong, or general AI? Hasn't made a lick of progress since the hype train got started in the 1960's. 

 

 

 

 

On 8/29/2021 at 6:24 PM, Alex_Krycek said:

1.  The main philosophical question here is: can an A.I. system be considered an inventor?  *which was the question posed to the courts*

The question is about what some court might say. Here's one case that comes to mind. In 2017 Saudi Arabia granted citizenship to a robot. This decision was not made by philosophers or computer scientists or, frankly, any kind of deep thinkers. It was decided by politicians. If politicians pass a law decreeing that a washing machine with a chip is sentient, then it's legally sentient. It's not actually sentient, but in legal matters, it's the law that counts and not rationality. "The law is an ass" is an old saying.

If the original question was, "Can a machine be sentient?" that would be something we could discuss. But if the question is, "Can a court rule a machine is sentient?" then of course it can. Courts of law do all kinds of crazy things. 

 

 

On 8/29/2021 at 6:24 PM, Alex_Krycek said:

His claims are the catalyst for this debate, since he is indeed positing that the DABUS system is capable of conceptualizing new ideas on its own in an autonomous fashion, which would qualify it as a kind of sentient consciousness.  

 

Well I'll say no on general principles. Since we have no idea what makes sentient consciousness, it's doubtful we could create one. We don't know enough about the mind, by a long shot. Can a computer be capable of "conceptualizing new ideas on its own?" No. At best it can do clever datamining. 

A classic case was a sophisticated AI that learned to distinguish wolves from huskies, something that's difficult for humans. It had a terrific success rate. Then they realized that the training data always showed huskies in a snowy background. What the program was actually doing was recognizing snow.

https://innovation.uci.edu/2017/08/husky-or-wolf-using-a-black-box-learning-model-to-avoid-adoption-errors/
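The snow shortcut is easy to reproduce in miniature. This is a purely synthetic sketch, not the model from the article: the "classifier" never looks at the animal at all, only at background brightness, and still scores well on the flawed training data:

```python
import random

random.seed(0)

def make_sample(is_husky):
    # Toy stand-in for an image: we model only background brightness.
    # The training-data flaw: huskies are almost always shot on snow
    # (bright background), wolves against darker terrain.
    background = random.gauss(0.9 if is_husky else 0.3, 0.1)
    return {"background": background,
            "label": "husky" if is_husky else "wolf"}

samples = [make_sample(i % 2 == 0) for i in range(1000)]

def classify(sample):
    # Never inspects the animal: a snow detector, not a husky detector.
    return "husky" if sample["background"] > 0.6 else "wolf"

accuracy = sum(classify(s) == s["label"] for s in samples) / len(samples)
# Impressive-looking accuracy, for entirely the wrong reason.
```

On held-out photos of huskies on grass, a shortcut like this collapses immediately, which is exactly what the researchers found.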

Another recent datapoint was when Facebook labelled the Declaration of Independence as hate speech.

https://www.cnn.com/2018/07/05/politics/facebook-post-hate-speech-delete-declaration-of-independence-mistake/index.html

With real-life examples like those, you cannot make a case that programs are doing anything even remotely like what humans do in terms of creativity and awareness of the world. Weak AI systems do sophisticated pattern matching on their training data, nothing more.

You can easily find dozens if not hundreds of similar examples. Here's one: an AI fooled by changing a single pixel.

https://www.bbc.com/news/technology-41845878
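A toy version of that fragility (not the actual attack described in the article): a linear scorer sitting near its decision boundary, where changing a single input value flips the label. The weights and the "cat"/"dog" labels are invented for illustration:

```python
# Invented weights; the last "pixel" dominates the score, and the
# original image sits just above the 0.5 decision boundary.
weights = [0.1, 0.1, 0.1, 5.0]
image   = [1.0, 1.0, 1.0, 0.05]

def predict(pixels):
    score = sum(w * p for w, p in zip(weights, pixels))
    return "cat" if score > 0.5 else "dog"

original = predict(image)    # score 0.55 -> "cat"

tampered = list(image)
tampered[3] = 0.0            # perturb a single pixel
flipped = predict(tampered)  # score 0.30 -> "dog"
```

Real single-pixel attacks search for the most damaging pixel in a deep network rather than a hand-built linear model, but the underlying brittleness is the same: decisions hinge on numerical scores near a threshold.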

The hype around AI is really off the charts. It's easier to put into perspective if you know the history of the AI hype cycle. This article traces it back to the 1950's. True AI sentience has ALWAYS been "right around the corner."

https://www.kdnuggets.com/2018/02/birth-ai-first-hype-cycle.html

Regardless of whether the particular system you're asking about can be creative, I hope you can see that the inventor is SUING someone in a court of law. Even if they were to win their lawsuit, that would not make their machine creative. It would only mean they got some court to make a ruling, like the Indiana legislature that once (almost) decreed that the value of pi was 3.2. The only reason the bill didn't pass is that there just happened to be a mathematician present in the legislature that day. 

https://en.wikipedia.org/wiki/Indiana_Pi_Bill

Edited by wtf

6 hours ago, wtf said:

The algorithms don't come from human programmers? Well that's what 4GL languages have been doing since the 1970's.

I believe the crux of any disagreement we may have here resides in the fact that you're thinking of computers in the 1970s, and I'm saying we've come a very long way in the 50 years since then such that these simplistic assertions are no longer quite as valid as you propose. 

 

6 hours ago, wtf said:

The most advanced "deep" neural net in the world is a conventional program running on conventional hardware.

Rather like humans, which is the core of my counter-proposal. We're simply running similar programs on wet computers. In short, I am suggesting you're making a distinction without a difference. If a corporation can be a person in our laws, then likely so too can an AI. There's precedent here.

 

6 hours ago, wtf said:

We have no idea. We don't know how minds work. Show me the part of the brain responsible for your self-awareness and subjective sense impressions.

I refer you to the insular cortex, the anterior cingulate cortex, and the medial prefrontal cortex. You're welcome.

 

6 hours ago, wtf said:

And surely we don't have bits and registers. <...> Brains don't flip bits.

How else would you describe the way action potentials are triggered across the nervous system only after a sufficient build-up of sodium and potassium ions? Once enough charge has accumulated at the axon hillock, a threshold is crossed and the axon depolarizes. I readily acknowledge there are many crucial differences between our minds and digital computers, but the absence of bit flipping and of registered electrical signals is not, IMO, one of them.
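The all-or-nothing behavior being described can be caricatured in a couple of lines. This is a cartoon threshold model offered for the sake of the analogy, not a claim about real neurophysiology:

```python
def neuron_fires(inputs, threshold=1.0):
    # Sum graded input "potential"; the output is all-or-nothing,
    # like the axon hillock crossing its depolarization threshold.
    membrane_potential = sum(inputs)
    return membrane_potential >= threshold  # fires, or doesn't

quiet = neuron_fires([0.2, 0.3])       # sub-threshold: no spike
spike = neuron_fires([0.5, 0.4, 0.3])  # supra-threshold: spike
```

The inputs are graded and analog, but the output is binary, which is the sense in which the "bit flip" comparison is being made.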

 

6 hours ago, wtf said:

That's a claim for which there's no evidence. It's just not true. 

I suggested that we're just flipping sodium and potassium channels on a biological substrate. That has been very well understood and well evidenced for several decades now. Will you kindly clarify which specific part of this you believe has "no evidence" and is "just not true"?

Of all the claims I made, this one is perhaps the best supported, yet it's the one you've chosen to cast aside with a wave of the hand as flatly untrue, and that rather confuses me. I welcome clarification of your dismissal.

 

6 hours ago, wtf said:

Since our best cognitive scientists haven't the foggiest idea how the mind works

Here I'm going to go so far as to say you're arguing against a strawman. I don't believe it's intentional, but I do believe your comments suggest a deep ignorance of the last several decades of research into neurobiology and function, or at the very least an overconfidence in your own seemingly limited understanding of this space. 

 

6 hours ago, wtf said:

Strong, or general AI? Hasn't made a lick of progress since the hype train got started in the 1960's. 

Here yet again, I must dismiss this as quite obviously untrue to the point of being a strawman... it doesn't even rise to the status of hyperbole. It's just false. It's fine for you to hold this opinion, but I believe your opinion is not representative of reality on this particular point.

 

6 hours ago, wtf said:

I am happy to find some agreement on an online forum. That's so rare!

Agreed. Thanks for the cordial exchange and intelligent discussion thus far. It's always appreciated. Let's also remind ourselves to be cognizant of the OP's intention for this thread, and hopefully we won't hijack it with this interesting aside.

Edited by iNow

https://sicara.ai/blog/artificial-general-intelligence

I understand there is a rule here that no one should have to click on a link. For some subjects, where going through a quantity of literature is vital to any meaningful participation, I don't quite see how such a rule can work. It would be ridiculous for me to copy-paste everything in this article on AGI and create such a long post. So, erm, I guess I'll summarize and then suggest it for further reading? Summary: we are far from AGI. Progress has been made since the sixties, but there is still a great distance to go.

Since the law at least tries to be grounded in some reality,  it seems likely that we won't be assigning personhood to either washing machines or to AI that can't recognize the simplest objects out of some predefined context.   

 


On 8/30/2021 at 12:51 AM, wtf said:

You are asking a legal question though, and the answer to that is, "Whatever a court decides is the law."

True, but it's not unreasonable to ask. How unsatisfactory would it be to end a discussion about whether, or at what age, abortion should be legal, or about the legal status of various drugs, with "whatever a court says".

 

15 minutes ago, TheVat said:

Since the law at least tries to be grounded in some reality,  it seems likely that we won't be assigning personhood to either washing machines or to AI that can't recognize the simplest objects out of some predefined context.   

AGI is irrelevant to this particular discussion.  DABUS is far from AGI yet its legal personhood is still being discussed, and apparently granted, in some courts. With the development of narrow AI like AlphaFold, which vastly improves previous attempts to model protein folding with implications in drug discovery, these legal discussions will likely (and rightly in my opinion) become more frequent regardless of AGI development. Given the glacial speed of politics and law, starting the discussion before it becomes a pressing matter seems prudent.

Regarding AGI, that blog was a bit vague. Here, about 350 ML researchers were surveyed regarding the estimated timeline of human-level machine intelligence development. Granted, it might be something like fusion power (another 20 years, right?), but it's the most thorough guess I've seen kicking around.

 

On 8/30/2021 at 2:24 AM, Alex_Krycek said:

We are also discussing some related questions, such as the capabilities of the DABUS system Thaler has created, and whether or not it lives up to his extraordinary claims.

I found it incredibly hard to find out anything about the actual architecture of this model. This is the closest I could find, and it's short on detail. I very much doubt it's anywhere near GPT-3, AlphaFold, or Tesla's self-driving architecture. It sounds more like a gimmick to start a discussion.


15 hours ago, Prometheus said:

I found it incredibly hard to find out anything about the actual architecture of this model. This is the closest I could find, and it's short on detail. I very much doubt it's anywhere near GPT-3, AlphaFold, or Tesla's self-driving architecture. It sounds more like a gimmick to start a discussion.

I'm a little bit more cynical than that.  When you're talking to investors, being able to tell them that your black box is "legally recognized as an autonomous inventor" is pretty persuasive.  It's a savvy business move, if nothing else.   

