ydoaPs

When do AIs become moral agents?

22 posts in this topic

What features would be required in an AI for us to include it in our moral sphere of consideration?

Facebook just shut down an AI. With what minimum features of the AI would that be immoral?


Very tricky.

How about this: when an AI performs an action it has not been explicitly coded to do, it is accountable for that action; otherwise the coder is accountable.

Though to interpret exactly what 'explicitly' means we may have to infer the intentions of the original humans (or AI) from their code.


Maybe... when it asks you not to do it (or shows other signs that it does not want to be switched off, like fighting for its 'life').


Maybe, but do we care? We do the same thing every day with our food supply... "deactivating" cattle and chickens and chicken eggs ad infinitum. At least the question of whether or not that's life we're terminating is more obvious, which lets us avoid getting hung up on new ethical questions; but our approach to and treatment of those animals also, IMO, suggests we don't care. If anything, it's easier not to care with AI.

11 minutes ago, ydoaPs said:

The original publication of the Facebook study only mentioned:

Quote

Firstly, instead of training to optimise likelihood, we show that our agents can be considerably improved using self play, in which pre-trained models practice negotiating with each other in order to optimise performance. To avoid the models diverging from human language, we interleave reinforcement learning updates with supervised updates. For the first time, we show that end-to-end dialogue agents trained using reinforcement learning outperform their supervised counterparts in negotiations with humans.

That suggests to me that the coders anticipated the AI could develop its own language. In this case I would say the coders are accountable for the AI's actions because it was anticipated, even if not explicitly added.

 

The implied question of whether a moral agent is necessarily alive is slightly different from the question of when an AI becomes a moral agent. I can't imagine a moral agent not being alive, but then the Ganges was briefly awarded the same legal rights as a human, so these things get too weird for me to wrap my mind around.
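For what it's worth, the training scheme the quoted abstract describes can be sketched in miniature. This is a toy illustration under my own assumptions, not Facebook's actual code: a single scalar parameter stands in for the whole dialogue model, a "reward peak" stands in for negotiation performance, and a "human target" stands in for imitating human language. The point is just that alternating the two kinds of update keeps the model from drifting all the way to whatever maximises reward.

```python
LR = 0.1            # learning rate
HUMAN_TARGET = 1.0  # parameter value pure imitation of human data would reach
REWARD_PEAK = 3.0   # parameter value maximising the (toy) negotiation reward

def pure_rl(steps=200, theta=0.0):
    # RL-only training: theta drifts all the way to the reward peak,
    # the analogue of the agents diverging from human language.
    for _ in range(steps):
        theta -= LR * (theta - REWARD_PEAK)
    return theta

def interleaved(steps=200, theta=0.0):
    # Alternate RL steps with supervised (imitation) steps, as the
    # quoted passage describes; theta settles between the two targets,
    # staying "human-like" while still chasing reward.
    for t in range(steps):
        target = REWARD_PEAK if t % 2 == 0 else HUMAN_TARGET
        theta -= LR * (theta - target)
    return theta

print(pure_rl())      # ≈ 3.0: diverged to the reward peak
print(interleaved())  # ≈ 1.95: anchored between the two objectives
```

The supervised updates act as a regulariser here, which is why "developing its own language" under pure RL was a foreseeable failure mode rather than a surprise.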

 

1 minute ago, Prometheus said:

 

That suggests to me that the coders anticipated the AI could develop its own language. In this case I would say the coders are accountable for the AI's actions because it was anticipated, even if not explicitly added.

 

But anticipation wasn't your criterion; explicitly being coded was.

9 minutes ago, iNow said:

Maybe, but do we care? 

That's kind of the target of the thread: when should we care?

3 hours ago, ydoaPs said:

What features would be required in an AI for us to include it in our moral sphere of consideration?

Facebook just shut down an AI. With what minimum features of the AI would that be immoral?

We will consider robot morality more than once. Even now, when a person is injured or killed with an AI present, or with an AI responsible for the injury or killing, the legal system considers who or what is responsible and whether a crime occurred. These events are televised, at least locally, and some viewers will consider the moral and ethical issues.

In addition, there are people considering the ethical issues related to robotics as development occurs; in other words, it is happening now. There are a number of YouTube videos on the subject.

At some point AI will be taught philosophy and religions, including ethics and morals; then we will be able to include an AI in conversation about morals and ethics at the pub while we have a beer.

5 minutes ago, ydoaPs said:

But anticipation wasn't your criterion; explicitly being coded was.

That's why I said:

 

1 hour ago, Prometheus said:

Though to interpret exactly what 'explicitly' means we may have to infer the intentions of the original humans (or AI) from their code.

Change 'intentions' to 'anticipated', or add it. I'd never make a lawyer. The AI behaved in a way the coders expected, therefore they are accountable.


"With what minimum features of the AI would that be immoral?" AIs might be programmed with ethics, or they may learn them by reading. If programmed, is the programmer, or the company that programmer worked for, culpable for wrongdoing by the AI, or is it the AI itself? In this case, I'd think the company would be responsible. But I'm not a lawyer.

Training spans two extremes: training on prepared data, and training by reading the internet and through personal interactions between robot and humans. Ill-prepared training materials might make the person who prepared the lesson, or their employer, responsible. If the robot reads the internet to learn ethics, it would seem more difficult to blame a person, although I suppose a judge might rule the training inadequate and hold a person responsible. Otherwise, either the AI is responsible, or no one and nothing is. It seems incorrect to blame the AI unless it is conscious, but currently there is no test for consciousness. Thus it seems plausible that injury or death from an AI may be ruled death by natural causes.


Unless I misread... this is less about (let's say) a self-driving car being forced to choose between running over an infant or a grandmother and more about us choosing to remove the battery from that self-driving car... if there's some threshold capability where that battery removal becomes an unethical form of murder. 

1 hour ago, iNow said:

Unless I misread... this is less about (let's say) a self-driving car being forced to choose between running over an infant or a grandmother and more about us choosing to remove the battery from that self-driving car... if there's some threshold capability where that battery removal becomes an unethical form of murder. 

Probably more of a fuzzy gradient than a sharp line, but that's the idea. 

6 hours ago, iNow said:

Unless I misread... this is less about (let's say) a self-driving car being forced to choose between running over an infant or a grandmother and more about us choosing to remove the battery from that self-driving car... if there's some threshold capability where that battery removal becomes an unethical form of murder. 

   Except powering down an AI (removing the battery from a self-driving car) is not the same as death for a human. A more accurate equivalent would be anesthetizing or inducing a coma in a person - they can be restored to life. One would need to erase the AI's program to kill it (or physically destroy the memory it is stored on).

   Would reprogramming the AI to prevent it from committing the crime again be murder? We might be altering the AI's 'personality' which many would not consider murder. And it is a procedure we have contemplated doing to human criminals (if we could). The "death of personality" of a human is a moral quandary that AFAIK has not been solved yet.

   Either way, whether human or AI, it is no longer the same personality. Yet so far even abrupt, accident-induced changes in personality (in a human) have never constituted recognition of a new person. The idea of a tumor or other disease absolving a person of criminal liability has appeared in TV shows, but I don't know whether that happens in real life. Would we treat the lines of code which allowed the AI to commit a crime like a tumor and just excise them?

   We don't include our nearest relatives in the animal kingdom "in our moral sphere of consideration". What features would be required in an ape for us to include them in our moral sphere of consideration? If we can answer that, we'll have a better understanding of how to answer ydoaPs' original question regarding AIs.

*

   Currently we do not consider any AI or planned AI as something to treat humanely; we are proceeding to try to recreate a human mind within a computer. Yet consider for a moment how alien that existence would be: no eyes, ears or voice of its own but millions of borrowed ones, no sense of touch, balance, taste or smell. So whether an 'uploaded' human mind or an artificially created human mind analog, we need to create an artificial environment in the computer for it to live in, otherwise it is almost guaranteed to go insane.

   I think our best route for an AI is to create something that is native to a computer but interdependent with humans and having a carefully crafted set of ethics programmed into it.

On 8/1/2017 at 5:30 PM, Prometheus said:

That's why i said:

 

Change or add anticipated for intentions. I'd never make a lawyer. The AI behaved in a way the coders expected, therefore they are accountable.

If I am incompetent and build a shoddy house, not realizing that it won't stand for long, does that make the house responsible for its own collapse and absolve me of all accountability?

On 8/1/2017 at 7:20 PM, ydoaPs said:

What features would be required in an AI for us to include it in our moral sphere of consideration?

Facebook just shut down an AI. With what minimum features of the AI would that be immoral?

 

You would have to start with a definition of morality or immorality. I would have to google what it even means.

It means something different to everybody.

For some people, walking topless is immoral (according to their own personal definition).

For other people, walking without a Niqāb or Burqa is immoral.

(Repeat the question 7.5 billion times and you will have all the data, and it changes from century to century.)

If a versatile AI were created and heard that somebody considers walking topless, or without a face covering, immoral, it would probably conclude that humans are nuts.

You would have to explain to your AI why walking naked in your own apartment/house/garden is "moral" while doing the same outside, in some places, is "immoral" (and it's not constant: some people have no problem going topless on the street in, e.g., Africa or South America).


Edited by Sensei
1 hour ago, Delta1212 said:

If I am incompetent and build a shoddy house, not realizing that it won't stand for long, does that make the house responsible for its own collapse and absolve me of all accountability?

I know very little about law (and less about ethics), but I take it that intent is important. Someone deliberately building a dangerous house could be tried for murder, whereas an incompetent builder only for manslaughter.

In the case of AI, if the programmer deliberately coded something dangerous, they would be culpable. If they coded something not initially dangerous but which they knew could alter itself in unpredictable ways, they would still be culpable (though maybe less so?). If they coded something that altered itself in a way that no competent programmer could ever predict, then I don't think they are culpable.

We know how to build good houses, so a shoddily built one reflects (criminally) poor workmanship. Getting AI to do some of the things we are asking of it, though, is not such a known quantity; mistakes may reflect neither malice nor incompetence.


Great, great argumentation Sensei... +1... :)

(I myself have a simple stance that we can have morality issues when switching off an AI only if that AI has a will to live.)

On 8/1/2017 at 10:11 PM, Prometheus said:

How about when an AI performs an action it has not been explicitly coded to do it is accountable for that action? Otherwise the coder is accountable.

A truly versatile AI is never "explicitly coded" to do or not do something. An AI has to learn from the environment in which it "grows up". If you put a fresh new AI in a Nazi camp, you would get an AI believing in Hitler, the supremacy of the white race, humans and sub-humans, etc.

The same fresh new AI placed with a liberal family would learn exactly the reverse.

 

9 hours ago, Sensei said:

A truly versatile AI is never "explicitly coded" to do or not do something. An AI has to learn from the environment in which it "grows up". If you put a fresh new AI in a Nazi camp, you would get an AI believing in Hitler, the supremacy of the white race, humans and sub-humans, etc.

The same fresh new AI placed with a liberal family would learn exactly the reverse.

 

That's pretty much the same as humans then, isn't it? And we say humans (adults) are morally responsible regardless of their upbringing.

8 hours ago, Prometheus said:

That's pretty much the same as humans then isn't it? And we say humans (adults) are morally responsible regardless of their upbringing.

It's also the same as a dog. Are dogs that are taught to be vicious morally responsible for their viciousness, or is the person who taught them to be that way the responsible party?

3 hours ago, Delta1212 said:

It's also the same as a dog. Are dogs that are taught to be vicious morally responsible for their viciousness, or is the person who taught them to be that way the responsible party?

Good point. Dangerous dogs can't fall back on 'I was raised poorly' and are killed. The owner may incur some penalty but the dog pays the ultimate price. Can humans use poor upbringing as a mitigating circumstance for their crimes?

