
When do AIs become moral agents?


ydoaPs


Very tricky.

How about this: when an AI performs an action it has not been explicitly coded to perform, it is accountable for that action; otherwise the coder is accountable.

Though to interpret exactly what 'explicitly' means we may have to infer the intentions of the original humans (or AI) from their code.


Maybe, but do we care? We do the same thing every day with our food supply... "deactivating" cattle and chickens and chicken eggs ad infinitum. At least with them the question of whether that's life we're terminating is more obvious, which lets us avoid getting hung up on new ethical questions, but our approach to and treatment of those animals also, IMO, suggests we don't care. If anything, it's easier not to care with AI.


11 minutes ago, ydoaPs said:

The original publication of the Facebook study only mentioned:

Quote

Firstly, instead of training to optimise likelihood, we show that our agents can be considerably improved using self play, in which pre-trained models practice negotiating with each other in order to optimise performance. To avoid the models diverging from human language, we interleave reinforcement learning updates with supervised updates. For the first time, we show that end-to-end dialogue agents trained using reinforcement learning outperform their supervised counterparts in negotiations with humans.

That suggests to me that the coders anticipated the AI could develop its own language. In this case I would say the coders are accountable for the AI's actions because it was anticipated, even if not explicitly added.
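(As an aside, for anyone curious what that "interleaving" looks like in practice, here is a minimal toy sketch of the idea, assuming PyTorch. It is my own illustration, not the Facebook/FAIR code; the model, the "dialogue state", the update ratio, and the reward are all placeholders.)

```python
# Toy sketch of the scheme the quoted passage describes: interleave
# reinforcement-learning (self-play) updates with supervised updates so the
# model does not drift away from human language. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 100                                   # hypothetical vocabulary size
policy = nn.Linear(VOCAB, VOCAB)              # stand-in for a dialogue model
optimizer = torch.optim.SGD(policy.parameters(), lr=0.01)

def supervised_update(context, human_next_token):
    """Maximise likelihood of the human's next token (keeps the language human-like)."""
    loss = F.cross_entropy(policy(context), human_next_token)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def reinforcement_update(context, reward):
    """REINFORCE-style update; the reward would come from a self-play negotiation outcome."""
    dist = torch.distributions.Categorical(logits=policy(context))
    action = dist.sample()
    loss = -(dist.log_prob(action) * reward).mean()   # reinforce rewarded tokens
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Interleave the two kinds of update, as the paper describes.
for step in range(1000):
    context = torch.randn(1, VOCAB)                        # placeholder dialogue state
    if step % 4 == 0:                                      # e.g. one supervised update per three RL updates
        supervised_update(context, torch.randint(VOCAB, (1,)))
    else:
        reinforcement_update(context, reward=torch.rand(1))
```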

 

The implied question of whether a moral agent is necessarily alive is slightly different from the question of when an AI becomes a moral agent. I can't imagine a moral agent that isn't alive, but then the Ganges was briefly awarded the same legal rights as a human, so these things get too weird for me to wrap my mind around.

 


1 minute ago, Prometheus said:

 

That suggests to me that the coders anticipated the AI could develop its own language. In this case I would say the coders are accountable for the AI's actions because it was anticipated, even if not explicitly added.

 

But anticipation wasn't your criterion; explicitly being coded was.


3 hours ago, ydoaPs said:

What features would be required in an AI for us to include it in our moral sphere of consideration?

Facebook just shut down an AI. With what minimum features of the AI would that be immoral?

We will consider robot morality more than once. For now, when a person is injured or killed with an AI present, or with an AI responsible for the injury or killing, the legal system will consider who or what is responsible and whether a crime occurred. These events are televised, at least locally, and some viewers will consider the moral and ethical issues.

In addition, there are people considering the ethical issues related to robotics as development occurs; in other words, it is happening now. There are a number of YouTube videos on the subject.

At some point AI will be taught philosophy and religions, including ethics and morals; then we will be able to include an AI in conversation about morals and ethics at the pub while we have a beer.


5 minutes ago, ydoaPs said:

But anticipation wasn't your criterion; explicitly being coded was.

That's why I said:

 

1 hour ago, Prometheus said:

Though to interpret exactly what 'explicitly' means we may have to infer the intentions of the original humans (or AI) from their code.

Change 'intentions' to 'anticipated', or add it; I'd never make a lawyer. The AI behaved in a way the coders expected, therefore they are accountable.


"With what minimum features of the AI would that be immoral?" AIs might be programmed with ethics or they may learn it by reading. If programmed, is the programmer or the company that programmer worked for culpable of wrong doing by the AI or is it the AI. In this case, I'd think the company would be responsible. But, I'm not a lawyer. Training includes two extremes, training on prepared data and training by reading the internet and personal interactions between robot and humans. Ill prepared training materials may make the person who prepared the lesson or their employer. If the robot reads the internet to learn ethics, then it would seem more difficult to blame a person. Although, I suppose a judge might rule the training to be inadequate and make a person responsible. Otherwise, either an AI or no one and no thing is responsible. It seems incorrect to blame the AI unless it is conscious, but currently there is no test for consciousness. Thus, it seems plausible that injury or death from and AI may be ruled as natural causes.


Unless I misread... this is less about (let's say) a self-driving car being forced to choose between running over an infant or a grandmother and more about us choosing to remove the battery from that self-driving car... if there's some threshold capability where that battery removal becomes an unethical form of murder. 


1 hour ago, iNow said:

Unless I misread... this is less about (let's say) a self-driving car being forced to choose between running over an infant or a grandmother and more about us choosing to remove the battery from that self-driving car... if there's some threshold capability where that battery removal becomes an unethical form of murder. 

Probably more of a fuzzy gradient than a sharp line, but that's the idea. 


6 hours ago, iNow said:

Unless I misread... this is less about (let's say) a self-driving car being forced to choose between running over an infant or a grandmother and more about us choosing to remove the battery from that self-driving car... if there's some threshold capability where that battery removal becomes an unethical form of murder. 

   Except powering down an AI (removing the battery from a self-driving car) is not the same as death for a human. A more accurate equivalent would be anesthetizing a person or inducing a coma; they can be brought back. One would need to erase the AI's program to kill it (or physically destroy the memory it is stored on).

   Would reprogramming the AI to prevent it from committing the crime again be murder? We might be altering the AI's 'personality', which many would not consider murder, and it is a procedure we have contemplated performing on human criminals (if we could). The "death of personality" of a human is a moral quandary that AFAIK has not been solved yet.

   Whether human or AI, it is no longer the same personality. So far, though, even abrupt accident-induced changes in a human's personality have never led to recognition of a new person. The idea of a tumor or other disease absolving a person of criminal liability has appeared in TV shows, but I don't know whether that happens in real life. Would we treat the lines of code which allowed the AI to commit a crime like a tumor and just excise them?

   We don't include our nearest relatives in the animal kingdom "in our moral sphere of consideration". What features would be required in an ape for us to include them in our moral sphere of consideration? If we can answer that, we'll have a better understanding of how to answer ydoaPs' original question regarding AIs.

*

   Currently we do not consider any AI or planned AI as something to treat humanely; we are proceeding to try to recreate a human mind within a computer. Yet consider for a moment how alien that existence would be: no eyes, ears or voice of its own but millions of borrowed ones; no sense of touch, balance, taste or smell. So whether it is an 'uploaded' human mind or an artificially created analog of a human mind, we would need to create an artificial environment in the computer for it to live in, otherwise it is almost guaranteed to go insane.

   I think our best route for an AI is to create something that is native to a computer but interdependent with humans, with a carefully crafted set of ethics programmed into it.


  • 3 weeks later...
On 8/1/2017 at 5:30 PM, Prometheus said:

That's why I said:

 

Change 'intentions' to 'anticipated', or add it; I'd never make a lawyer. The AI behaved in a way the coders expected, therefore they are accountable.

If I am incompetent and build a shoddy house, not realizing that it won't stand for long, does that make the house responsible for its own collapse and absolve me of all accountability?


On 8/1/2017 at 7:20 PM, ydoaPs said:

What features would be required in an AI for us to include it in our moral sphere of consideration?

Facebook just shut down an AI. With what minimum features of the AI would that be immoral?

 

You would have to start with a definition of morality or immorality. I would have to google what it even means.

It means something different to everybody.

For some people walking topless is immoral (according to their own personal definition).

For some other people, walking without a niqāb or burqa is immoral.

(Repeat the question 7.5 billion times and you will have all the data, and it changes from century to century.)

 

If a versatile AI were created and heard that somebody considers walking topless, or without a face covering, to be immoral, it would probably conclude that humans are nuts.

 

You would have to explain to your AI why walking naked in your own apartment/house/garden is "moral" while doing the same outside, in some places, is "immoral" (and it's not constant, as some other people have no problem going topless on the street in e.g. Africa or South America).

[Attached image: Topless.jpg]


1 hour ago, Delta1212 said:

If I am incompetent and build a shoddy house, not realizing that it won't stand for long, does that make the house responsible for its own collapse and absolve me of all accountability?

I know very little about law (and less about ethics), but I take it that intent is important. Someone deliberately building a dangerous house could be tried for murder, whereas an incompetent builder might only face manslaughter.

In the case of AI, if the programmer deliberately coded something dangerous, they would be culpable. If they coded something not initially dangerous but which they knew could alter itself in unpredictable ways, they would still be culpable (but maybe less so?). If they coded something that altered itself in a way no competent programmer could ever predict, then I don't think they are culpable.

We know how to build good houses, so a shoddily built one reflects (criminally) poor workmanship. Getting AI to do some of the things we are asking of it, though, is not such a known quantity; mistakes may reflect neither malice nor incompetence.


On 8/1/2017 at 10:11 PM, Prometheus said:

How about this: when an AI performs an action it has not been explicitly coded to perform, it is accountable for that action; otherwise the coder is accountable.

A truly versatile AI is never "explicitly coded" to do or not do something. It has to learn from the environment in which it "grows up". If you put a fresh new AI in a Nazi camp, you would get an AI believing in Hitler, the supremacy of the white race, humans and sub-humans, etc.

The same fresh new AI placed in a liberal family would learn exactly the reverse.
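(A toy illustration of that point, my own sketch rather than anything from a real AI system: the same learning code, never explicitly coded with any particular judgement, ends up with opposite "beliefs" depending purely on the environment it is raised in.)

```python
# Toy illustration: identical learning code, opposite learned "beliefs",
# determined entirely by the environment (training examples) it is exposed to.
from collections import Counter

def learn_judgement(environment):
    """Naive learner: adopt whichever judgement of the act it observes most often."""
    return Counter(environment).most_common(1)[0][0]

environment_a = ["the act is acceptable"] * 90 + ["the act is wrong"] * 10
environment_b = ["the act is wrong"] * 90 + ["the act is acceptable"] * 10

print(learn_judgement(environment_a))   # -> "the act is acceptable"
print(learn_judgement(environment_b))   # -> "the act is wrong"
```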

 


9 hours ago, Sensei said:

A truly versatile AI is never "explicitly coded" to do or not do something. It has to learn from the environment in which it "grows up". If you put a fresh new AI in a Nazi camp, you would get an AI believing in Hitler, the supremacy of the white race, humans and sub-humans, etc.

The same fresh new AI placed in a liberal family would learn exactly the reverse.

 

That's pretty much the same as humans then, isn't it? And we say humans (adults) are morally responsible regardless of their upbringing.


8 hours ago, Prometheus said:

That's pretty much the same as humans then, isn't it? And we say humans (adults) are morally responsible regardless of their upbringing.

It's also the same as a dog. Are dogs that are taught to be vicious morally responsible for their viciousness, or is the person who taught them to be that way the responsible party?


3 hours ago, Delta1212 said:

It's also the same as a dog. Are dogs that are taught to be vicious morally responsible for their viciousness, or is the person who taught them to be that way the responsible party?

Good point. Dangerous dogs can't fall back on 'I was raised poorly' and are killed. The owner may incur some penalty but the dog pays the ultimate price. Can humans use poor upbringing as a mitigating circumstance for their crimes?


  • 2 weeks later...
On 8/19/2017 at 3:15 PM, Prometheus said:

Good point. Dangerous dogs can't fall back on 'I was raised poorly' and are killed. The owner may incur some penalty but the dog pays the ultimate price. Can humans use poor upbringing as a mitigating circumstance for their crimes?

   If they have a skilled lawyer, they do. Biology, too, is sometimes used as an excuse. It has become an issue of money, of the most skilled lawyer the defendant can afford. Unless it has already amassed a fortune to buy the services of a very skilled lawyer, an artificial sapience (AS) might take the time to learn the legal system and help craft its own defense, depending upon how intelligent the AS is and how basic the lawyer is.

   An AS will likely find humanity to be unethical even if most people are moral, drawing on the distinction that morals are based on belief (usually religion) while ethics are based on logic and reason. This is a real, even if not commonly discussed, distinction: religious people declare atheists to be immoral, while business and medicine concern themselves with ethics (not morals). Unless the AS is designed to be very human-socialized, I expect it will behave in what most people would consider a haughty manner: not deferring to people who believe themselves to be correct, even telling people they are wrong and delusional when they say something other humans would let slide.

   Just to make clear what I mean by "human-socialized", consider dogs and chimpanzees. Chimpanzees are demonstrably more intelligent than dogs, while dogs are demonstrably more human-socialized than chimpanzees: they can recognize what a person pointing means, they respond to our vocal tones and moods, etc. Dogs might be pets, but they are pampered and bequeathed inheritances. We also have more legal protections for dogs than for chimpanzees, AFAIK.

   I've been in more than one discussion about the droids from Star Wars: how they seem sapient yet have no rights. But thinking about them again, now, I can see that they are programmed to be very 'human'-socialized but almost none of them have any real independent thought or creativity – traits I deem essential to consider a being as sapient. R2-D2 seems to have the most thought and independence of all the droids while the rest are a qualitative step below it from what I've observed.

   I suspect that the first AS to receive legal protections will be one programmed to be human-socialized. We should start crafting a legal framework – protections, rights, responsibilities, etc. – for AS before we create sapient robots and programs. But, as with all technology, it will likely only be settled after something happens.

   We need to be very careful when creating a versatile AI, and assume it will become an AS on its own, because if we don't we could create something dangerous. Not movie-dangerous (Skynet from the "Terminator" series) but something far more insidious (à la Samaritan from the TV show "Person of Interest"). I'm hoping the designers are bright enough to be building constraints into any AI they are programming right now.

