Rational basis for morality?


Schrödinger's hat


I haven't fully fleshed out this idea, but here goes -- help is welcome, as are arguments against Hank being capable of something we'd recognise as morality.

 

 

Assume there is a sentient entity that values rationality above all else, i.e. it will not act unless it believes its actions are either trivial in consequence or rationally justified; call it Hank.

 

Hank has some other values, things that he wants to do, or likes and dislikes.

He also observes other entities that claim to have a sense of self, much like his. They also communicate that they have values and desires. These claims seem credible, or at least as credible as his own claims of consciousness and desires. Some of these desires conflict.

 

For some actions it will be true that Hank would say 'I want to do this' and some of the other entities would say 'I want you to not do that'.

 

In this case, is there any rational reason for Hank to act on his own desire over the desires of the others? -- I would answer no to this.

Is this sufficient to say that Hank must weigh the claims and values of the other entities against his own before acting, or must something akin to the principle of mediocrity be invoked?

 

Suppose all the entities share a value which they rank highest (with the exception of the one acting rationally), such as 'I desire not to be murdered' or 'I wish to know where all of the 17 grey pebbles are'. Can Hank ever justify murdering someone or hiding grey pebbles?

 

I suspect that there is a missing premise here, like an ordering principle that can be applied to values of different entities. If Hank knew he valued hiding pebbles more than all the other entities (individually? collectively? does this matter?) valued knowing where they were, I think he would be justified in hiding them.

 

The alternatives seem to be some global utility function, or something akin to empathy (i.e. somehow knowing which of the other entities' values mattered more or less than your own). Both of these are somewhat arbitrary and do not follow from the premise that Hank is rational without additional premises.
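To make the 'global utility function' alternative concrete, here is a minimal sketch in Python. The entities A and B, the candidate world-states and every number are invented for illustration; the sketch presupposes that the values of different entities can be placed on one common numeric scale, which is exactly the extra premise that does not follow from Hank being rational.

# Minimal sketch of the 'global utility function' alternative.
# All numbers are assumptions: placing different entities' values on
# one common scale is the missing premise, not something derived here.

def global_utility(world_state, entities):
    # Sum each entity's own valuation of the proposed world state.
    return sum(entity["values"](world_state) for entity in entities)

# Toy data: Hank mildly enjoys hiding pebbles; two other entities
# strongly value knowing where the pebbles are.
hank = {"name": "Hank", "values": lambda s: 2 if s == "pebbles hidden" else 0}
others = [
    {"name": "A", "values": lambda s: 5 if s == "pebbles visible" else 0},
    {"name": "B", "values": lambda s: 5 if s == "pebbles visible" else 0},
]

candidates = ["pebbles hidden", "pebbles visible"]
best = max(candidates, key=lambda s: global_utility(s, [hank] + others))
print(best)  # -> 'pebbles visible' with these assumed numbers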


I think it can be asserted that morality is due to rational thought combined with empathy. If you understand that an action is not something you would want done to you, then rationally you wouldn't do it. Morality is not complex to me: simply do not do anything to others you would not want done to you.


I touch on the origin of ethics here (continuing in following posts), and the thread is worth at least skimming...

 

To 'rationally justify' an action requires the prediction of an outcome, which is pretty tricky for non-psychics. We generally have to assume the most likely outcome, while also being prepared for less likely ones. Because this process is so imperfect, we're forced to generate rules of behavior which we define as 'right and wrong'. These rules come from a statistical evaluation of the outcomes from a certain action. Take murder-- people are frightened they might be next, and tend towards preemptive counteraction. Now, this counteraction may itself be murder of the original murderer. This could be viewed as a perpetual cycle, but the 2nd murder is viewed as justified, because it both prevents the initial murderer from murdering again, and also sets a precedent that unjustified murderers will be murdered, serving to prevent future unjustified murders among all who are informed of the now-dubbed execution. People will still fear being executed, but they understand how to avoid it.

 

Hank can rationally justify actions so long as he can both cope with uncertainty and predict generalized outcomes of a type of action. He needn't weigh his values against those of others, but rather he must weigh his intrinsic reward from an action against all possible reactions with regard to probability while accounting for the effect of the precedent that action sets. If he takes all the gray pebbles and throws them at people while they're sleeping, earning himself the sadistic thrills of sneaking and pestering, he'll have to accept the possibilities of retaliation, should he be caught, and of any guilt, should anyone be hurt badly, and of getting pelted with green pebbles in his sleep by some creative copycat pebble-pitcher.
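To put rough numbers on the kind of weighing described above, here is a sketch of the expected-value calculation for the pebble-throwing example. The probabilities, the intrinsic reward and the cost of the precedent are all invented purely for illustration.

# Sketch of weighing intrinsic reward against probability-weighted
# reactions, plus a term for the precedent the action sets.
# Every figure below is an invented, illustrative assumption.

intrinsic_reward = 3.0          # the thrill of sneaking and pestering

possible_reactions = [
    # (probability, cost to Hank if it happens)
    (0.30, -4.0),   # retaliation if he is caught
    (0.10, -6.0),   # guilt if someone is hurt badly
    (0.20, -2.0),   # a copycat pelts him with green pebbles
]

precedent_cost = -1.5           # making pebble-throwing 'normal' invites more of it

expected_value = (intrinsic_reward
                  + sum(p * cost for p, cost in possible_reactions)
                  + precedent_cost)

print(expected_value)  # negative here, so with these assumed numbers he abstains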



There are no such things as right or wrong; they are only apparent and don't exist in reality. If Hank chose what he had to choose, and the other entities chose what they had to choose, then both are equally right in their actions. The world is just a play and it is full of providence. We don't have free will.


I think rational action is a poor standard for morality because rational action can generally be defined as action for the satisfaction of some desire of the acting agent. This, to me at least, seems a poor, even cavalier, starting point for a basis of morality because human desire is arguably infinite. It also then gets us involved in the discussion of 'what desires are valid?', and, ergo, into smoke and mirrors because the desires in the first place 'must be rational to be valid'. We enter a loop we can't get out of it seems.   

 

Personally I think a better starting point is to assume that there's no confusion with morality per se, but only with what we do with it. So rather than ask what morality is, or how we define an individual act as moral or not, we should rather assume morality as a given (at least for social arguments concerning it), and then ask where we want our morals and ethics to lead us. The question is an individual one as much as it is a collective one, but I think this lets us phrase the dilemma better; rather than 'what is moral?' it is instead 'what should be moral?' and why.

 

As an aside, I'm not convinced of my own argument, but thought I'd say it anyway.

Edited by Vent

Moontanman:

Thanks. This is close to the direction I was trying to take this.

I'm well aware that ethics can be built easily starting from rationality+empathy. I would also generalise the empathy concept a bit further. 'Do unto others as you would have them do unto you' assumes a level of commonality in the values or utility functions of other people.

'Consider the values of others (where they appear to be internally consistent) to be as important as your own before deciding on an optimization condition' would be a more general version -- a sort of meta-empathy.

 

I can get to this starting from being an empathic creature (i.e. I already value valuing other people's values), but I am trying to figure out whether Hank could get here -- or at least see if he can find a way to not-empathy (I suspect he might become paralysed into inaction).

Morals are relative; what to regard and disregard is your choice.

Questionposter: Morals are not entirely arbitrary. I agree that they have an individual component, and are generally calibrated by society -- but this may be because we are imperfect reasoners, or even because we are irrational.

If I say 'We should aim for a state of affairs where everyone, everywhere has maximum dis-utility1 and minimum utility' (i.e. suffering as badly as possible, not just in terms of pain or torture, but in terms of the thing they would think is worst), this is obviously an absurd goal.

At least it seems obvious and objective; can anyone think of a potential counter-argument?

 

Long post: Putting quotes in spoiler tags for brevity.

 

 

I touch on the origin of ethics here (continuing in following posts), and the thread is worth at least skimming...

 

To 'rationally justify' an action requires the prediction of an outcome, which is pretty tricky for non-psychics. We generally have to assume the most likely outcome, while also being prepared for less likely ones. Because this process is so imperfect, we're forced to generate rules of behavior which we define as 'right and wrong'. These rules come from a statistical evaluation of the outcomes from a certain action. Take murder-- people are frightened they might be next, and tend towards preemptive counteraction. Now, this counteraction may itself be murder of the original murderer. This could be viewed as a perpetual cycle, but the 2nd murder is viewed as justified, because it both prevents the initial murderer from murdering again, and also sets a precedent that unjustified murderers will be murdered, serving to prevent future unjustified murders among all who are informed of the now-dubbed execution. People will still fear being executed, but they understand how to avoid it.

 

Hank can rationally justify actions so long as he can both cope with uncertainty and predict generalized outcomes of a type of action. He needn't weigh his values against those of others, but rather he must weigh his intrinsic reward from an action against all possible reactions with regard to probability while accounting for the effect of the precedent that action sets. If he takes all the gray pebbles and throws them at people while they're sleeping, earning himself the sadistic thrills of sneaking and pestering, he'll have to accept the possibilities of retaliation, should he be caught, and of any guilt, should anyone be hurt badly, and of getting pelted with green pebbles in his sleep by some creative copycat pebble-pitcher.

 

 

Marqq, what you describe is selfishness or sociopathy coupled with social rules and/or laws, not morality.

It is also redundant to talk about a rationalist needing to act on subjective probabilities; no one can do anything else. They do not need to be psychic, only to think of probable outcomes.

 

You have also jumped a step ahead in saying that Hank can rationally seek to optimize his own values, disregarding the others. Please explain how Hank can get from 'I value x' to taking a course of action which optimizes for x whilst disregarding all the other things which are valued.

 

The bit about the pebbles was meant to illustrate that other people's values can be different or even alien to our own. This does not make them invalid. You are also layering on many human values that are consequences of biology and society, and mixing the question of 'does this action suit my optimization criteria' with 'is this a suitable set of optimization criteria'.

 

Immortal: I do not wish to discuss free will or moral relativity in this thread. Only the possible links between 'I value these things', 'Other entities value some other things' and empathy.

 

 

I think rational action is a poor standard for morality because rational action can generally be defined as action for the satisfaction of some desire of the acting agent. This, to me at least, seems a poor, even cavalier, starting point for a basis of morality because human desire is arguably infinite. It also then gets us involved in the discussion of 'what desires are valid?', and, ergo, into smoke and mirrors because the desires in the first place 'must be rational to be valid'. We enter a loop we can't get out of it seems.

 

Personally I think a better starting point is to assume that there's no confusion with morality per se, but only with what we do with it. So rather than ask what morality is, or how we define an individual act as moral or not, we should rather assume morality as a given (at least for social arguments concerning it), and then ask where we want our morals and ethics to lead us. The question is an individual one as much as it is a collective one, but I think this lets us phrase the dilemma better; rather than 'what is moral?' it is instead 'what should be moral?' and why.

 

As an aside, I'm not convinced of my own argument, but thought I'd say it anyway.

 

 

Vent, the second part of your post just shifts from an arbitrary morality to an arbitrary meta-morality. I believe valuing empathy highly is a sufficient basis (meta-morality) on which to build something closely resembling accepted notions of right and wrong.

I am trying to decouple my own empathy from logic in order to see what is plugged in where.

Rational action is an acceptable basis for morality once suitable optimization criteria are established. You can pick an arbitrary utility function (even a subjective mix of global happiness, suffering directly caused and personal freedom) and maximize it. With empathy as a given you can even select a utility function that closely resembles the common notion of 'right'.
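As a toy illustration of 'pick a utility function and maximize it': in the sketch below the weights on happiness, suffering and freedom are the subjective, arbitrary part, and the maximization is the purely rational part. The candidate outcomes and all numbers are invented.

# Sketch of maximizing an arbitrary (subjectively weighted) utility
# function over global happiness, suffering caused and personal freedom.

def utility(outcome, w_happiness=1.0, w_suffering=2.0, w_freedom=0.5):
    return (w_happiness * outcome["global_happiness"]
            - w_suffering * outcome["suffering_caused"]
            + w_freedom * outcome["personal_freedom"])

# Hypothetical consequences of two candidate actions.
outcomes = {
    "act":     {"global_happiness": 4, "suffering_caused": 1, "personal_freedom": 3},
    "abstain": {"global_happiness": 3, "suffering_caused": 0, "personal_freedom": 2},
}

best_action = max(outcomes, key=lambda a: utility(outcomes[a]))
print(best_action)  # -> 'abstain' with these particular weights and numbers

Choosing a different set of weights can flip the answer, which is the sense in which the chosen utility function, not the maximization, carries the moral content.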

 

 

Other notes:

I do not want to discuss how to get from rationality+empathy to a viable system of ethics yet (or whether or not you can). I believe the weakest link is rationality->empathy. If this fails then rationality->rationality+empathy->ethics/morality automatically fails, and so it makes no sense to discuss it until the first is reasonably settled.

 

Perhaps I should have titled the thread Rational basis for empathy. Answer on the basis that this is the title, at least until we have discussed it further.

 

1 Local utility functions -- what each individual values most/least. Calibrating a global utility function is part of the problem I wish to discuss.

Edited by Schrödinger's hat

I believe Paul Harvey summed it up best almost fifty years ago. I would think of him as more of a Seerologist had he not brought God and Satan into the mix.

Morally, today it's just "don't get caught". Remember, this was broadcast in 1965.

Edited by rigney

Rigney, that was completely off topic, and you quoted the entirety of a very long post which had little to no content relevant to your post.

The only thing it might have been related to is Marqq's post which I had already addressed in a less tangential fashion.

Please keep quiet if you have nothing relevant to the discussion, and try to keep the signal/noise ratio reasonable.


Rigney, that was completely off topic, and you quoted the entirety of a very long post which had little to no content relevant to your post.

The only thing it might have been related to is Marqq's post which I had already addressed in a less tangential fashion.

Please keep quiet if you have nothing relevant to the discussion, and try to keep the signal/noise ratio reasonable.

 

Quote: Moontanman, March 2012

I think it can be asserted that morality is due to rational thought combined with empathy. If you understand that an action is not something you would want done to you, then rationally you wouldn't do it. Morality is not complex to me: simply do not do anything to others you would not want done to you.

 

I wasn't trying to wreck your train of thought or derail your intent. But when Moon went to the morality and empathy thing, I just thought 3 minutes of why the issue is in question at all might fit in. Paul Harvey and I both apologise.

Edited by rigney

I touch on the origin of ethics here (continuing in following posts), and the thread is worth at least skimming...

 

To 'rationally justify' an action requires the prediction of an outcome, which is pretty tricky for non-psychics. We generally have to assume the most likely outcome, while also being prepared for less likely ones. Because this process is so imperfect, we're forced to generate rules of behavior which we define as 'right and wrong'. These rules come from a statistical evaluation of the outcomes from a certain action. Take murder-- people are frightened they might be next, and tend towards preemptive counteraction. Now, this counteraction may itself be murder of the original murderer. This could be viewed as a perpetual cycle, but the 2nd murder is viewed as justified, because it both prevents the initial murderer from murdering again, and also sets a precedent that unjustified murderers will be murdered, serving to prevent future unjustified murders among all who are informed of the now-dubbed execution. People will still fear being executed, but they understand how to avoid it.

 

Hank can rationally justify actions so long as he can both cope with uncertainty and predict generalized outcomes of a type of action. He needn't weigh his values against those of others, but rather he must weigh his intrinsic reward from an action against all possible reactions with regard to probability while accounting for the effect of the precedent that action sets. If he takes all the gray pebbles and throws them at people while they're sleeping, earning himself the sadistic thrills of sneaking and pestering, he'll have to accept the possibilities of retaliation, should he be caught, and of any guilt, should anyone be hurt badly, and of getting pelted with green pebbles in his sleep by some creative copycat pebble-pitcher.

 

Marqq - your post is almost entirely teleological, based on the ends or predicted results of one's actions. Do you give any weight to the notion of a duty element - that some actions are morally compulsory because of the act in and of itself?

 


 

It seems Hank's first rule is easy: "I will do anything (and thus everything) I have a positive desire to do (and forbear from doing anything I have a negative desire to do) provided that I do not have a reasonable apprehension that it will cause harm, discomfort or unhappiness to another". If the proviso is not fulfilled then Hank has to get creative - and clearly there is no single answer.

 

You can posit second rules that are more or less interesting and relevant

"I will do that thing I like, even if it will harm another - as long as I will not be sanctioned"

"I will do that thing I like, even if it will harm another - and I will compensate those that have been harmed with money or favour"

"I will do that thing I like, even if it will harm another - provided that in a calculus of happiness, my positive feelings outweigh her negative"

"I will do that thing I like, even if it will harm another - if I believe that the importance of my action to our community overcomes quibbles from nimbys"

"I will do that thing I like, even if it will harm another - because it is the right thing to do"

"I will do that thing I like, even if it will harm another - everyone else does it"

"I will do that thing I like, even if it will harm another - as we, the majority, decided a that it was a good thing and objections could be ignored"

 

\edit format

Edited by imatfaal

It seems Hank's first rule is easy: "I will do anything (and thus everything) I have a positive desire to do (and forbear from doing anything I have a negative desire to do) provided that I do not have a reasonable apprehension that it will cause harm, discomfort or unhappiness to another". If the proviso is not fulfilled then Hank has to get creative - and clearly there is no single answer.

Hmm. So this is basically a 'do no harm' rule. This resonates well with what I think and what I've read (the minecart, lever, fat man scenario comes to mind).

This brings up the question of whether harm and lack-of-benefit (or utility and dis-utility) are actually distinct.

Another phrasing might be: Should Hank take a (potentially or knowably) sub-optimal action?

I believe the answer is yes, but I do not know why I believe this.

A relevant parable might be someone who starves to death at a banquet because they do not know which food is the best.

I also suspect we may have inserted utilitarianism too quickly.

 

What if the values (Hank's own, and others') lack any sort of norm or notion of positive/negative? Hank can rank or order his own values, but not those of others.

I suspect this may be the pendulum swinging towards the overly general. He could probably just ask the other entities 'do you mind if I do x'.
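To illustrate why the lack of a common scale matters, here is a sketch in which each entity can only rank outcomes. Any rule for combining the rankings (a Borda-style count is used here purely as an example) is itself an extra, somewhat arbitrary premise; the entities and their preferences are invented.

# Sketch of the 'no common scale' worry: each entity can only rank its
# own outcomes. The combination rule below (a Borda-style count) is one
# arbitrary choice among many, not something rationality dictates.

rankings = {
    # each entity lists outcomes from most to least preferred
    "Hank": ["pebbles hidden", "pebbles visible"],
    "A":    ["pebbles visible", "pebbles hidden"],
    "B":    ["pebbles visible", "pebbles hidden"],
}

def borda_scores(rankings):
    scores = {}
    for prefs in rankings.values():
        n = len(prefs)
        for position, outcome in enumerate(prefs):
            scores[outcome] = scores.get(outcome, 0) + (n - 1 - position)
    return scores

print(borda_scores(rankings))  # {'pebbles hidden': 1, 'pebbles visible': 2}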

 

(added numbers to quote for reference)

You can posit second rules that are more or less interesting and relevant

1."I will do that thing I like, even if it will harm another - as long as I will not be sanctioned"

2."I will do that thing I like, even if it will harm another - and I will compensate those that have been harmed with money or favour"

3."I will do that thing I like, even if it will harm another - provided that in a calculus of happiness, my positive feelings outweigh her negative"

4."I will do that thing I like, even if it will harm another - if I believe that the importance of my action to our community overcomes quibbles from nimbys"

5."I will do that thing I like, even if it will harm another - because it is the right thing to do"

6."I will do that thing I like, even if it will harm another - everyone else does it"

7."I will do that thing I like, even if it will harm another - as we, the majority, decided a that it was a good thing and objections could be ignored"

 

Hmm, these are certainly the more interesting questions. I think I see a few categories.

1, and possibly 4, are justified by self-interest. Not really worth pursuing that reason at this point.

2, 4, 6 and 7 are justified by consensus in one way or another -- 2 by the consensus of the individual, the others by that of the group.

3, 5, and possibly 4 require an additional layer. Some kind of empathy, global utility function, or another (possibly arbitrary) condition for 'rightness'. They could also be justified by conforming with Hank's values.

 

This raises an interesting thought about whether 'morality' is actually a set of distinct concepts. I shall ponder and post more later.


The answer is whatever each entity decides for itself. There's no "real" reason for one to do something over the other or not. And besides, why would they desire anyway? It seems irrational to desire and cause all of this trouble when, if you just get rid of desire, you don't suffer.


.... I already value valuing other people's values...

Does Hank? Or are you asking for a way to rationalize that value? Until the influence of others becomes significant to Hank as an extrinsic influence, that value is not rational.

 

Marqq, what you describe is selfishness or sociopathy coupled with social rules and/or laws, not morality.

It is also redundant to talk about a rationalist needing to act on subjective probabilities; no one can do anything else. They do not need to be psychic, only to think of probable outcomes.

You're assuming that morality exists apart from consequence (as though it were arbitrary). Even in duty ethics, one's obligations exist to place the burden of fault. Virtue ethics falls into similar consequentialism because each virtue can only rationally be identified by its effect.

I only went into acting based on predicted outcomes to point out the often-forgotten effect of the precedent set. In my example of murder being committed as execution, the precedent of righteous murder was a dangerous distinction, because others who justify it later might not be so rational. If murder is right sometimes, and people know it through precedent, it becomes an easier thing to rationalize. The 'psychic' comment was intended to further discourage actions that are considered wrong, even in extenuating circumstances where the action seems justified. Breaking a basic ethical rule always risks unpredictable consequences.

 

You have also jumped a step ahead in saying that Hank can rationally seek to optimize his own values, disregarding the others. Please explain how Hank can get from 'I value x' to taking a course of action which optimizes for x whilst disregarding all the other things which are valued.

I was actually assuming Hank was not an empathic entity. Is Hank a rational sociopath looking for a reason for morality/empathic considerations or is Hank really a human living in a structured society with values instilled in him arbitrarily by heredity and environmental stimuli?

 

Other notes:

I do not want to discuss how to get from rationality+empathy to a viable system of ethics yet (or whether or not you can). I believe the weakest link is rationality->empathy. If this fails then rationality->rationality+empathy->ethics/morality automatically fails, and so it makes no sense to discuss it until the first is reasonably settled.

 

Perhaps I should have titled the thread Rational basis for empathy. Answer on the basis that this is the title, at least until we have discussed it further.

Rationality + Social Environs (with emotional beings that tend toward reacting to pain/loss in kind against the source of pain/loss) + a Collective General Understanding of Values = Empathy as a necessary consideration and mechanism to avoid pain/loss through retaliation (In humans and many other life forms, this has become genetically prevalent as a survival mechanism -- de Waal 2008)

 

This is assuming Hank dislikes pain/loss.

 

Marqq - your post is almost entirely teleological, based on the ends or predicted results of one's actions. Do you give any weight to the notion of a duty element - that some actions are morally compulsory because of the act in and of itself?

Duty ethics is just egotistical utilitarianism. Deontologists say, "I am obliged to ____," because they want to be seen (by themselves and others) to be someone who acts responsibly, i.e., has integrity. The drive for integrity is an effect of social and internal pressure that is culturally instilled. Suppose you were the only living entity on earth; do you suppose you'd ever feel the drive toward or against any act, save for your own personal consequence?

 

The answer is whatever each entity decides for itself. There's no "real" reason for one to do something over the other or not. And besides, why would they desire anyway? It seems irrational to desire and cause all of this trouble when, if you just get rid of desire, you don't suffer.

"♪If you choose not to decide, you still have made a choice.♫" There will always be both internal and external influences that will cause you either pleasure or pain, unless you're dead (presumably). It's apparent that the desire for pleasure, or the lack of pain, could be removed, but then what would be the point to anything? Pleasure and pain are our intrinsic reasons for everything, and the basis of rationality, morality/ethics, empathy and survival. They're the only two things that actually drive us.

Edited by Marqq

Valuing empathy doesn't necessitate action, or give a reason to act one way or another in any situation when you find yourself with a particular disposition. Acting on empathy could actually often be irrational. What is the reason to value empathy? For it to be rational to be empathic we need a reason why we should value empathy.


 

 

"♪If you choose not to decide, you still have made a choice.♫" There will always be both internal and external influences that will cause you either pleasure or pain, unless you're dead (presumably). It's apparent that the desire for pleasure, or the lack of pain, could be removed, but then what would be the point to anything? Pleasure and pain are our intrinsic reasons for everything, and the basis of rationality, morality/ethics, empathy and survival. They're the only two things that actually drive us.

 

The universe doesn't seem to care about anything, so the point of anything is whatever we decide for ourselves.


Does Hank? Or are you asking for a way to rationalize that value? Until the influence of others becomes significant to Hank as an extrinsic influence, that value is not rational.

Hank does not, or at least he does not know whether that should fit under the heading of 'be rational' or possibly 'rationally select an optimization criterion'.

By the same principle, he is not yet self interested. He has not yet picked personal gain as an optimization criterion.

He has only noted that he likes some arrangements for the universe, and others like some different arrangements for the universe. He is trying to decide whether he can justify attempting to arrange the universe in the way that he likes, i.e. rank his values as more important than other values he encounters.

He has no explicit reason to think the things he likes are more important.

He is intelligent enough to discover what the values of the other entities are, so in this sense he has empathy.

 

You're assuming that morality exists apart from consequence (as though it were arbitrary). Even in duty ethics, one's obligations exist to place the burden of fault. Virtue ethics falls into similar consequentialism because each virtue can only rationally be identified by its effect.

I'm trying not to assume that arbitrary morals exist independently of an entity with those arbitrary morals. I think part of the issue may be the vagueness of the word 'morals'.

 

I was actually assuming Hank was not an empathic entity. Is Hank a rational sociopath looking for a reason for morality/empathic considerations or is Hank really a human living in a structured society with values instilled in him arbitrarily by heredity and environmental stimuli?

I am attempting to generalise; to throw away as many pieces of the puzzle as possible and see what can be built with the remaining ones. We can perhaps add a few pieces later if we find we cannot construct anything.

Hank is quite capable of detecting the values/emotions/etc. of the other entities (we can assume he's not being deceived by them for now), so in that sense he is empathic, but he does not intrinsically value optimizing for them. By the same token he has no reason to think he is special or overly deserving of having his values fulfilled.

 

 

Rationality + Social Environs (with emotional beings that tend toward reacting to pain/loss in kind against the source of pain/loss) + a Collective General Understanding of Values = Empathy as a necessary consideration and mechanism to avoid pain/loss through retaliation (In humans and many other life forms, this has become genetically prevalent as a survival mechanism -- de Waal 2008)

Yes, empathy was selected for by increasing survival rates of creatures (and relatives of creatures) that had it. It is not always necessary or sufficient to avoid pain/loss though.

An absurd but illustrative example would be a case where someone had everything they wanted/needed for themselves (but not a large surplus), in a world filled with people who were constantly in terrible pain and suffering. Their empathy will cause them great distress, and acting on it could lead to suffering (by depleting their resources).

 

This is assuming Hank dislikes pain/loss.

Let's call whatever it is he dislikes pain/loss.

 

 

 

Duty ethics is just egotistical utilitarianism. Deontologists say, "I am obliged to ____," because they want to be seen (by themselves and others) to be someone who acts responsibly, i.e., has integrity. The drive for integrity is an effect of social and internal pressure that is culturally instilled. Suppose you were the only living entity on earth; do you suppose you'd ever feel the drive toward or against any act, save for your own personal consequence?

 

"♪If you choose not to decide, you still have made a choice.♫" There will always be both internal and external influences that will cause you either pleasure or pain, unless you're dead (presumably). It's apparent that the desire for pleasure, or the lack of pain, could be removed, but then what would be the point to anything? Pleasure and pain are our intrinsic reasons for everything, and the basis of rationality, morality/ethics, empathy and survival. They're the only two things that actually drive us.

 

 

Yes, ethics and morals require other creatures... I'm not entirely sure of the point.

 

 

Valuing empathy doesn't necessitate action, or give a reason to act one way or another in any situation when you find yourself with a particular disposition. Acting on empathy could actually often be irrational. What is the reason to value empathy? For it to be rational to be empathic we need a reason why we should value empathy.

Hank can reason; one of the things he has to reason about before he will act is selecting a set of optimization criteria. If empathy were one of his values, something along the lines of 'the best compromise between everyone's values' would be a good optimization criterion.
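One way of cashing out 'the best compromise between everyone's values' is a maximin rule: pick the option whose worst-off entity is least badly off. This is only one possible reading, and the entities and satisfaction scores below are invented for illustration.

# Sketch of one possible 'best compromise' criterion: maximize the
# minimum satisfaction across entities (a maximin rule). The scores
# are invented; other readings (e.g. maximizing the sum) are possible.

satisfaction = {
    # option: each entity's satisfaction with that option
    "hide pebbles":  {"Hank": 0.8, "A": 0.1, "B": 0.2},
    "leave pebbles": {"Hank": 0.4, "A": 0.9, "B": 0.7},
}

best_compromise = max(satisfaction, key=lambda opt: min(satisfaction[opt].values()))
print(best_compromise)  # -> 'leave pebbles' with these assumed scores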

 

 

Questionposter: Yes, values are arbitrary, but you cannot just edit them on a whim. Reason can temper and alter them, especially when they start off inconsistent. 'There are some entities with some values' is a premise of this thread, so pointing out that they're arbitrary isn't very helpful.


Hank can reason; one of the things he has to reason about before he will act is selecting a set of optimization criteria. If empathy were one of his values, something along the lines of 'the best compromise between everyone's values' would be a good optimization criterion.

 

Yes, but above you said you "believed the weakest link was rationality -> empathy" and said that you think you should have titled the thread "rational basis for empathy". How do we get empathy from being rational?

 

Later you've taken empathy as a given in Hank and asked how "he can justify attempting to arrange the universe in the way that he likes", i.e. 'how does Hank justify his actions as moral when they are based on empathy?', but I didn't think we were discussing this, as I didn't think empathy was a given in Hank, but rather only rationality was.



 

I suspect that there is a missing premise here, like an ordering principle that can be applied to values of different entities. If Hank knew he valued hiding pebbles more than all the other entities (individually? collectively? does this matter?) valued knowing where they were, I think he would be justified in hiding them.

 

 

In all honesty, the values of the other entities may not even be considered by Hank unless we are assuming he has some sort of desire to take those values into his line of thought. His train of thought may be as simple as:

I want to do X.

I will do X.

 

or I will do X as long as nothing bad is going to happen to me.

 

However, this stage 1 Kohlberg approach may not ever be applicable to the purely rational being. The TV show Bones touched on this concept in the story arc involving the Gormogon character. A purely rational thinker is led, based solely on rational argument, to the conclusion that killing and eating people is the right thing to do, even though that character is aware of the possible consequences of that behavior.


For whatever moral basis there is, there is something else with an equal and opposite view. This is why no particular moral is right or wrong, and why it is ultimately up to free will. Let's say you feed a starving human. With what? A plant will require sacrificing its offspring or itself, or an animal gives up its life. In that instance, it is beneficial for one thing to live, and for the other it would be beneficial for that same thing to die.

Edited by questionposter

He can consider the power these other beings have to help him or to stop him from behaving the way he chooses, their power when they join forces, and how they have joined forces. Empathy is also an important factor, but even without it this power analysis complements the rational cost-benefit analysis in determining his morality.
