
It's time to stop killing meat and start growing it


bascule


The meat most people eat comes from animals that have never seen sunlight, let alone green fields. They are also kept in pens so small they can't turn around, so I don't expect they do much walking either.


 

That's not true. It may be true of chicken, but it is not true of meat in general.


Mokele,

 

Good morning, Mokele :)

Within the confines of a logical ethical system, yes, though I still have doubts about the necessity of a logical system.

I'm not really sure why you would question whether ethics need to be logical, because anything follows from a logical contradiction, and so an inconsistent system can offer no rational reason why people should behave morally. (Of course, I'm just being a pedant, but I think it's really interesting the way you allow that moral systems can be internally inconsistent and contradictory, so that they could follow from logical or intuitional propositions, but at the same time you rejected some of my comments for being intuitional ;) )

 

But that's based on emotive reasoning; we recoil at those analogies not because there's anything wrong with killing babies (Try New Soylent Veal!!), but because we're programmed to be repulsed at killing babies.

 

Is killing babies really inherently wrong? What's wrong with the way the ancients did it, leaving unwanted or unviable kids for the wolves, other than our instinctual response against it?

 

This is what I mean by our morality arising from our evolution; your argument is convincing to a human, but if you told it to a sentient monitor lizard, it'd laugh (and probably eat a baby lizard for lunch).

 

So, weird as it sounds, what's wrong with killing babies?

I'm not using strictly emotive reasoning, because usually I find it's easiest to reason with people using the beliefs they already hold. Most people take it as an axiomatic truth that killing babies is morally wrong, so making a case for animal rights doesn't usually force me to explain why killing babies is wrong, but only to show that animals and babies are relevantly similar, so that the moral beliefs about babies extend to include animals. (And it's not often that I'm put on the spot to prove moral claims for monitor lizards.)

 

But generally, infants lack a huge number of morally relevant traits that we usually consider valuable, such as rationality and the capacity to make plans, aspire to goals, practice moral reciprocity, and so on. But at the very least, they have the capacity to feel pain, feel happiness, have an experiential welfare, and those kinds of experiences are valuable and worth protecting for just that much.

 

The preservation of experiential welfare is a good thing, so an infant's experiential welfare constrains a number of permissible actions you can do to the infant (e.g. because suffering has moral disvalue, you can't intentionally torture the infant). We're obligated to maximize the moral good we cause, so we should protect an infant's experiential welfare to the fullest extent, which means preserving its continued happiness and keeping the infant free from harm.

 

Because infants have a moral value determined by their experiential welfare, they have interests which can be weighed against the interests of others, where "interests" is an umbrella term for all of the morally relevant characteristics of a being. In this case, let's imagine a sadist who derives happiness from killing, and let's imagine that the sadist wants to kill a baby for kicks; the happiness that the sadist gets from killing has moral value, but destroying the experiential welfare of the infant has moral disvalue. So, let's take killing and weigh it according to some utilitarian principles, where we try to maximize the satisfaction of interests over the dissatisfaction of interests:

Let's say that preference satisfaction is an interest in much the same way that happiness is an interest, so preference satisfaction has intrinsic value, and preference dissatisfaction has intrinsic disvalue.

 

A killer's interest in killing is weighed against an infant's interest in continued existence and the preservation of its experiential welfare.

 

A killer derives happiness from killing, but the happiness from killing is fleeting and temporary, perhaps lasting only a few minutes. Because the satisfaction of an infant's interests logically depends on its continued existence, killing an infant dissatisfies all of the infant's interests in the most absolute sense, by reducing them down to nothingness; more importantly, the harm caused is permanent and unrecoverable. The killer's interests are satisfied in the most trivial sense, and the infant's interests are harmed in the most profound and absolute sense.

 

A killer derives unhappiness from being thwarted, but the unhappiness is fleeting and temporary; more importantly, the unhappiness is recoverable, because there are many things that could make the killer happy that don't involve killing, such as playing an Xbox or investing in the stock market. An infant isn't harmed at all by the killer being thwarted, and its interests remain satisfied; the killer's interests can still be satisfied by other means, so both the killer's and the infant's interests are satisfied.

(^^^^ so that's just a small piece of my very complicated moral philosophy, did you enjoy it? :) I don't mind conceding that, at the very least, it's hard to weigh interests precisely against one another, but we're rational creatures who have the capacity to make reasonable judgements and put ourselves in others' shoes, so we can weigh relative harms with a fair amount of certainty and minimize the harm we cause whenever possible.)

 

So, in this way, killing causes profoundly more harm than not killing, and we can ultimately satisfy more interests and maximize moral good by preventing people from killing babies. Ultimately, it comes down to a moral imperative to minimize the harm that we cause to beings. I don't really think it's practical to maximize the satisfaction of interests and give everyone a mansion and swimming pool and donate our entire net worth to charity; but it takes considerably less effort to minimize the harm we cause to beings, and it doesn't take much to avoid killing them whenever possible, and I think it's reasonable to presume that we are morally obliged to do at least that much.

 

There are only a few cases where killing babies would be justified, such as euthanasia for terminally ill infants. If an infant were born with a severe terminal illness, where it would suffer for months before eventually dying, then I think euthanasia is justified, because keeping the infant in a state like that would be unforgivably cruel.

 

 

 

Severian,

I think killing a fully grown chimpanzee is worse than killing a few week/month old human baby.

Are you trolling with that statement, or do you genuinely believe that?

Actually, I think Bascule was quoting Jeremy Bentham:

The day may come when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognized that the number of the legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps the faculty of discourse? But a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail? The question is not, Can they reason? nor, Can they talk? but, Can they suffer?

I agree with Bascule, and I'd be interested to hear on what basis you object to Bascule's comment.


I'm not really sure why you would question whether ethics need to be logical, because anything follows from a logical contradiction, and so an inconsistent system can offer no rational reason why people should behave morally.

 

Well, think of it like vision. We don't see the world around us perfectly. Aside from a pair of blind spots (thanks to the retina being on backwards), we really perceive only a small fraction of our true field of view, and the brain fills in most of the rest automatically. That's why camouflage makes someone literally invisible: if your brain doesn't register them as different from the background, it just 'paints over them' like it usually does. Movement causes you to focus on them, which breaks the illusion.

 

Anyhow, the point is that a perfect system would be a) too complicated and prone to damage, b) too expensive in terms of processing time and such, and c) only a marginal improvement.

 

I sort of see ethics the same way; we come with 'built in' ethics thanks to millions of years of life as a troop primate. More complex and perfect structures can work, but really, how much advantage do those systems offer? Most people use the factory-bundled nerveware and do pretty well; a few ****ups here and there, but generally OK.

 

Think of it like Linux vs Windows. The former is a LOT better, but a lot harder to use, and for the average Joe User, would offer only minimal advantages. So why switch? Is the occasional Blue Screen of Death worse than having to mount and unmount your damn drive every time you want to put a disk in?

 

But at the very least, they have the capacity to feel pain, feel happiness, have an experiential welfare, and those kinds of experiences are valuable and worth protecting for just that much.
The preservation of experiential welfare is a good thing

 

Why? We're back to underlying assumptions. Why is preservation of experiential welfare a good thing?

 

If we wanted to preserve experiential welfare, we'd abolish the IRS. ;)

 

A killer derives happiness from killing, but the happiness from killing is fleeting and temporary, perhaps lasting only a few minutes. Because the satisfaction of an infant's interests logically depends on its continued existence, killing an infant dissatisfies all of the infant's interests in the most absolute sense, by reducing them down to nothingness; more importantly, the harm caused is permanent and unrecoverable. The killer's interests are satisfied in the most trivial sense, and the infant's interests are harmed in the most profound and absolute sense.

 

But why is fleeting or temporary pleasure somehow less valuable? Take sex. Fleeting, but lots of people forgo longer-duration lesser pleasure for it. Can we truly judge the moral value of something merely by duration?

 

So, in this way, killing causes profoundly more harm than not killing, and we can ultimately satisfy more interests and maximize moral good by preventing people from killing babies. Ultimately, it comes down to a moral imperative to minimize the harm that we cause to beings.

 

But that's only a 1:1 comparison. What if one death reduces suffering among many? Say, if killing one baby would allow 8 people to extend their lives by 10 years (since 75 is the mean lifespan in the US anyway).

 

Mokele


Let me ask a question to Bascule and IMM: if you had to choose between the life of an adult chimpanzee and your own baby child, which would you choose?

 

I'd save my own child. However, whether or not such an action is moral is a different question entirely.


Mokele,

Well, think of it like vision. We don't see the world around us perfectly. Aside from a pair of blind spots (thanks to the retina being on backwards), we really perceive only a small fraction of our true field of view, and the brain fills in most of the rest automatically. That's why camouflage makes someone literally invisible: if your brain doesn't register them as different from the background, it just 'paints over them' like it usually does. Movement causes you to focus on them, which breaks the illusion.

 

Anyhow, the point is that a perfect system would be a) too complicated and prone to damage, b) too expensive in terms of processing time and such, and c) only a marginal improvement.

 

I sort of see ethics the same way; we come with 'built in' ethics thanks to millions of years of life as a troop primate. More complex and perfect structures can work, but really, how much advantage do those systems offer? Most people use the factory-bundled nerveware and do pretty well; a few ****ups here and there, but generally OK.

 

Think of it like Linux vs Windows. The former is a LOT better, but a lot harder to use, and for the average Joe User, would offer only minimal advantages. So why switch? Is the occasional Blue Screen of Death worse than having to mount and unmount your damn drive every time you want to put a disk in?

Please forgive me, but I think the point of everything you just said passed me by :)

 

Why? We're back to underlying assumptions. Why is preservation of experiential welfare a good thing?

Think about it: all the morally relevant characteristics a being has almost always refer to the being's mental and feeling capacities. For instance, the capacity to feel pain and pleasure, the capacity to be rational, the capacity to empathize with others, the capacity to seek long-term goals, and so on are direct statements about a being's mental and feeling capacities. As well, almost all moral actions you do to a being have to do with how you affect the being either directly or indirectly, which refers back to the being's mental and feeling experiences (which are lumped under the umbrella term "experiential welfare").

On top of that, all of the morally relevant capacities a being has depend directly on its continued existence, so respecting a being's experiential welfare is a prerequisite to respecting all of its other morally relevant characteristics.

 

I hope you don't take this as shifting the burden of moral discourse, but if you don't think preserving a being's experiential welfare is a valid moral principle, what's your alternative?

 

If we wanted to preserve experiential welfare, we'd abolish the IRS. ;)

Would you like to read some philosophers who argue exactly that? ;)

 

But why is fleeting or temporary pleasure somehow less valuable? Take sex. Fleeting, but lots of people forgo longer-duration lesser pleasure for it. Can we truly judge the moral value of something merely by duration?

You tell me, do you think it's worse to spank a child for 5 seconds or 5 hours? ;)

 

But in any case, the whole point with fleeting pleasures, in a utilitarian sense, is that a temporary pleasure doesn't produce as much good as a longer-lasting pleasure. If you don't mind a crude display, here's a chart showing the goodness of 4 pleasures, and how they aggregate with one another (aggregate pleasures are stacked on top of each other when they occur in the same time frame):

@ = pleasure1, occurs at time=0
# = pleasure2, occurs at time=4
$ = pleasure3, occurs at time=12
% = pleasure4, occurs at time=16


First case, where @, #, $, % last for about 5 seconds each


                                 }
                                 }
                                 }
                                 }  Aggregate good
                                 }
  ##           %                 }
@@@@@###    $$$$$%%%%             }
0    5    10   15   20   25   30     
           seconds



Second case, where @, #, $, % last for about 15 seconds each



                                 }
                                 }
                                 }
                                 }  Aggregate good
           $$$ %%                }
  ############$$$%%%%%%%%        }
@@@@@@@@@@@@@@@###$$$$$$$$%%%%%   }
0    5    10   15   20   25   30     
           seconds

You get a greater aggregate good, and a higher average good at any given time, for longer-lasting pleasures than for fleeting pleasures, so longer-lasting pleasures maximize moral good.
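
If it helps to make that arithmetic explicit, here's a rough Python sketch of the same idea (my own toy framing, not part of the charts above; the only assumption is that each pleasure contributes one unit of good per second it lasts):

def total_and_average_good(pleasures, window=30):
    # pleasures is a list of (start_time, duration) pairs; total good is the
    # area under the stacked chart, average good is that area over the window
    total = sum(duration for _, duration in pleasures)
    return total, total / window

short_pleasures = [(0, 5), (4, 5), (12, 5), (16, 5)]      # first case: ~5 seconds each
long_pleasures = [(0, 15), (4, 15), (12, 15), (16, 15)]   # second case: ~15 seconds each

print(total_and_average_good(short_pleasures))  # roughly (20, 0.67): less total and average good
print(total_and_average_good(long_pleasures))   # (60, 2.0): more total and average good

Same conclusion as the charts: the longer-lasting pleasures give both a greater total and a higher average good over the same 30-second window.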

 

But that's only a 1:1 comparison. What if one death reduces suffering among many? Say, if killing one baby would allow 8 people to extend their lives by 10 years (since 75 is the mean lifespan in the US anyway).

Each person suffers the harms and benefits of only one person, so it's rational to consider the harms and benefits of our actions on an individual-per-individual basis. So it doesn't matter that one death decreases the suffering of many, because that single death must be compared against the benefit of each single individual. The philosopher Tom Regan explains this kind of comparison, and the reasons for it, in a lot of depth (if you're interested, you can pick up the book "The Case for Animal Rights" and give it a read, and see how he argues it). We don't aggregate the benefit of all the individuals affected, but compare them as individuals, so we're still effectively making a 1:1 comparison; there are reasons to prefer this kind of moral reasoning, such as avoiding the awkwardness of saying "it's OK to murder one person if their organs will save 6 others".

 

Let's imagine a TV show that features someone torturing a baby, and the show entertains millions and millions of people. The harm caused to the baby is more profound than the enjoyment of any single television viewer, so we can conclude that torturing the baby is morally wrong, no matter how many people benefit. (We can also say that the viewers' enjoyment is fleeting and temporary, and that there are less cruel ways to entertain people, like watching Law and Order: SVU, so we have a few additional reasons not to torture people on TV.)

 

The same principles apply to the scenario you defined, but it also entails a number of other interesting moral questions: if it's OK to destroy a life to save another, would you say that murder is justified so long as another life is brought into existence immediately afterward? For instance, if a woman is pregnant, she could get away with murder on the grounds that she is having a baby to replace the person she has killed. I presume you aren't very comfortable with this, and for your own reasons you reject it as morally sound.

 

Of course, you also have to take into consideration that when you kill a being to save others, you deliberately harm it and do it a moral wrong; but when you try to save a person without killing others, you do no one any moral wrong at all, even if your patient doesn't survive. From the point of view of minimizing the harm we cause, we aren't justified in murdering others for anyone's benefit, so that kind of practice is morally wrong. And taken together with the "individual per individual" calculation above, and the fact that murder is not justified by bringing others into existence, we have a strong cumulative argument for the categorical abolition of all killing for the benefit of others.

 

 

 

 

Severian,

Let me ask a question to Bascule and IMM: if you had to choose between the life of an adult chimpanzee and your own baby child, which would you choose?

I value the chimpanzee just as much as I'd value my own children, but the chimpanzee has more morally relevant characteristics to take into consideration, and in the interest of minimizing the harm that I cause, I'd save the adult chimpanzee.


I hope you don't take this as shifting the burden of moral discourse, but if you don't think preserving a being's experiential welfare is a valid moral principle, what's your alternative?

It's one moral principle, but there are others for me, e.g. aesthetics. I didn't like to see the Golden Mosque destroyed because it was a beautiful building.


Please forgive me, but I think the point of everything you just said passed me by

 

Essentially, I'm wondering why a simple and functional system that occasionally throws an error due to logical contradictions is inferior to a system free from such errors but requiring vastly more time and mental resources to create and maintain. Is the freedom from rare errors worth the extensive effort?

 

It's an evolutionary position; sometimes 'good enough' really is good enough.

 

I hope you don't take this as shifting the burden of moral discourse, but if you don't think preserving a being's experiential welfare is a valid moral principle, what's your alternative?

 

I'm not so much disagreeing as pointing out that it's based on an assumption rather than fact, and that it is conceivable that there is a viable alternative (though I don't have one).

 

But in any case, the whole point with fleeting pleasures, in a utilitarian sense, is that a temporary pleasure doesn't produce as much good as a longer-lasting pleasure.

 

But it can if it's intense enough. From the graphs you showed of pleasure over time, the total pleasure is the area under the curve. However, a very tall spike with a short duration can have the same area as a very long but flat curve. Imagine the pleasure is either at level 10 for 1 minute or level 1 for ten minutes. The net pleasure is the same.

 

Another aspect that comes to mind: Say we have two beings who are equivalent in the morally relevant mental attributes; a retarded kid and a dog, both 5 years old. Your position would mean that the kid is worth more than the dog, because the dog will only have about 10 more years of life, while the kid, barring any associated medical issues, will have 70 more years.

By bringing in time, you make organismal lifespan a morally relevant variable.

 

Each person suffers the harms and benefits of only one person, so it's rational to consider the harms and benefits of our actions on an individual-per-individual basis. So it doesn't matter that one death decreases the suffering of many, because that single death must be compared against the benefit of each single individual. The philosopher Tom Regan explains this kind of comparison, and the reasons for it, in a lot of depth (if you're interested, you can pick up the book "The Case for Animal Rights" and give it a read, and see how he argues it). We don't aggregate the benefit of all the individuals affected, but compare them as individuals, so we're still effectively making a 1:1 comparison; there are reasons to prefer this kind of moral reasoning, such as avoiding the awkwardness of saying "it's OK to murder one person if their organs will save 6 others".

 

I'm sorry, but I don't buy that in the least. In fact, to me, that seems like just plain bullshit. If someone saves one person's life or 10000 people's lives, they've done the same thing? No.

 

You cannot simply examine things in isolation; the real world is not just isolated systems, but interacting systems. If the goal is to increase the welfare of all, you have to examine the aggregate masses, otherwise you do stupid things like *not* sacrificing one person when it means the survival of all.

 

Personally, I don't think there's a rational way that one can argue that aggregate good is irrelevant, and furthermore, I think the individual in question merely slapped together this position because without it his entire system crumbles.

 

The same principles apply to the scenario you defined, but it also entails a number of other interesting moral questions: if it's OK to destroy a life to save another, would you say that murder is justified so long as another life is brought into existence immediately afterward?

 

I wouldn't even rely on 'immediately', or bringing about another life. If someone's trying to kill me, I feel totally justified in killing them in self-defense. If I need someone's heart to live, it's neutral, a 1:1 exchange with no net moral value. But if that person is lesser in a morally relevant way (say a brain-dead individual kept alive only by machines), then yes, totally justified to extend my life.

 

Of course, you also have to take into consideration that when you kill a being to save others, you deliberately harm it and do it a moral wrong; but when you try to save a person without killing others, you do no one any moral wrong at all, even if your patient doesn't survive.

 

...unless you neglect a possible cure by avoiding harm, and therefore let the patient die by your negligence. To me, that's the same. If you kill to preserve a life, no net imbalance. If you avoid killing, but then the patient dies and you could have saved them by killing, you've deliberately killed the patient by refusing a potential treatment while saving the other, and therefore it's back to null again.

 

From the point of view of minimizing the harm we cause, we aren't justified in murdering others for anyone's benefit, so that kind of practice is morally wrong. And taken together with the "individual per individual" calculation above, and the fact that murder is not justified by bringing others into existence, we have a strong cumulative argument for the categorical abolition of all killing for the benefit of others.

 

I strongly disagree. I feel the individual-by-individual argument is specious and constructed solely to direct the reasoning to a pre-determined conclusion, since I see no logical reason why it should be the case. Additionally, I feel like letting someone die because you refuse to kill is killing in and of itself. If you see someone about to be murdered, and your only option to save them is to kill the murderer, you are morally obligated to kill, IMHO.

 

I value the chimpanzee just as much as I'd value my own children, but the chimpanzee has more morally relevant characteristics to take into consideration, and in the interest of minimizing the harm that I cause, I'd save the adult chimpanzee.

 

But would you, or would you act on instinct and save the human child? A logical moral system that cannot be executed because the instincts overrule it is useless.

 

Mokele


Snail said:

So would anyone eat artificially produced brain-dead infants if they tasted of lightly seasoned gammon steak... or would the stigma of them being 'human shaped' prove too much?

 

I would. As long as I have mine medium rare. I want to taste the youth! Grilled, sautéed with EVO, garlic, onions, and a splash of Merlot.

Save the eyes for last.

 

Get in my BELLY!


IMM, please forgive me if I'm reading you wrong. (I've just been discharged from hospital and may not be quite functioning yet.)

 

Your above posts seem to say that if there were someone about to release a toxin that would take out, say, 20% of humanity, it would be wrong for me to put a bullet into him to save those people. Or a simpler example: you and your family are lined up by a psycho and are about to be shot, one at a time. I'm also in the room and armed; is it your argument that it is wrong for me to drop him until after he has killed you and your family and is about to turn his weapon on me? Or would you be supremely happy that I chose to decorate the wall with him?

 

And that you would save an adult chimp over your own child?

 

I find both of these stances to be anti-survival. The first relies on the existence of people who don't believe the idea to protect (ensure the survival of) those who do; otherwise they would all get killed off. The second would ultimately result in a lack of offspring and therefore is not a survival trait.


Mokele,

Essentially, I'm wondering why a simple and functional system that occasionally throws an error due to logical contradictions is inferior to a system free from such errors but requiring vastly more time and mental resources to create and maintain. Is the freedom from rare errors worth the extensive effort?

 

It's an evolutionary position; sometimes 'good enough' really is good enough.

In other words, you're saying sometimes people's moral beliefs contradict, but it's OK to bite the bullet every now and then and move on (note that this is a very distinct claim from the statement that moral rules don't have to be logically consistent at all).

 

Generally, I think of morality in the same way I think of science: we can't ever know with perfect accuracy what physical laws are etched into the fabric of the universe, but every day our own models make closer and closer approximations; likewise, we can't ever know what moral rules are etched into the fabric of the moral universe, but every day our models make closer and closer approximations. Moral progress is a good thing, so long as we keep progressing.

 

But it can if it's intense enough. From the graphs you showed of pleasure over time, the total pleasure is the area under the curve. However, a very tall spike with a short duration can have the same area as a very long but flat curve. Imagine the pleasure is either at level 10 for 1 minute or level 1 for ten minutes. The net pleasure is the same.

Yes, that's true. But then again, we also have different flavors of utilitarianism, such as totalizing utilitarianism (which argues that if pleasures of any intensity and duration cover the same area, they are morally equal) and averaging utilitarianism (which argues that pleasures can differ if they cover the same area over a duration but have a different average pleasure at any given time).

 

Both kinds of utilitarianism have their advantages and disadvantages. For instance, totalizing utilitarianism seems to imply that it's morally better to house a huge population in barely livable conditions than to house a small population in decent conditions (a repugnant conclusion); while averaging utilitarianism only takes into account changes in the average happiness but not the total happiness, so that killing off the unhappiest 50% of the population is morally obligatory (another repugnant conclusion).

 

Maybe we can blend both of these approaches by calculating moral goodness with true Bayesian estimates, so that increasing the total amount of happiness and the average happiness simultaneously is good. It would be interesting to see that kind of approach advanced.
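
To make the two flavors concrete, here's a quick Python sketch (the happiness numbers are invented purely for illustration, and nothing here settles which view is right):

def total_happiness(population):
    return sum(population)

def average_happiness(population):
    return sum(population) / len(population)

# Totalizing utilitarianism ranks a huge, barely-happy population above a small, decently-happy one:
huge_but_miserable = [1] * 1000   # huge population, barely livable lives
small_but_decent = [50] * 10      # small population, decent lives
print(total_happiness(huge_but_miserable), total_happiness(small_but_decent))  # 1000 vs 500

# Averaging utilitarianism rewards culling the unhappiest half of a population:
everyone = [10] * 5 + [90] * 5
after_cull = [90] * 5
print(average_happiness(everyone), average_happiness(after_cull))  # 50.0 vs 90.0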

 

Another aspect that comes to mind: Say we have two beings who are equivalent in the morally relevant mental attributes; a retarded kid and a dog, both 5 years old. Your position would mean that the kid is worth more than the dog, because the dog will only have about 10 more years of life, while the kid, barring any associated medical issues, will have 70 more years.

By bringing in time, you make organismal lifespan a morally relevant variable.

Yes, that's true, lifespan and expectations can make a difference. Sometimes people ask me, "how many dog lives are equal to one person?", and I usually reply "five or six". But it also automatically implies that a person with a terminal illness has less claim to moral value than a person in good health, which is a little hard for people to morally digest.

 

Although, sometimes I wonder if lifespan really does make a difference. For instance, let's say someone has an abortion; is that person guilty of destroying 70 years of life, or is that person guilty of nothing because the fetus has no morally relevant characteristics to take into consideration?

 

And sometimes I wonder if it's really true that two beings with different lifespans and expectations have different claims to moral value. For instance, most of the time, I think of the value of creatures as the sum of all of their morally relevant characteristics, so that a creature with the capacity to feel pain and nothing else has (strictly speaking) less claim to moral value than a being with a capacity to feel pain, seek goals, and practice moral reciprocity. But then I consider that coherent arguments can be made that creatures don't have graduated moral value, but rather an inherent moral value that either exists or doesn't*; this kind of interpretation would mean that all creatures with inherent moral value have the same worth, no matter what differences they have in capacity.

 

* Now, before you call that an ad hoc system, I'd remind you that the "inherent value" idea is literally embodied in familiar phrases like "all men are created equal" and it has a huge amount of support in moral philosophy, based on some of the recent Kantian and Rawlsian revivals over the past 30 years. John Rawls, for instance, says that one could imagine drawing a circle, where everyone inside the circle has equal moral value and everyone outside the circle has none (imagine me waving my hands furiously as I say that, because really his argument is hugely more complicated than that). Tom Regan states that having a mental life that entails at least the capacity to have pleasurable and sufferable experiences, have desires, and pursue goals is inherently valuable, and all beings with those capacities have the same inherent value. I usually find a lot of my reasoning is some mix of utilitarianism and deontology, and sometimes I wonder if experiential welfare is properly thought of as inherently valuable, so that all creatures with an experiential welfare have the same moral value no matter what other differences they have.

 

I'm sorry, but I don't buy that in the least. In fact, to me, that seems like just plain bullshit. If someone saves one person's life or 10000 people's lives, they've done the same thing? No.

 

You cannot simply examine things in isolation; the real world is not just isolated systems, but interacting systems. If the goal is to increase the welfare of all, you have to examine the aggregate masses, otherwise you do stupid things like *not* sacrificing one person when it means the survival of all.

 

Personally, I don't think there's a rational way that one can argue that aggregate good is irrelevant, and furthermore, I think the individual in question merely slapped together this position because without it his entire system crumbles.

To that last part, that is a pretty harsh judgement of a philosopher whose works you've never even read (you have no idea how much it makes my head spin to read that, and in fact I think it embodies the exact same frustrated naivety -- and I use that word academically, not as an insult :) -- of creationists who say "Darwin just tossed together his theory of evolution because he didn't want to believe in Jesus"). If you're interested to see how the individual-per-individual calculations are laid out, I'd urge you to read what Regan actually has to say:

 

- Short article examining utilitarianism by Tom Regan

- The Dog in the Lifeboat, an exchange between deontologist philosopher Tom Regan and utilitarian philosopher Peter Singer. I find Regan's comments plausibly defend the idea that actions are weighed on an individual-per-individual basis, and Singer's comments miss the point a little.

 

...unless you neglect a possible cure by avoiding harm, and therefore let the patient die by your negligence. To me, that's the same. If you kill to preserve a life, no net imbalance. If you avoid killing, but then the patient dies and you could have saved them by killing, you've deliberately killed the patient by refusing a potential treatment while saving the other, and therefore it's back to null again.

In other words, you're guilty of multiple murder for not abducting people off the street and harvesting their organs and blood to save others. Instead of being guilty of one murder, you're guilty of hundreds because so few could have been sacrificed for the needs of many others!

 

Of course, if you're like me, you'll probably reject the claim that you're guilty of hundreds of murders because, after all, you don't have any recollection of actually murdering anyone, do you? There's a good reason for this: you're introducing the question of whether we are just as responsible for the lives we take as for the lives we fail to save. Generally, the consensus among academics is that we're more responsible for the lives we take, because it takes considerably more effort to save someone than to refrain from killing them; this is one of the reasons why failing to donate food to starving Africans overseas is not the moral equivalent of sending poisoned food. If you're interested to see this argument fleshed out in more detail, then I recommend any books by Helga Kuhse or Mary Midgley on euthanasia.

 

The principles above indicate that we aren't even talking about a zero-sum game anymore, because with the implication that it's wrong to cut up one person to save 6 others, we have grounds for saying one person's murder is worse than at least 6 people's unintended deaths, so that killing people is worse than not saving them. In this way, killing one person to save another is always morally worse than failing to kill someone, which leads to another's death. You might be able to get away with an exception, so long as one person's death is sufficiently beneficial, such as killing one infant to instantly cure AIDS; that might be logically consistent, but those kinds of examples are extremely rare, if they exist at all, in the real world.

 

However, your situation also involves some more complex considerations: on the one hand, if a mother withheld food from her child, and her actions led to the death of the child, then I would agree that the mother is guilty of murder. But for a doctor who fails to kill one person to save another, is he really guilty of murder? I don't really think that case applies, because the harm of killing a person is a rational constraint on the way the doctor can treat his patient (especially when taken in consideration of the fact that taking lives is morally worse than failing to save them).

 

I strongly disagree. I feel the individual-by-individual argument is specious and constructed solely to direct the reasoning to a pre-determined conclusion, since I see no logical reason why it should be the case.

At best, you're stating your beliefs without really justifying them. The individual-per-individual way of calculating harms isn't specious in the least, because it's a variant of deontological ethical systems, which have a pretty weighty influence in moral philosophy.

 

Let's say Bob feels harmed because you won't give him 20 dollars, and let's say Bob is one of 100 people who each want you to give them 20 dollars; Bob and each other individual is only harmed 20 dollars by your refusal, but you are harmed 2000 dollars by complying; is it really the case that you are harmed less by having your money taken than Bob is harmed by not being able to take your money? No, that would be nonsense, because it's plainly evident that Bob's individual harm is less than your individual harm just by calculating the net gains and losses. Is it true that any individual is harmed to a greater extent than yourself? No. Yet even when everyone's net gain is substantially less than your net harm, your argument implies you harm all of them profoundly for not giving away your money, more profoundly than they harm you for taking it. It's enough to make your head spin.
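
To spell out the arithmetic in that example, here's a tiny Python sketch (my own framing of the numbers above, not anything drawn from Regan):

# The 20-dollar example: compare harms individual-per-individual, not in aggregate.
my_harm_if_i_comply = 100 * 20         # paying all 100 people costs me 2000 dollars
each_persons_harm_if_i_refuse = 20     # each claimant is "harmed" only 20 dollars

# Individual-per-individual: my single harm vs each single claimant's harm.
print(my_harm_if_i_comply > each_persons_harm_if_i_refuse)  # True: refusing is permissible

# Aggregation, by contrast, sums the claimants' harms together:
aggregate_harm_if_i_refuse = 100 * each_persons_harm_if_i_refuse
print(aggregate_harm_if_i_refuse >= my_harm_if_i_comply)    # True: aggregation says pay up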

 

Additionally, I feel like letting someone die because you refuse to kill is killing in and of itself. If you see someone about to be murdered, and your only option to save them is to kill the murderer, you are morally obligated to kill, IMHO.

I agree with your conclusions, but the ways you got there are incoherent, because they imply that, if you don't donate all of the blood you possibly can, don't max out all of your credit cards and sell your home to donate money overseas, don't donate all of your expendable organs, and don't harvest people's organs every day, then you're a multiple mass murderer. You can come to the same conclusion by shifting the discussion from killing vs. letting die to something more subtle, like self-defense on behalf of others, social contract theories, or utilitarian angles:

 

- On the self-defense angle, you can say Person A has a right to defend herself from harm, but if for some reason Person A is incapacitated, then it's morally acceptable for Person B to defend Person A's life on Person A's behalf.

 

- On the social contract angle, you can say that a person who kills violates the social contract (presuming the social contract prohibits killing others), so the killer's contractarian protections are forfeited, and the killer no longer has a right to life.

 

- On the utilitarian angle, you can say that a killer causes more harm than a non-killer, so the killer's continued existence is profoundly more harmful than the victim's continued existence, and killing the killer ultimately minimizes harm compared to allowing the victim to be killed.

 

From a number of different moral stances, we can rationally say that killing murderers is less immoral than allowing murderers to kill. (If it matters at all, I most strongly agree with the first and third approaches.)

 

But would you, or would you act on instinct and save the human child? A logical moral system that cannot be executed because the instincts overrule it is useless.

I don't know why you think I was insincere with my first answer.

 

But in any case, I think this thread is about ready to go in about 50 million directions at once, so I'll make this my last post and let you have the last word :)

 

Oh, by the way, I purchased a membership to PETA today :) I was reluctant to do so for a long time, but they do a lot of good work which I support. w00t!

 

 

JohnB,

Your above posts seem to say that if there were someone about to release a toxin that would take out, say, 20% of humanity, it would be wrong for me to put a bullet into him to save those people. Or a simpler example: you and your family are lined up by a psycho and are about to be shot, one at a time. I'm also in the room and armed; is it your argument that it is wrong for me to drop him until after he has killed you and your family and is about to turn his weapon on me? Or would you be supremely happy that I chose to decorate the wall with him?

 

And that you would save an adult chimp over your own child?

 

I find both of these stances to be anti-survival. The first relies on the existence of people who don't believe the idea to protect (ensure the survival of) those who do; otherwise they would all get killed off. The second would ultimately result in a lack of offspring and therefore is not a survival trait.

It's perfectly fine for people to defend themselves, or for you to defend others on their behalf. I'm not sure what I've said that makes you think otherwise.

 

And I'm not really sure why you think anything I've said is anti-survival, or even why you think being anti-survival is morally wrong (taken at face value, your comment seems to imply that it's morally wrong for people to be altruistic, that martyrdom and dying for your country are morally wrong, and that someone who dies for another is evil -- you might as well be saying "I hate Jesus for dying for my sins" ;) ). I am both a strict vegan and a die-hard humanitarian who boycotts sweatshop products; I'm probably the least anti-survival person you'll ever meet in your life.


In other words, you're saying sometimes people's moral beliefs contradict, but it's OK to bite the bullet every now and then and move on (note that this is a very distinct claim from the statement that moral rules don't have to be logically consistent at all).

 

Pretty much, yeah. And while this doesn't invalidate logical consistency in moral systems, it does point out that using potential contradictions to exclude certain moral systems as incorrect is an arbitrary distinction which may do more harm than good by ignoring solutions which are, overall, superior in spite of their errors.

 

Generally, I think of morality in the same way I think of science: we can't ever know with perfect accuracy what physical laws are etched into the fabric of the universe, but every day our own models make closer and closer approximations; likewise, we can't ever know what moral rules are etched into the fabric of the moral universe, but every day our models make closer and closer approximations. Moral progress is a good thing, so long as we keep progressing.

 

I don't think that's a very good analogy at all, primarily because science relies on tests, and the assumptions can be tested. You can start with a thousand different sets of assumptions about morality, and go in a million directions, and you can never, ever know if any are right because you have no way of verifying, no objective test where the little light is green if it's good and red if it's bad.

 

From my POV, morality is like String Theory. You find some starting point, and begin constructing a logical framework. Progressively, you hone this framework, solving contradictions and addressing new problems. Then you realize that your theory is untestable without a particle accelerator the size of a galaxy, everyone loses interest, the publications trail off, and the theory gets a space on the shelf next to Aether and Humours.

 

Yes, that's true. But then again, we also have different flavors of utilitarianism, such as totalizing utilitarianism (which argues that if pleasures of any intensity and duration cover the same area, they are morally equal) and averaging utilitarianism (which argues that pleasures can differ if they cover the same area over a duration but have a different average pleasure at any given time). Both kinds of utilitarianism have their advantages and disadvantages.

 

And this is precisely what I'm talking about. Here you have two totally reasonable logical systems. In science, this is where you do a test of the real world, to find out which agrees with reality, but in philosophy, you *can't* test anything. So you either have to deal with both for the rest of eternity, or arbitrarily exclude one based on how scenarios 'make you feel'.

 

Someone far wiser than me once said something along the lines of: "Science has flourished while philosophy has stagnated because science chose to focus on questions which can be answered."

 

But it also automatically implies that a person with a terminal illness has less claim to moral value than a person in good health, which is a little hard for people to morally digest.

 

But that doesn't make it wrong.

 

Although, sometimes I wonder if lifespan really does make a difference. For instance, let's say someone has an abortion; is that person guilty of destroying 70 years of life, or is that person guilty of nothing because the fetus has no morally relevant characteristics to take into consideration?

 

Which runs us into the question of whether potential is a morally valid concept. (I seem to recall you've spoken on that before)

 

Actually, that brings up something else: if it's wrong to kill an insect, is it wrong to have an abortion once the fetus reaches the same level of neural development?

 

What about parasites? Under the concepts elucidated in your prior posts, do I have the right to rid myself of a tapeworm infestation? After all, I'm killing millions in exchange for nothing but my own comfort. What of a parasite with more developed mental abilities, such as a lamprey? Do I have the right to swat at a vampire bat if it tries to drink from me?

 

For instance, most of the time, I think of the value of creatures as the sum of all of their morally relevant characteristics, so that a creature with the capacity to feel pain and nothing else has (strictly speaking) less claim to moral value than a being with a capacity to feel pain, seek goals, and practice moral reciprocity. But then I consider that coherent arguments can be made that creatures don't have graduated moral value, but rather an inherent moral value that either exists or doesn't*; this kind of interpretation would mean that all creatures with inherent moral value have the same worth, no matter what differences they have in capacity.

 

Again, what I'm talking about and why I consider most philosophy a waste of time. Two different, rational views which are mutually exclusive, and this cannot be resolved because the system cannot be tested.

 

Has the crowd that thinks creatures either have it or don't addressed the issue of what happens when a creature evolves between these two states? It's clearly happened before, else there would be no such dichotomy, and it's pretty hard to look at a huge, gradually changing population and pick out which individuals have moral worth, or what genes confer it.

 

I'd remind you that the "inherent value" idea is literally embodied in familiar phrases like "all men are created equal" and it has a huge amount of support in moral philosophy, based on some of the recent Kantian and Rawlsian revivals over the past 30 years.

 

Aether was good physics until a couple of jokers with an interferometer proved it didn't exist. Tradition means nothing.

 

Tom Regan states that having a mental life that entails at least the capacity to have pleasurable and sufferable experiences, have desires, and pursue goals is inherently valuable, and all beings with those capacities have the same inherent value.

 

Why those criteria, though? Why do the goals need 'inherent' value? Is simple reproduction valuable enough? If not, congrats, you've just made a case for the lack of moral worth of 98% of the human species (really, can you tell me a car salesman has goals that are 'inherently valuable'?).

 

I usually find a lot of my reasoning is some mix of utilitarianism and deontology, and sometimes I wonder if experiential welfare is properly thought of as inherently valuable, so that all creatures with an experiential welfare have the same moral value no matter what other differences they have.

 

But anything experiential is based on sensation, as without sensation, the only experiences one can have are imaginary or hallucinatory. Even those can only be experienced if the brain has the appropriate bits. And all brains are different. Does a mantis shrimp have greater richness of experience than you or I because it can see into spectra we can't (and has trinocular vision in each eye)? This brings gradation back into it.

 

Also, what of species which have free-living and parasitic stages, such as parasitoid wasps or male anglerfish? Their abilities and experiential life change dramatically through their life cycles, so is their moral worth similarly variable?

 

If you're interested to see how the individual-per-individual calculations are laid out, I'd urge you to read what Regan actually has to say:

 

The first link: I'm terribly unimpressed. At one point he states that "[example] can be repeated in all sorts of cases, illustrating, time after time, how the utilitarian's position leads to results that impartial people find morally callous." I'm sorry, what? I thought philosophy was supposed to be about reasoning? If I want to know what people feel, I'll ask a psychologist. Besides, how do we know those feelings aren't erroneous, or are an accurate predictor of 'morality'? He tries to mimic science by testing, but resorts only to a cheap cop-out that tells us nothing. Later on, when attempting to craft an alternative, he immediately says that it cannot, of course, lead to certain conclusions (discrimination, slavery, etc.), because we have decided those are bad, again appealing to mere feelings to give the illusion of testability. He endorses his position based only on the exclusion of a handful of systems with supposed flaws (see my prior argument that errors are not necessarily indicative of a flawed moral system), and takes that as evidence that his is right. As H. L. Mencken once said, "Just because I have no remedy for all the troubles in the world does not mean I must accept yours."

 

The second article starts off better. The first error I noticed was this: "In the case of the harmful use of animals in science, animals are coercively placed at risk of harm, risks they would not otherwise run". I laughed so hard I almost puked. Someone needs to buy this guy a subscription to the nature channel and cure him of the delusion that an animal's life is somehow perfect and happy if humans aren't in the picture.

 

Later in the article, he states:

Suffice it to say that no one has a right to have his lesser harm count for more than the greater harm of another. Thus, if death would be a lesser harm for the dog than it would be for any of the human survivors (and this is an assumption Singer does not dispute), then the dog's right not to be harmed would not be violated if he were cast overboard. In these perilous circumstances, assuming that no one's right to be treated with respect has been part of their creation, the dog's individual right not to be harmed must be weighed equitably against the same right of each of the individual human survivors. To weigh these rights in this fashion is not to violate anyone's right to be treated with respect; just the opposite is true, which is why numbers make no difference in such a case. Given, that is, that what we must do is weigh the harm faced by any one individual against the harm faced by each other individual, on an individual, not a group or collective basis, it then makes no difference how many individuals will each suffer a lesser, or who will each suffer a greater, harm.

 

Now, I'm sorry, but this does *not* follow logically to me. If we weigh all harms equally, I can see how this does not violate the right to be treated with respect. But I cannot see *where* he gets the idea that this somehow must only apply on an individual basis. The individual weighing of harms no more precludes assessing the results in aggregate than the individual weighing of the nuts in a bag precludes me adding them up to determine the total weight of nuts in the bag. Additionally, his proposal also leads to one of those 'morally repugnant' conclusions he uses: I would not be morally justified in killing someone who would otherwise rape thousands of women and children. He repeatedly appeals to feelings in the first article, thus he cannot deny this without invalidating much of his own position.

 

Since Regan says that in these cases numbers do not count, and a million dogs should be thrown overboard in order to save a single human being, he would have to say that it would be better to perform the experiment on a million dogs than to perform it on a single human. Here we can see the extraordinary consequences of the refusal to take notice of numbers:

 

For once, I agree with Singer (though his 'facts', such as the uncertain benefits of animal testing, are laughable).

 

In other words, you're guilty of multiple murder for not abducting people off the street and harvesting their organs and blood to save others. Instead of being guilty of one murder, you're guilty of hundreds because so few could have been sacrificed for the needs of many others!

 

Not a compelling argument for someone with aspirations towards a career in super-villainy. ;)

 

Less facetiously, how do you know I'm not guilty as above (you as well, unless you have been harvesting organs, in which case I've got a list of requests...)? Sure, it sucks, but how do you know that's not the awful truth of morality? We can't know, because we can't test. And we can't reject it because we don't like it; that's arbitrary and possibly erroneous.

 

Generally, the consensus among academics is that we're more responsible for the lives we take, because it takes considerably more effort to save someone than to refrain from killing them

 

Tell that to someone who's worked retail. :D

 

Seriously, when did responsibility scale with effort? I'm just as responsible for breaking the cookie jar whether it took me months of planning or I just felt like dropping it on a whim.

 

In this way, killing one person to save another is always morally worse than failing to kill someone, which leads to another's death.

 

So I have no right to self-defense? And you wonder why I don't respect philosophy as a field?

 

But for a doctor who fails to kill one person to save another, is he really guilty of murder? I don't really think that case applies, because the harm of killing a person is a rational constraint on the way the doctor can treat his patient (especially when taken in consideration of the fact that taking lives is morally worse than failing to save them).

 

But what if killing that one patient saves 100? As I've already stated above, the argument that all comparisons are 1:1 seems just as ridiculous and logically flawed to me now as it did before I read those papers, if not more so, because now I see the poor assumptions and shoddy logic used to reach that conclusion.

 

The individual-per-individual way of calculating harms isn't specious in the least, because it's a variant of deontological ethical systems, which have a pretty weighty influence in moral philosophy.

 

The aether had a pretty weighty influence on early physics. It was still wrong.

 

I rejected the position because I saw no logical reason for it; having read those links, that remains the case.

 

Yet even when everyone's net gain is substantially less than your net harm, your argument implies you harm all of them profoundly for not giving away your money, more profoundly than they harm you for taking it. It's enough to make your head spin.

 

When did I say that? I'd say I harm each of them by precisely 20 dollars, and since that exactly equals the amount I'd be harmed, there's no net moral exchange, and I get to resort to my default selfishness.

 

because they imply that, if you don't donate all of the blood you possibly can, don't max out all of your credit cards and sell your home to donate money overseas, don't donate all of your expendable organs, and don't harvest people's organs every day, then you're a multiple mass murderer

 

Unless I reject the notion that things scale linearly, or in any discernible manner, in morality. Ignoring the cliché of two wrongs and a right, how do we *know* that one wrong of X moral units, followed by another, adds to 2X moral units, rather than, say, 1.7X? Even if we assume morality is something more than a human-imposed illusion to make ourselves feel good while justifying our behavior, there are lots of systems in nature that don't scale linearly. Maybe there's a point of diminishing returns, after which additional wrongs don't matter? Or the reverse, where a few wrongs mean almost nothing while a lot mean exponentially more? Once again, because we have nothing empirical, we cannot even address this basic question that haunts so many moral examples.
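Purely as a numerical toy (the functions and exponents below are my own arbitrary choices for illustration, not anything anyone in this thread has proposed), here is what three different aggregation rules would do to repeated wrongs of one "moral unit" each; only the linear rule gives the familiar 2X for two wrongs, while a mildly sublinear rule lands right around the 1.7 figure above.

[code]
# Toy sketch: three hypothetical ways n wrongs of x "moral units" each
# might aggregate. The exponents are arbitrary, chosen only to show how
# quickly the totals diverge once the linearity assumption is dropped.

def linear(n, x):
    return n * x                 # 2 wrongs of 1 unit -> 2.0

def diminishing(n, x, p=0.8):
    return (n ** p) * x          # 2 wrongs of 1 unit -> ~1.74

def escalating(n, x, p=2.0):
    return (n ** p) * x          # 2 wrongs of 1 unit -> 4.0

for rule in (linear, diminishing, escalating):
    print(rule.__name__, [round(rule(n, 1.0), 2) for n in range(1, 6)])
[/code]

Without something empirical to calibrate against, there is no principled way to pick among these curves, which is exactly the point.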

 

I don't know why you think I was insincere with my first answer.

 

I didn't mean it that way; I merely meant that while personal convictions can be powerful, so can instincts and hormone-controlled responses.

 

But in any case, I think this thread is about ready to go in about 50 million directions at once, so I'll make this my last post and let you have the last word.

 

Well, it has become exceptionally time consuming, but I've been enjoying it, so thanks for a debate that's made me think!

 

Mokele


Mokele

 

Again, what I'm talking about and why I consider most philosophy a waste of time. Two different, rational views which are mutually exclusive, and this cannot be resolved because the system cannot be tested.

 

All nicely argued. Just picked this one quote. Rather glad you included a qualifying "most" in it.

 

Sometimes I pick and choose a philosopher according to my mood, and sometimes I enjoy having my eyes opened to new ways of looking at things.

 

This, to me, is the wonder and value of philosophy. If I disagree, I can always find a contrary view that is just as well argued, however contradictory the two may appear.


IMM, I've reread things and please forget I opened my mouth.

 

Class: The lesson for today is "Why we should not try to follow or reply to threads of a deep philosophical nature while under the influence of morphine.":D


Locally raised buffalo, 20 km distant, is usually available at $5.50/lb. Today, as it occasionally is, it was $3.79 and I hooted and bought 3 lb. and am about to eat my soon-to-be-famous buffalo-jalapeno meatloaf. I have raised hogs on compost and goat's milk and watched the guy shoot them. A few times a week, I need meat. If you are nice I will share my recipes. No, I have not yet read the no doubt erudite head-tripping on this last page. . . . . . . . . . . . . . Now let's all sing along, "Home, home, on Lagrange..."



 

That's cheaper than trying to "grow" meat. No one here has mentioned the cost!

 

Geez, like 40% of my lab budget used to go on tissue culture (that's "growing" cells, for the non-bio people) alone. By the time I got a 10 cm plate to 100% confluency (about 10 million cells, or about a 0.1 mL volume of cells in a 15 mL falcon tube), the total cost was probably around $10, just for that one plate! That includes the cost of FBS (fetal bovine serum), media, antibiotics (to prevent bacterial and in some cases fungal growth), trypsin (to split cells), and PBS. For one plate you have to change the media at least 2 or 3 times to get healthy cells. Then there is the cost of the materials, such as the incubator, tissue culture dishes, CO2, deionized H2O, tubes, and the cost of disposing of biological waste.

 

The point is that "growing" meat is not economically feasible right now. $10 for 0.5 g of cells? Hmm... imagine what a burger made from cultured cells would cost ya!

 

Forget about scaling up production! That would be even more expensive, once you bring in not only the cost of growth materials but also the incubators, pumps, machinery to change the media and collect cells, and centrifuges.
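To put a rough number on that, here is a back-of-the-envelope sketch in Python using the plate cost quoted above; the patty mass and the assumption that 0.1 mL of packed cells weighs roughly 0.1 g are mine, so treat the output as an order-of-magnitude illustration only.

[code]
# Back-of-the-envelope cost of a cultured-cell patty, using the ~$10 per
# confluent 10 cm plate figure above. Patty mass and cell density (~1 g/mL)
# are assumptions, not measurements.

cost_per_plate = 10.0       # USD in reagents per confluent plate (figure above)
yield_per_plate_g = 0.1     # ~0.1 mL packed cells per plate, assuming ~1 g/mL
patty_mass_g = 113.0        # a quarter-pound patty (assumed target size)

plates_needed = patty_mass_g / yield_per_plate_g
reagent_cost = plates_needed * cost_per_plate

print(f"Plates needed: {plates_needed:.0f}")          # ~1130 plates
print(f"Reagent cost alone: ${reagent_cost:,.0f}")    # ~$11,300 per patty
[/code]

Even taking the more generous 0.5 g-per-plate figure instead, the reagents alone still come out above $2,000 per patty, before labour, equipment, or waste disposal are counted.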


Look out, I am snorting today! I like this sign on a neighbor's porch. I am blessedly child-free, and was a step-dad for more than a decade. I once said to YT, after a lively argument, that he should come spend some time out on the farm, and now I repeat this to all of you. A doe once died right at my fence, having been shot. A friend knew how to butcher; I gave thanks and we did it. I was quite at peace, though it is alarming when you have not been forced to integrate the pieces of your personality in this way before. I live in foothill land excellent for small farms, and anyone here with a pasture and some cows will look at you quizzically if you question their mode of efficiency: sunlight, water, and some space growing grass which we'd like to up-convert. Now don't judge me quickly, as I was influenced hugely by "Diet for a Small Planet" and was thrilled to be able to tell a 10-yr-old friend who does gymnastics that if she is going to eat small (she is thin), she needs to educate herself about protein facts and the whole discussion. I love my tofu, whole grains and fresh veggies, and my meatloaf is to die for. If we cannot live in reasonable harmony here, then there are too many of us. . . . . . . . To offer something positive, protein produced from sources like algae seems appetizing, much like soy products. I try to include more of such things; Soyrizo is a nice version of chorizo.


  • 1 month later...
Think about it: all the morally relevant characteristics a being has almost always refer to the being's mental and feeling capacities. For instance, the capacity to feel pain and pleasure, the capacity to be rational, the capacity to empathize with others, the capacity to seek long-term goals, and so on are direct statements about a being's mental and feeling capacities. As well, almost all moral actions you do to a being have to do with how you affect the being either directly or indirectly, which refers back to a being's mental and feeling experiences (which is lumped under the umbrella term "experiential welfare").


 

Why does a morally relevant characteristic refer to a being's mental and feeling capacities? Why is this worth more to you than its ability to photosynthesise for example? It seems to me that you are anthropomorphising.

 

The fact that something is human has nothing to do with anything, because species membership is not a moral characteristic.

 

IMM, please provide a definition of a 'moral characteristic' and then present a proof of what constitutes a moral characteristic. Why should we accept your definition?


I researched and found that only some of the buffalo meat here is local; it comes from a cooperative involving other Northwest states like Idaho, where they have space in grasslands. To me the 'moral' questions are treatment of the animal, use of land and resources, and quick and painless slaughter. I assume that if my head is blasted off I won't feel it for long. This is the tough point, and it is where we must get in touch with our inner Neanderthal. Like I said, I hear the sprouts scream!

