
Science based morality?


john5746


So your response is that the technology doesn't yet exist, but what he proposes is doable? Is that a fair representation of your post, or am I doing you an injustice and misrepresenting you?


I think a few people have said something along these lines before, but I have no problem with science informing moral and policy decisions. The more knowledge you have about a problem, the better, I say. The sticking point for me is responsibility. I just don't think that science should be used as a justification or an excuse for moral decisions; whatever we decide to do, we need to take responsibility for that choice, and should that choice end up hurting some people, we need to be willing to accept the blame. People can be capable of strange and terrible things when they feel like they're just "following orders," and I just wouldn't want that sense to come into play when moral decisions are being made.


Except that neither you nor Severian have made any sort of attempt to explain exactly why the ideas expounded in the video are 'bollocks', which doesn't reflect very well on your opinion.

 

I get the sense that the challenges to the idea of a science-based/informed morality are coming from folks who have not even bothered to view the short video in the OP. Are there specific challenges anyone can make after reviewing the short talk?

I watched the vid before posting.

 

To answer you both: the vid had no particular statements to refute beyond its main premise. That, and I disagree with the assumed simplicity of their forecast outcomes.

 

I do agree with Paralith's opening sentences.

 

I think a few people have said something along these lines before, but I have no problem with science informing moral and policy decisions. The more knowledge you have about a problem, the better, I say...


I watched the video. His presentation was unfocused and meandering. I didn't get a strong sense of what he was trying to argue. Anyway, I think the premise itself is flawed.

 

Take homosexuality. What if we find that there is an increased health risk for homosexual behavior (actually we already know that there is). Does this mean that homosexual behavior is wrong?

 

Or another example, we may find that there is some evolutionary reason to be racist. Does that mean that racism is acceptable? (I don't want to get into an argument over the scientific basis of the term "race". It's just an example)

 

I didn't hear him address this simple problem in scientism.

 

Furthermore, I don't understand his equation, happy=moral. That premise itself is a value judgment.

 

Bottom line: Morality is a value judgment. Science can help inform our judgments, but it can't make them for us.


I watched the video. His presentation was unfocused and meandering. I didn't get a strong sense of what he was trying to argue. Anyway, I think the premise itself is flawed.

 

Take homosexuality. What if we find that there is an increased health risk for homosexual behavior (actually we already know that there is). Does this mean that homosexual behavior is wrong?

 

No, because it is not hurting anyone else. If there are 'increased health risks', then they only affect those engaging in the act to begin with. It is rather like smoking. Laws have been passed (at least here) to keep people from smoking around those who wish not to suffer from the detrimental effects of second-hand smoke, but no law needs to be passed to restrict an individual's right to perform higher-risk behaviours.

 

Or another example, we may find that there is some evolutionary reason to be racist. Does that mean that racism is acceptable? (I don't want to get into an argument over the scientific basis of the term "race". It's just an example)

 

This is a weak example as well, because social Darwinism of the type to which you are referring is actually a gross misrepresentation of the theory of evolution.

 

Furthermore, I don't understand his equation, happy=moral. That premise itself is a value judgment.

 

That equation is the basis for utilitarianism. Whatever maximises the happiness and comfort of the greatest number of people is the best action to take.

 

Bottom line: Morality is a value judgment. Science can help inform our judgments, but it can't make them for us.

 

Bottom line: Utilitarianism, you must agree, is the best way to determine objectively what is right and wrong. All the speaker is proposing is that we accept that scientific processes should inform our decisions about what constitutes the maximisation of happiness, as opposed to throwing up our hands and surrendering to the notion that morality is completely subjective.


To achieve the level of comprehension of the brain he's describing we'd have to create strong AI. There's no two ways around it... he's talking about using neuroscience to understand extremely high level behaviors which can't even be comprehended without a comprehensive model of how consciousness itself operates. It's the kind of cognitive science that couldn't take place until you had a complete model of the brain inside a computer to play with....

 

I had the feeling he was making it sound too simple. He mentions decades at one point, which just seems very soon to me. But, since I am ignorant in these areas, I thought I would see what others thought. Thanks for the comments.

 

 

I watched the video. His presentation was unfocused and meandering. I didn't get a strong sense of what he was trying to argue. Anyway, I think the premise itself is flawed.

 

I found it to be too general; that may be why we have such disagreement in this thread.

 

Take homosexuality. What if we find that there is an increased health risk for homosexual behavior (actually we already know that there is). Does this mean that homosexual behavior is wrong?

 

Or another example, we may find that there is some evolutionary reason to be racist. Does that mean that racism is acceptable? (I don't want to get into an argument over the scientific basis of the term "race". It's just an example)

 

I didn't hear him address this simple problem in scientism.

 

Furthermore, I don't understand his equation, happy=moral. That premise itself is a value judgment.

 

Bottom line: Morality is a value judgment. Science can help inform our judgments, but it can't make them for us.

 

You bring up good points, in fact some of the other attendees bring up similar points in later discussions. Individual vs social well-being would be one big hurdle.


No, because it is not hurting anyone else. If there are 'increased health risks', then they only affect those engaging in the act to begin with. It is rather like smoking. Laws have been passed (at least here) to keep people from smoking around those who wish not to suffer from the detrimental effects of second-hand smoke, but no law needs to be passed to restrict an individual's right to perform higher-risk behaviours.

 

You need to define "hurt" for your argument to work. Physical harm? Reducing the chance of reproduction? What is it?

 

Take the smoking example: if you smoke 50 a day and die of lung cancer in your 40s, you may leave behind a teenager who then has to go into foster care. Your actions have caused psychological hurt and maybe even long-term damage. Should that be counted?

 

Irrespective of your opinion, someone will disagree with you, and you will not be able to defend your point of view without using aesthetic arguments. In other words, your attempt to use science to dictate morality fails.


You need to define "hurt" for your argument to work. Physical harm? Reducing the chance of reproduction? What is it?

 

Take the smoking example: if you smoke 50 a day and die of lung cancer in your 40s, you may leave behind a teenager who then has to go into foster care. Your actions have caused psychological hurt and maybe even long-term damage. Should that be counted?

 

Irrespective of your opinion, someone will disagree with you, and you will not be able to defend your point of view without using aesthetic arguments. In other words, your attempt to use science to dictate morality fails.

 

Nope. Because how would you determine whether or not the teenager left in foster care suffers true psychological damage? Carefully constructed observational studies. "Points of view" become hypotheses and science becomes the tool to demonstrate the validity of said hypotheses.


Irrespective of whether they suffer "true psychological damage," they are still disadvantaged by their parent dying. You have to draw the line somewhere, and where that line is drawn will always be a matter of opinion. In your case, that opinion is in the hands of the psychological observer.


Irrespective of whether they suffer "true psychological damage," they are still disadvantaged by their parent dying. You have to draw the line somewhere, and where that line is drawn will always be a matter of opinion. In your case, that opinion is in the hands of the psychological observer.

 

And he does not claim that science will be able to point to the 'best' morality, but rather that it can inform us in a more objective, general way, which is a better system than what is in place.

 

For what it's worth, I do think that smoking is irresponsible, especially if you have children, because of second-hand smoke and also the dependency issues created in the case of death, and is in fact immoral. In fact, you may be hard pressed to find someone basing their morality in empiricism who will say otherwise.


No, because it is not hurting anyone else. If there are 'increased health risks', then they only affect those engaging in the act to begin with. It is rather like smoking. Laws have been passed (at least here) to keep people from smoking around those who wish not to suffer from the detrimental effects of second-hand smoke, but no law needs to be passed to restrict an individual's right to perform higher-risk behaviours.

 

 

I don't understand your logic. Severian already covered the counter-argument here.

 

Furthermore, you oversimplify human psychology. People often do a thing because of hormonal drive, nature, or addiction, not because they're actively and consciously seeking to increase their own happiness.

 

This is a weak example as well, because social Darwinism of the type to which you are referring is actually a gross misrepresentation of the theory of evolution.

 

Actually, it's not. It's now called evolutionary psychology or sociobiology, which is what I'm talking about. Either way, people were afraid of "social Darwinism" for the same reasons people are now afraid of evolutionary psychology: we may find that our nature is opposed to our ethics. Or, as E.O. Wilson puts it, "what is is not necessarily what ought to be."

 

This is exactly what is at the core of the argument.

 

That equation is the basis for utilitarianism. Whatever maximises the happiness and comfort of the greatest number of people is the best action to take.

 

But why utilitarianism? Why that one code of ethics over any other? Why is happiness so important to you? It's a value judgment. Many of my students would be much happier if I never gave them tests or homework. Does this mean it's unethical that I give them homework, because I've decreased their happiness and comfort?

 

I don't buy into the equation, that's my point. Happiness is not the ultimate goal in my ethics.

 

Moreover, utilitarianism is a numbers game. A utilitarian fireman would never go into a burning building unless it contained at least two people to save (because a one-for-one trade would not be utilitarian). In other words, how does self-sacrifice fit into utilitarianism?

 

These are all really rhetorical questions based on value judgments.


You bring up good points, in fact some of the other attendees bring up similar points in later discussions. Individual vs social well-being would be one big hurdle.

 

Thanks. You're right about individual vs. societal well-being. I didn't notice the second part. If it's there I'll have to watch it.

Edited by MM6

I had the feeling he was making it sound too simple. He mentions decades at one point, which just seems very soon to me. But, since I am ignorant in these areas, I thought I would see what others thought. Thanks for the comments.

 

Vernor Vinge, who popularized the whole "Singularity" concept, predicts strong AI by 2030. Ray Kurzweil, the johnny-come-lately to the party, predicts it by 2045. Sadly, flying cars will still be a few years away...

 

One thing I should add: A "science of morality" is one of the more mundane things you could do with strong AI. If you believe the Singularity hubbub the creation of strong AI will be one of the most transformational events in human history. It will certainly transform cognitive science, which is in something of a nascent state at the moment. Cognitive science will finally be able to answer rather abstract questions about human nature scientifically. A "science of morality" is just the tip of the iceberg.

