jimmydasaint

Why are we humans and not robots?


And why shouldn't any other computing device be able to "think recursively"?

IMO, if I understand recursion correctly, it is similar to Descartes' cogito ergo sum (I think, therefore I am). This led him to believe that even if he doubted his existence, there was someone doing the doubting. In short, it is thinking about the act of thinking. This requires the human experience of a "self" to which events have occurred and upon which thought can be "built".

If you properly define things like "compassionate" or "genuinely empathic", I can program a computer to show that behaviour. The same goes for the "freshness" of air, though that would obviously need some sensors to detect e.g. CO2 levels.

 

I can program a computer to (randomly) avoid interaction if there is a risk of not getting a response (fear of rejection).
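A minimal sketch of that behaviour in Python (the function name, the `rejection_risk` estimate and the `timidity` weight are all illustrative assumptions, not anything from a real system):

```python
import random

def should_initiate_contact(rejection_risk, timidity=0.5):
    """Skip the interaction with a probability that grows with the
    estimated risk of not getting a response ('fear of rejection').
    rejection_risk and timidity are both in [0, 1]."""
    return random.random() >= rejection_risk * timidity

# With a high estimated risk and maximum timidity, most approaches are avoided.
random.seed(42)
approaches = sum(should_initiate_contact(0.9, timidity=1.0) for _ in range(1000))
# approaches is roughly 10% of 1000 attempts
```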

 

About psychopaths: what makes you think they are more like robots? They simply have different weighting functions for making decisions. It's not as if the rest of us don't weigh our actions.

 

 

On second thought, one could argue that psychopaths are less like robots. After all, most robots are programmed to take the well-being of (other) humans into account.

I don't doubt your programming brilliance. However, you are suggesting similarities to human qualities. How does the computer show that it is an entity to which things have happened in its environment and from which it has developed a personality? Compassion can easily be put on for show by people and computers alike, but true compassion seems to develop from character and a conscious choice rather than from copying a compassionate action. For example, feeding the hungry would be an act of compassion. A computer could ask people if they are hungry and then feed them, of course, but could it be programmed to feel that the action was rewarding and an act of personal growth? If you could do that, then you should be rewarded with a huge grant and a team of postdocs.


You can program 'fear' as a response to an input (or several) into your computer, Bender, and every single computer running that program will have exactly the same response.

Yet some humans are afraid of the dark, some aren't.

Some are claustrophobic, some are not.

Some are afraid of heights, some aren't.

 

Do you see where I'm going with this ?

We are not simply 'running a program'.

You could program a fear response, but it's anybody's guess whether the robot actually feels any kind of fear emotion.


I don't doubt your programming brilliance. However, you are suggesting similarities to human qualities. How does the computer show that it is an entity to which things have happened in its environment and from which it has developed a personality? Compassion can easily be put on for show by people and computers alike, but true compassion seems to develop from character and a conscious choice rather than from copying a compassionate action. For example, feeding the hungry would be an act of compassion. A computer could ask people if they are hungry and then feed them, of course, but could it be programmed to feel that the action was rewarding and an act of personal growth? If you could do that, then you should be rewarded with a huge grant and a team of postdocs.

I'm not even that great a programmer. ;)

 

Like others, you haven't given any definitions, so I'll have to throw in my own.

Humans, computers and robots all use cost functions to decide which action to take, weighing benefits against costs and risks. I suppose that to call an action a "true act of compassion", it needs to have a cost, while the only benefit can be "feeling good about yourself".

 

So I'll give the robot an additional parameter, which I'll call "IAmGreat". Whenever the robot takes an action with no other benefit, the IAmGreat parameter is increased. Then I'll add increasing this parameter as an additional term to the cost function with some weight. There you go.
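A toy sketch of that scheme in Python (the class, the action format and the weights here are all illustrative assumptions, not a real robot API):

```python
# Toy sketch: a cost function with an extra "IAmGreat" term, as described above.
class Robot:
    def __init__(self, altruism_weight=1.0):
        self.i_am_great = 0                  # grows with otherwise-unrewarded actions
        self.altruism_weight = altruism_weight

    def utility(self, action):
        """Benefit minus cost, plus a weighted 'feel good' term for
        actions that have a cost but no other benefit."""
        feel_good = 1 if (action["benefit"] == 0 and action["cost"] > 0) else 0
        return action["benefit"] - action["cost"] + self.altruism_weight * feel_good

    def choose(self, actions):
        best = max(actions, key=self.utility)
        if best["benefit"] == 0 and best["cost"] > 0:
            self.i_am_great += 1             # the robot 'feels good about itself'
        return best["name"]

robot = Robot(altruism_weight=2.0)
actions = [
    {"name": "feed_the_hungry", "benefit": 0, "cost": 1},  # pure compassion
    {"name": "do_nothing",      "benefit": 0, "cost": 0},
]
print(robot.choose(actions))  # feed_the_hungry: 0 - 1 + 2 = 1 beats 0
```

With `altruism_weight=0.0` the same robot picks "do_nothing", so one parameter is the whole difference between the "compassionate" robot and the "selfish" one.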


I'm not even that great a programmer. ;)

 

Like others, you haven't given any definitions, so I'll have to throw in my own.

Humans, computers and robots all use cost functions to decide which action to take, weighing benefits against costs and risks. I suppose that to call an action a "true act of compassion", it needs to have a cost, while the only benefit can be "feeling good about yourself".

 

So I'll give the robot an additional parameter, which I'll call "IAmGreat". Whenever the robot takes an action with no other benefit, the IAmGreat parameter is increased. Then I'll add increasing this parameter as an additional term to the cost function with some weight. There you go.

OK, I am sure you are being modest, which is a good quality. I will present a definition of compassion which, IMHO, is a most human/humane quality of what we define as character:

 

compassion (noun)
1. a feeling of deep sympathy and sorrow for another who is stricken by misfortune, accompanied by a strong desire to alleviate the suffering.

compassion /kəmˈpæʃən/ (noun)
1. a feeling of distress and pity for the suffering or misfortune of another, often including the desire to alleviate it.
Word origin: C14: from Old French, from Late Latin compassiō fellow feeling, from compatī to suffer with, from Latin com- with + patī to bear, suffer.
Collins English Dictionary - Complete & Unabridged 2012 Digital Edition

My main point would be the following: does the computer feel better as a result of a kind action? Do I feel better when I show compassion to someone? Yes, undoubtedly I feel better about myself. Does a computer feel good? I am sure you can replicate feelings by, for example, making a smiley face light up as the computer's "IAmGreat" parameter increases, but does it feel it has made a difference to someone's life, as we humans do? I don't know.

 

I am trying to understand why humans evolved as social beings rather than as pure "survival machines", because I am assuming that being social requires facets of character which may not be directly selected for by natural selection. However, I am willing to concede that aspects of brain composition, such as the number of oxytocin receptors or "mirror" neurons, could play a part as genetic determinants of social behaviour. Nevertheless, I question why humanity is not populated solely by those who are fittest for survival. I remember being told anecdotally that psychopaths thrive in top professions and sit at the top of many industries due to their selfish, survival-oriented traits in modern life. It made me wonder why we need human qualities such as empathy or compassion at all in the first place.

 

 

Abstract

A common variant in the oxytocin receptor gene (OXTR), rs53576, has been broadly linked to socially related personality traits and behaviors. However, the pattern of published results is inconsistent. Here, we performed a meta-analysis to comprehensively evaluate the association. The literature was searched for relevant studies, and effect sizes were computed between individuals homozygous for the G allele (GG) and A allele carriers (AA/AG). Specifically, two indices of sociality were evaluated independently: i) general sociality (24 samples, n = 4955), i.e., how an individual responds to other people in general; and ii) close relationships (15 samples, n = 5262), i.e., how an individual responds to individuals with close connections (parent-child or romantic relationships). We found a positive association between the rs53576 polymorphism and general sociality (Cohen's d = 0.11, p = .02); G allele homozygotes had higher general sociality than A allele carriers. However, the meta-analyses did not detect a significant genetic association between rs53576 and close relationships (Cohen's d = 0.01, p = .64). In conclusion, genetic variation in rs53576 influences general sociality, which further implies that it is worthwhile to systematically examine whether rs53576 is a valid genetic marker for socially related psychiatric disorders.

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0131820

 

 

The mechanism by which humans perceive others differs greatly from how humans perceive inanimate objects. Unlike inanimate objects, humans have the distinct property of being "like me" in the eyes of the observer. This allows us to use the same systems that process knowledge about self-performed actions, self-conceived thoughts, and self-experienced emotions to understand actions, thoughts, and emotions in others. The authors propose that internal simulation mechanisms, such as the mirror neuron system, are necessary for normal development of recognition, imitation, theory of mind, empathy, and language. Additionally, the authors suggest that dysfunctional simulation mechanisms may underlie the social and communicative deficits seen in individuals with autism spectrum disorders.

https://pdfs.semanticscholar.org/6661/16b9135d9fb9cd3d19c9d594c8b530996226.pdf


OK, I am sure you are being modest, which is a good quality. I will present a definition of compassion which, IMHO, is a most human/humane quality of what we define as character:

My main point would be the following: does the computer feel better as a result of a kind action? Do I feel better when I show compassion to someone? Yes, undoubtedly I feel better about myself. Does a computer feel good? I am sure you can replicate feelings by, for example, making a smiley face light up as the computer's "IAmGreat" parameter increases, but does it feel it has made a difference to someone's life, as we humans do? I don't know.

 

I am trying to understand why humans evolved as social beings rather than as pure "survival machines", because I am assuming that being social requires facets of character which may not be directly selected for by natural selection. However, I am willing to concede that aspects of brain composition, such as the number of oxytocin receptors or "mirror" neurons, could play a part as genetic determinants of social behaviour. Nevertheless, I question why humanity is not populated solely by those who are fittest for survival. I remember being told anecdotally that psychopaths thrive in top professions and sit at the top of many industries due to their selfish, survival-oriented traits in modern life. It made me wonder why we need human qualities such as empathy or compassion at all in the first place.

 

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0131820

 

https://pdfs.semanticscholar.org/6661/16b9135d9fb9cd3d19c9d594c8b530996226.pdf

Of course the computer "feels" that it has made a difference: it increased its IAmGreat parameter. Seriously, our "feeling" might be more complex and involve a lot of neural processes and chemicals, but it has long been established in this thread that if complexity is the only difference, humans are not fundamentally different from robots.

 

Your second link demonstrates nicely that humans like to think we are special, because we think differently about humans than about anything else. That does not make us special.

 

About the evolution of social beings: psychopaths cannot form a social group, and individual humans would not have survived long. It would have been nigh impossible for a solitary mother to raise and protect children. Social structure was vital for such a squishy, ape-like mammal with such a long nurturing period. Psychopaths would either need to display social behaviour or be excluded from the group. So it could be as much a disadvantage as an advantage.

Edited by Bender


Of course the computer "feels" that it has made a difference: it increased its IAmGreat parameter. Seriously, our "feeling" might be more complex and involve a lot of neural processes and chemicals, but it has long been established in this thread that if complexity is the only difference, humans are not fundamentally different from robots.

 

Cogito ergo sum: I think, therefore I am. If I doubted my existence, there would still be a "someone" doing the doubting. You see things, Bender. Light enters your eyes, is picked up by the rod and cone cells, and is passed on via neurons to the brain. Who sees the image in the brain and interprets it?

 

Your second link demonstrates nicely that humans like to think we are special, because we think differently about humans than about anything else. That does not make us special.

 

My aim was to establish the role of genetics in social interactions, and I gave two examples where genetic predisposition was important to the development of social personality characteristics.

 

About the evolution of social beings: psychopaths cannot form a social group, and individual humans would not have survived long. It would have been nigh impossible for a solitary mother to raise and protect children. Social structure was vital for such a squishy, ape-like mammal with such a long nurturing period. Psychopaths would either need to display social behaviour or be excluded from the group. So it could be as much a disadvantage as an advantage.

 

Give me some abstracts to read, please. However, it is important to note that psychopaths may be successful precisely because they have no empathy or compassion except for themselves, and can therefore imitate human qualities in a more complex but similar way to a computer. I will try to find some evidence of this myself.


"Who sees the image in the brain and interprets it?"

The brain does.

Similarly, a computer thinks, therefore it is.

 

"My aim was to establish the role of genetics in social interactions, and I gave two examples where genetic predisposition was important to the development of social personality characteristics."

And by doing so, you highlighted (one of) the reasons why humans think they are special.

 

"However, it is important to note that psychopaths may be successful precisely because they have no empathy or compassion except for themselves, and can therefore imitate human qualities in a more complex but similar way to a computer."

I already gave an argument for why psychopaths are actually less like robots or computers. Computers are generally not selfish and take the well-being of others into account.

 

Do I really need to find a reference to prove that a solitary mother is unlikely to raise a child, which is virtually helpless for a decade, in an environment full of predators?

Moreover, since human children die quite often without modern medicine, she would have to feed, raise and protect multiple children simultaneously.

Edited by Bender


Computers are generally not selfish and take the well-being of others into account.

Please provide evidence and citations.

I already gave an argument for why psychopaths are actually less like robots or computers. (Implying it's because psychopaths are selfish.)

 

Since you are implying that psychopaths are selfish, I must bring up the Train Thought Experiment, which proved that psychopaths can be more noble and selfless than entities tainted and deluded by their emotions.

Edited by quickquestion


Please provide evidence and citations.

 

Your computer posted this message of yours for you, without personal gain. It even presented you the information in the best way it could, all for your convenience. Nearly all computers and robots serve only others, not themselves.

 

I didn't know about the selfless psychopaths. My argument obviously doesn't hold for those, but neither do they pose a disadvantage to social groups, nor do they have an advantage due to selfish behaviour, which is what I reacted to.

Edited by Bender


Since you are implying that psychopaths are selfish

 

Duh, of course they are :doh:.

 

I must state the Train Thought Experiment that proved that ~~psychopaths~~ sociopaths can be more noble and selfless than entities tainted and deluded by their emotions.

 

 

FTFY. BTW, what makes you think a psychopath has no emotion?


Your computer posted this message of yours for you, without personal gain. It even presented you the information in the best way it could, all for your convenience. Nearly all computers and robots serve only others, not themselves.

 

I didn't know about the selfless psychopaths. My argument obviously doesn't hold for those, but neither do they pose a disadvantage to social groups, nor do they have an advantage due to selfish behaviour, which is what I reacted to.

By computers I thought you meant AI. You are ascribing personality qualities to tools and objects, which is ridiculous. Saying a computer is selfless is like saying a hammer is noble. Totally ridiculous.

 

 

 

Duh, of course they are :doh:.

 

 

FTFY. BTW, what makes you think a psychopath has no emotion?

First of all, the terms psychopath and sociopath imply a kind of Hollywood hysteria, like you would see in National Enquirer-type magazines. The terms kind of blur into each other.

 

My definition of a psychopath is someone who feels little or very little compassion.

That doesn't mean they have no empathy, because they could have other emotions besides compassion. They could also have a hunter's empathy, which allows them to think and put themselves in someone else's shoes.

 

A compassionate person could have very little empathy. For example, someone who circumcises a baby could feel compassion after doing it but have very low empathy for what the baby wants, or very low awareness of what circumcision victims want for themselves and their needs. Rather like a religious person who feels compassion while sending gays to conversion therapy, because they want to save them from hell.

 

Similarly, a nice person could grow up to be a psychopath because they have no compassion for a sick and sad world full of injustice and garbage people.

 

There could be heroic psychopaths who want to help the world despite how evil it is.

Or heroic psychopaths who are ignorant of Earth's evils and just want to help Earth. Or psychopaths such as Spock and the Vulcans, who try to do good.

Thus psychopaths tend to be more effective at saving people in the Train Experiment, because normal people are limited by their emotions and would rather feel good about themselves than actually save anyone.

Edited by quickquestion


By computers I thought you meant AI. You are ascribing personality qualities to tools and objects, which is ridiculous. Saying a computer is selfless is like saying a hammer is noble. Totally ridiculous.

What is the difference between a computer and an AI? It is a gradual transition and every computer has some intelligence. At what point would it stop being ridiculous? Why should we even draw a line?

 

First of all, the terms psychopath and sociopath imply a kind of Hollywood hysteria, like you would see in National Enquirer-type magazines. The terms kind of blur into each other.

 

My definition of a psychopath is someone who feels little or very little compassion.

That doesn't mean they have no empathy, because they could have other emotions besides compassion. They could also have a hunter's empathy, which allows them to think and put themselves in someone else's shoes.

 

A compassionate person could have very little empathy. For example, someone who circumcises a baby could feel compassion after doing it but have very low empathy for what the baby wants, or very low awareness of what circumcision victims want for themselves and their needs. Rather like a religious person who feels compassion while sending gays to conversion therapy, because they want to save them from hell.

 

Similarly, a nice person could grow up to be a psychopath because they have no compassion for a sick and sad world full of injustice and garbage people.

 

There could be heroic psychopaths who want to help the world despite how evil it is.

Or heroic psychopaths who are ignorant of Earth's evils and just want to help Earth. Or psychopaths such as Spock and the Vulcans, who try to do good.

Thus psychopaths tend to be more effective at saving people in the Train Experiment, because normal people are limited by their emotions and would rather feel good about themselves than actually save anyone.

I can't find your definition on Wikipedia, which seems to focus on antisocial behaviour and lack of empathy. Why don't we stick to the common definitions of words?

(Link: https://en.m.wikipedia.org/wiki/Psychopathy )


What is the difference between a computer and an AI? It is a gradual transition and every computer has some intelligence. At what point would it stop being ridiculous? Why should we even draw a line?

 

I can't find your definition on Wikipedia, which seems to focus on antisocial behaviour and lack of empathy. Why don't we stick to the common definitions of words?

(Link: https://en.m.wikipedia.org/wiki/Psychopathy )

Their definition is vague, and even they say that sociopath and psychopath overlap. They also say the word has contradictory definitions.

 

The difference between a computer and an AI is that AI is a subset of computers. The kind of computer he was referring to was a simple tool.

Now, an AI is able to make intelligent decisions of its own; it is not simply a basic tool. And if an AI became so smart that it could make intelligent decisions indiscernible from a human's, we still would not know whether it was a p-zombie or whether it had any awareness inside it.

Edited by quickquestion


Their definition is vague, and even they say that sociopath and psychopath overlap. They also say the word has contradictory definitions.

 

The difference between a computer and an AI is that AI is a subset of computers. The kind of computer he was referring to was a simple tool.

Now, an AI is able to make intelligent decisions of its own; it is not simply a basic tool. And if an AI became so smart that it could make intelligent decisions indiscernible from a human's, we still would not know whether it was a p-zombie or whether it had any awareness inside it.

On the other hand, we technically don't know that about any of our fellow humans, either.


Their definition is vague, and even they say that sociopath and psychopath overlap. They also say the word has contradictory definitions.

 

The difference between a computer and an AI is that AI is a subset of computers. The kind of computer he was referring to was a simple tool.

Now, an AI is able to make intelligent decisions of its own; it is not simply a basic tool. And if an AI became so smart that it could make intelligent decisions indiscernible from a human's, we still would not know whether it was a p-zombie or whether it had any awareness inside it.

You would need to define "intelligent decisions" or "awareness". My computer knows its name and its location, and can run self-diagnostics. Even my toaster is "aware" that it is on or off and can make an "intelligent decision" about when to pop out the toast.
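To put it concretely, the toaster's whole "awareness" fits in a few lines (a deliberately trivial sketch; all names and the timing are illustrative):

```python
class Toaster:
    """A toy toaster that is 'aware' it is on and makes an
    'intelligent decision' about when to pop out the toast."""

    def __init__(self, pop_after=120):
        self.on = False
        self.elapsed = 0
        self.pop_after = pop_after   # seconds of heating before the toast pops

    def start(self):
        self.on = True
        self.elapsed = 0

    def tick(self, seconds=1):
        if self.on:
            self.elapsed += seconds
            if self.elapsed >= self.pop_after:
                self.on = False      # the 'decision': pop the toast and switch off
                return "pop!"
        return None

toaster = Toaster(pop_after=3)
toaster.start()
print([toaster.tick() for _ in range(4)])  # [None, None, 'pop!', None]
```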


You would need to define "intelligent decisions" or "awareness". My computer knows its name and its location, and can run self-diagnostics. Even my toaster is "aware" that it is on or off and can make an "intelligent decision" about when to pop out the toast.

Ok now you are just clowning around on the funny boat.

 

You are changing "definitions" when it is convenient for you.

It's like, if I say a cheerleader is the lead of the parade, you'd say No, because she isn't made out of metal.


Which definition did I change?

The definition of aware. Toasters are not aware because their logic circuit isn't complex enough to have awareness.

By awareness I don't mean consciousness (as in, not being a p-zombie), but just any kind of loop/routine that could be aware of itself.


The definition of aware. Toasters are not aware because their logic circuit isn't complex enough to have awareness.

By awareness I don't mean consciousness (as in, not being a p-zombie), but just any kind of loop/routine that could be aware of itself.

Ok, now that we have a definition of awareness, the toaster no longer applies (although the definition is still vague enough to squeeze it in if I really wanted to, since you didn't define when something is "aware of itself" ;)). My computer still has awareness, though.

 

I finally looked up p-zombies. I don't see what makes them different from humans (either that, or they are logically contradictory).

Edited by Bender

