
Transhumanism: Man vs Machine


ydoaPs

Recommended Posts

Suppose our prosthesis technology advances to the point that any body part can be replaced with an artificial one. If one were to slowly replace organs and limbs until no original organs are left, is the remaining thing (essentially a robot) still that person? Is it even still a person? Should it retain the rights it had as a multicellular organic being?

 

Would it be any different if the brain were replaced in chunks (part by part, with each part assuming its role prior to the removal of the organic counterpart, so that continuity is not disrupted)?

 

I think it would be, as I don't see anything particularly magical about any of our organs.

 

If this thing is not a person, in what part of the process did the transhuman cease to be a person?



Also: were such a scenario possible, would you do it?


 

I think it would be, as I don't see anything particularly magical about any of our organs.

 

Also: were such a scenario possible, would you do it?

 

 

I think our physical bodies are replaced at the cellular level roughly every seven years, aren't they? I see no real difference in this scenario.

 

I would do it, but probably regret it.


Re the OP, overall I have to agree that it is still the same person, but there are so many specifics to take into account that it would depend on the particular case. I would do it in my old age, or if I were suffering from a degenerative mental illness.

 

As a side note: what if the brain replacement couldn't quite handle all your previous memory, and some of your personality algorithms suffered minor degradation? If you were 99% the same, a good night's drinking would probably make you "less yourself", but at 95%, 90%... would there be a point where the transhuman you tried to become was nothing more than an over-glorified Atari console?

 

Also, what if the copy was not just good but too good, and they made two of them? They are identical, so both would have equal right to your stuff. If it was accidental, could you sue the service provider because they just effectively cost you half of what you own?


If there is continuity, then it's the same person. And it isn't, in the same way that I am not the same person I was ten years ago, but I have to think of myself as the same in order to function and I have to be treated as the same in order for society to function. The idea of sameness is itself a convenient construct more than an objective reality.

 

A person created "from scratch" would be no more or less deserving of rights than "natural" humans, though how we deal with such beings as a society would necessarily have to be different. I would say its creator has a responsibility towards it similar to a parent towards a child.


Suppose our prosthesis technology advances to the point that any body part can be replaced with an artificial one. If one were to slowly replace organs and limbs until no original organs are left, is the remaining thing (essentially a robot) still that person? Is it even still a person? Should it retain the rights it had as a multicellular organic being?

 

The identity problem you describe here is similar to the Ship of Theseus.

 

My response would be that, depending on how true the replacements were to the original, yes, it would be the same person. If you slowly replaced pieces of someone's body with things that did not fulfill the same function, however (imagine replacing their neurons with stone), then no, it would not be the same person.

 

Note that large portions of our body are completely replaced over time as a natural part of the living process.


Suppose our prosthesis technology advances to the point that any body part can be replaced with an artificial one.

 

If one were to slowly replace organs and limbs until no original organs are left, is the remaining thing (essentially a robot) still that person?

 

Sure. I think we're reaching the point, technologically, where we will consider people who were formerly human and are now cognitive, thinking androids to be persons.

 

Is it even still a person? Should it retain the rights it had as a multicellular organic being?

 

I think it's still a person. However, I suspect such people will have fewer rights, perhaps a lesser ability to own possessions, property, and monetary goods. Otherwise, the person could live for 500 years and amass an enormous amount of wealth.

 

Would it be any different if the brain were replaced in chunks (part by part, with each part assuming its role prior to the removal of the organic counterpart, so that continuity is not disrupted)?

 

I don't think it would be too different. Nonetheless, I like the idea of being a cyborg more than an android. Nature has evolved for millions of years to give humans the brain they have, and I suspect keeping a decent, plastic portion of it will give a cyborg the ability to adapt where androids cannot. Keywords: neural Darwinism and epigenetics.

 

Were such a scenario possible, would you do it?

 

I think I wouldn't mind replacing parts of my visual cortex and other hippocampal areas. I think people could start replacing nerves with wires and certain areas with hardware. I am thinking these specific parts of the brain are not TOO drastically important for personhood, in the sense that they act simply as routing areas for information rather than as storage and recall.

 

Then again, I suspect my choice would more than likely depend on anecdotal stories from people who have undergone the process. If people complain about feeling "outside of their body" and the like, I could consider that these people no longer feel "sentient." As such, maybe they feel as though they are no longer "real" people, and the "human experience" is gone. These people no longer feel human, and they question their own existence as living beings.

 

I think the transition process from organic brain to hardware would progress in steps. I think that people will ultimately choose to be cyborgs rather than full-fledged androids, as that would give them the greater ability to remain human. Still, there may be people who choose to become full androids.

 

I think the real break point is where a person undergoes the progressive transition, and is constantly asked, "Do you feel like the same person?" If the person can constantly say, "Yes, I feel as though I am the same person," then I'm going to assume that a decent portion of the personhood is still intact.

 

I like to take into consideration various concepts of Buddhism that relate to the transference of personhood. According to various mystical texts, such as the Bardo Thodol, a person should be able to, in a sense, die and yet remain active within a reality in a different form through active training and transmission of personhood. I don't speak of these concepts too lightly, as these texts bring forth extremely dangerous knowledge which has been hidden for a long time. I don't know if it's even ethical to discuss such texts. Nonetheless, these texts were supposed to be found when it was right for them to be found.

 

The conversion process would be a scientific baptism.

 

There is the odd possibility that, during the transfer of a person to a machine, the "person" on the other side is no longer the same person. I think this is something that's brilliantly touched on in Ghost in the Shell: Motoko Kusanagi begins to doubt her own existence and believes the real her died long ago; as such, the memories and personality are hers, but the "existence," the person that was, is no longer there. She attaches greatly to memories, people, and things from before her conversion to an android body (I think she's fully android), and struggles internally with deciding whether or not she is truly a real person.

Edited by Genecks

To me it really depends on how close the copy is to being able to function like a human and relate to others. For example: if a copy ends up being significantly limited in its cognitive abilities but can still understand and relate to flesh-and-bone people and understand what makes them happy or sad, then it can still be considered a person. However, if the copy has near-perfect cognitive abilities but the only way it understands and relates to other people is to achieve its own goals and needs, then I have a hard time considering it to be more than an animal, and it could be as hazardous as the sharks from Deep Blue Sea or VIKI from I, Robot.


So how do we reproduce if we all end up as androids?

 

I suspect two full androids wouldn't be undergoing biological reproduction. Perhaps they could run a virtual simulation of what the developmental offspring of a child would be, if both parents had developmental proteins and gametic DNA preserved before their full conversions. It could be possible that people would choose to have clone children, too.

 

Then again, perhaps the full androids wouldn't want to reproduce. Maybe they would rather not have children. Supposedly, some animals that live a long time have only a few children, if not just one. Given that a full android could easily live beyond 100 years, there is the stark possibility that he/she would decide not to have children. Again, this would be quite a controversial point.


Given the following definitions, then, it would be debatable whether or not an android is indeed alive, especially in regard to point 7.

 

Since there is no unequivocal definition of life, the current understanding is descriptive, where life is a characteristic of organisms that exhibit all or most of the following phenomena:

 

1) Homeostasis: Regulation of the internal environment to maintain a constant state; for example, electrolyte concentration or sweating to reduce temperature.

 

2) Organization: Being structurally composed of one or more cells, which are the basic units of life.

 

3) Metabolism: Transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.

 

4) Growth: Maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.

 

5) Adaptation: The ability to change over a period of time in response to the environment. This ability is fundamental to the process of evolution and is determined by the organism's heredity as well as the composition of metabolized substances, and external factors present.

 

6) Response to stimuli: A response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion, for example, the leaves of a plant turning toward the sun (phototropism) and by chemotaxis.

 

7) Reproduction: The ability to produce new individual organisms, either asexually from a single parent organism, or sexually from two parent organisms.


Given the following definitions, then, it would be debatable whether or not an android is indeed alive, especially in regard to point 7...

I do not need to consider the definition of cellular life in order to consider something a person.

The original idea of applying the word "life" to an organism was to satisfy people's subjective and emotional experiences as humans.

 

I consider those criteria archaic and part of religious legacy.

Edited by Genecks

So how do we reproduce if we all end up as androids?

 

Most likely, two androids would plug into each other and combine their codebases into an entirely new codebase based on the original two. Then the new codebase would be uploaded into a new body, which may have been built using the combined blueprints of the original two androids.
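To make that a bit more concrete, here is a rough sketch of the sort of "codebase merge" being imagined: single-point crossover over each parent's parameters, plus a small mutation so the child isn't a pure splice. All the names here (crossover, mutation_rate, and so on) are invented purely for illustration; this is not a description of any real system.

[code]
import random

# Hypothetical sketch: two androids "combine their codebases" by splicing
# their parameter lists at a random crossover point, then applying a small
# random mutation. Purely illustrative.
def crossover(parent_a, parent_b, mutation_rate=0.01):
    assert len(parent_a) == len(parent_b)
    cut = random.randrange(1, len(parent_a))      # pick a crossover point
    child = parent_a[:cut] + parent_b[cut:]       # splice the two "codebases"
    # occasionally perturb a value so the offspring is not a pure recombination
    return [gene + random.gauss(0, 0.1) if random.random() < mutation_rate else gene
            for gene in child]

# Two parent "codebases" represented as flat numeric parameter vectors
parent_a = [random.random() for _ in range(10)]
parent_b = [random.random() for _ in range(10)]
child = crossover(parent_a, parent_b)             # the new codebase to upload
print(child)
[/code]

The same idea would presumably extend to the "combined blueprints" for the new body: take structural parameters from each parent and recombine them the same way.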


Perhaps you do not need those criteria to consider it a person, but it will need to be classified somehow. If it is not classified, then how would you differentiate it from a sufficiently advanced A.I.?

 

If both were to have sentience and sapience, had the same level of intelligence, and in fact the only difference was that one was entirely conceived by humans while the other, our cyborgs, is merely a vessel for a naturally occurring mind, would they both be people?

 

Will there be a time, thousands of years in the future, when we, Homo sapiens, are considered the common ancestor of silicon-brained androids and our cyborg offspring?


[math]person \neq human[/math]

 

Yes, we can have silicon people too. They'll think like people, act like people, and probably be far smarter and more virtuous than normal humans.

++

To me this is the only truth possible without going against my heritage and country.


Perhaps you do not need those criteria to consider it a person, but it will need to be classified somehow. If it is not classified, then how would you differentiate it from a sufficiently advanced A.I.?

 

I classify it on the grounds of neural-network transference. An A.I. may have a similar infrastructure, but it would not have originated from a human. I base the difference on a virtual neural network that can undergo biological functions, such as simulated cell-cell signalling and so forth. It would all be virtual, but it would be enough to sustain the person's transference. An A.I. would more than likely be started from a ground-zero state. Otherwise, you would have to say that you simply copied a person's existence to another device. On metaphysical and logical grounds, we could attempt to say that's not the same person, for no two things can be the same.
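If it helps, here is a toy sketch of the distinction being drawn between a transferred virtual neural network and a from-scratch A.I.: the transferred network would initialise its "cells" from measured biological state (thresholds, connections), while a fresh A.I. would start from arbitrary defaults. Everything here, including the VirtualNeuron class and its fields, is invented purely for illustration and describes no real transference scheme.

[code]
# Toy sketch of a "virtual neuron" with simulated cell-cell signalling.
# All names are hypothetical; this is an illustration, not a real scheme.
class VirtualNeuron:
    def __init__(self, threshold=1.0, potential=0.0):
        self.threshold = threshold   # in a transference, carried over from the original cell
        self.potential = potential
        self.targets = []            # downstream cells to signal

    def connect(self, other, weight=0.5):
        self.targets.append((other, weight))

    def receive(self, signal):
        self.potential += signal
        if self.potential >= self.threshold:
            self.fire()

    def fire(self):
        self.potential = 0.0         # reset after firing
        for cell, weight in self.targets:
            cell.receive(weight)     # simulated cell-cell signalling

# A transferred person's network would initialise these from measured state;
# a ground-zero A.I. would start them at arbitrary defaults.
a, b = VirtualNeuron(), VirtualNeuron(threshold=0.4)
a.connect(b, weight=0.6)
a.receive(1.2)   # pushes a past threshold, which signals b and makes b fire too
[/code]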

 

If both were to have sentience and sapience, had the same level of intelligence, and in fact the only difference was that one was entirely conceived by humans while the other, our cyborgs, is merely a vessel for a naturally occurring mind, would they both be people?

 

Nothing can have the same level of intelligence in a quantitative sense, perhaps not even in a qualitative sense: nothing is the same.

 

Will there be a time, thousands of years in the future, when we, Homo sapiens, are considered the common ancestor of silicon-brained androids and our cyborg offspring?

 

Yes, both can be considered people.

 

That's a possibility.

 

Assimilate! Resistance is futile!

 

Any other Qs? *rimshot*

Edited by Genecks

Going back to ydoaPs' initial question about whether a human being replaced with mechanical parts is still the same person, I would say yes, as long as the consciousness acts in the same way it did when it was biological.

 

Is it still human? No.

 

It may have "evolved" from humans, but aside from its mind everything is different. What you have created here is a being that is stronger, faster, hardier, lasts longer, more than likely has no need to eat or sleep, and is generally in every way superior to Homo sapiens.

 

They will be free from biological diseases and old age (at least as we would classify old age now), and when parts begin to wear down they could be replaced easily. Over the course of generations more and more humans would convert, and because of the low mortality rate in the new android species they would begin to outnumber us in time.

 

While their brains still hold the consciousness of a human, and considering that the androids are not just storage devices for a consciousness and can still learn, it's very likely they'll realise they're tougher and better than us.

 

That would lead to all sorts of problems. Economic: "people" that don't need to eat or sleep would make great workers, no?

 

Social: people that don't need to eat or sleep get all our jobs, and so are prejudiced against and looked down upon by society.

 

Historically, separate races seek independence, and I don't think too many of these androids would think differently; whatever race their original minds came from, they'll all have something in common.

 

Before I start rambling on too much, my conclusion is that they will still be the same person consciously, but in all other respects they are no longer members of the human race, due to the massive differences in their strengths and weaknesses compared to humans and in how they reproduce (if at all).


Fair comment, Leader Bee.

 

From an anthropological outlook, groups of people who consider themselves to be a "race" would want to have their own values and system of living. Also, you've brought up the point about the forever-working android.

 

As such, this is why I have commented that there would be laws preventing transhumans from holding various things, such as resources: otherwise, after 500 years, the transhuman would have a very large bank account.

 

It's curious what kind of laws would be created. I'm assuming that any transhuman would more than likely desire either to sit around and do nothing or to go do some exciting things, such as reach for the stars. If the android race were to come before space colonies, I suspect the androids would more than likely be game for making such technologies possible.

 

That statement has to be one of the greatest issues with the transhumanist movement: You're immortal, now what?

If I remember correctly, many statistical studies say that transhumanists are mostly atheists.

 

I'm not stating I have something against atheism. But I do think serious problems come with being a transhumanist and being an atheist at the same time. That would mean there is no God to ask, "Hey, why did you make this universe? Why does it seem like evil exists? Why did things unfold this way? What happened to my Sega Genesis Lion King video game back in the 1990s? Was it stolen? Is it still in the house? Also, who created you? and... Do you like Mudkips? :3" Yes, waste God's time by asking God trivial, annoying questions. I want my Sega Genesis game, though.

 

I think it's a serious issue when it comes to someone who doesn't want to consider a God. That doesn't mean the atheist can't figure out the beginning of the universe and the meaning of everything. It's still a goal. And, who knows, maybe some omnipotent figure would be at the other end, to the surprise of many. But if the atheistic android doesn't want to try to solve these cosmological issues, then I could only suspect the person would live many lives, try to be like Duncan MacLeod (Highlander), and be a drifter.

 

I think Theodore Kaczynski is a brilliant person. He did something that allowed him to be noticed for his views and philosophy. He did something very, very bad. In order to cause a revolution, you need to do something very, very bad or very, very good, thus forcing people to accept the change in reality. And, to understand my view of how to incite a revolutionary change in societal attitudes: his actions forced people to accept and consider ideas. In general, technology is bad, and people are perhaps better off not becoming technologically advanced, because if people were to become technologically advanced, this would lead to destruction, such as the destruction of society or of Nature. The way I like to sum up the views I have taken from Mr. Kaczynski is this: what we think is progress is actually self-destruction.

 

The power process

 

33. Human beings have a need (probably based in biology) for something that we will call the "power process." This is closely related to the need for power (which is widely recognized) but is not quite the same thing. The power process has four elements. The three most clear-cut of these we call goal, effort and attainment of goal. (Everyone needs to have goals whose attainment requires effort, and needs to succeed in attaining at least some of his goals.) The fourth element is more difficult to define and may not be necessary for everyone. We call it autonomy and will discuss it later (paragraphs 42-44).

 

34. Consider the hypothetical case of a man who can have anything he wants just by wishing for it. Such a man has power, but he will develop serious psychological problems. At first he will have a lot of fun, but by and by he will become acutely bored and demoralized. Eventually he may become clinically depressed. History shows that leisured aristocracies tend to become decadent. This is not true of fighting aristocracies that have to struggle to maintain their power. But leisured, secure aristocracies that have no need to exert themselves usually become bored, hedonistic and demoralized, even though they have power. This shows that power is not enough. One must have goals toward which to exercise one's power.

 

35. Everyone has goals; if nothing else, to obtain the physical necessities of life: food, water and whatever clothing and shelter are made necessary by the climate. But the leisured aristocrat obtains these things without effort. Hence his boredom and demoralization.

 

36. Nonattainment of important goals results in death if the goals are physical necessities, and in frustration if nonattainment of the goals is compatible with survival. Consistent failure to attain goals throughout life results in defeatism, low self-esteem or depression.

 

37. Thus, in order to avoid serious psychological problems, a human being needs goals whose attainment requires effort, and he must have a reasonable rate of success in attaining his goals.

 

source: Industrial Society and Its Future

 

Can an android appreciate Nature? Accept things at face value and not question them? Learn to live with Nature rather than interrogate it?

 

In the discussion of the death of people and androids...

 

I'm not here to say that the death of an android can't occur. Consider that it's attempting to build a plasma shield (window) for a bedroom in a space apartment, and somehow a steel beam falls and shatters the android's neural-network components. Then that's one less android in the world.

 

Also consider that people commit suicide; Emile Durkheim has a few works about this. Consider, too, that individuals are bound either to get killed or to kill themselves. In terms of old age, unless there were a way to constantly repair the mechanics, the android would break down. It was originally that thesis which made me consider that an android body was not a worthwhile thing; as such, I became more of a believer in the idea of keeping a physical body, along with the memory-transference idea. There would be a large hesitation among people to replace their body with mechanics, and more people would support tissue-regenesis techniques rather than a mechanical system.

Edited by Genecks

To me the major side effect of creating androids would be how the world reacts to them. I doubt that they could walk around living their daily lives like any other human, because more extremist groups, such as the Westboro "Baptist" Church, would claim that the android has no soul, is an abomination, and must be destroyed. However, given their ability to exist in an environment without an atmosphere, androids would be well suited for space travel, and they could colonize the moon easily, because all they need is a power source, and maybe shelter from the sun when it's at its brightest.

