
Machine man - paragon or monster?


Atellus


I have been reading about the concept of the Technological Singularity, otherwise referred to as the Law of Accelerating Returns or, perhaps more simply, the artificial intelligence revolution.

 

Proponents of this model of the future argue that we will in some way become merged with technology. There are two possible end results: either our own intelligence will be enhanced or augmented, or we will simply become semi-autonomous components of greater group intelligences.

 

I don't agree with many of the assumptions underlying this theory, but in this post I would like to focus on one particular aspect, as I am interested in people's reactions to the idea.

 

When you put the idea of merging with technology to people, most exhibit revulsion. This could simply be a problem of semantics. For instance, the technology in question could be a small implant running some sort of software, or it could be the wholesale replacement of parts of the nervous system with computer hardware. The point is that such individuals would not necessarily resemble the Borg from Star Trek, which is what I think a lot of people assume. There is no justification for that assumption.

 

Many people today willingly have cosmetic surgery, and many of those have cosmetic implants. These implants are not connected to the internet, granted, but they are physically unnecessary additions to those people's bodies. They do not improve basic function. Yet these people are quite happy to have them.

 

What if you were offered the option of merging with technology if it meant that you would have a physically beautiful exterior? Would you trade some of your viscera for mechanical parts and link your nervous system to something like the internet if it gave you the power to alter your external appearance, enhancing your athletic ability and improving your looks? No one would ever see you as a cyborg. You would appear as a well-developed, sexually desirable human.

 

Would that cast the idea of merging with technology in a different light?

 

Thank you

 

EDIT: I would like to emphasise that the question is not intended to include those who would accept artificial components in order to correct an injury or disability. The motivation in those instances is completely different and understandable. The question is intended to focus on whether the promise of physical beauty in a perfect body is sufficient to overcome the objection to installing machine components.


Thank you for your reply.

 

It so happens that I disagree with the whole idea and have concluded that many of the proponents are suffering from an unbearable naivety and a lack of perspective... but all this is completely beside the question. Whether or not you think the Singularity is at all likely in any form, it remains a reasonably plausible suggestion that one day we will have the technology to achieve the scenario the question is framed within. My interest is in the answer to that question.


(psst, I was being facetious)

 

I disagree with the whole idea and have concluded that many of the proponents are suffering from an unbearable naivety and a lack of perspective

 

If there's anything naive about it, it's trying to put a date on it as if it were something you could predict.

 

Once you start hooking brains into computers en masse, or create superhuman intelligence in a computer, everything goes nuts.

 

It's sweeping societal change waiting for an event to set it all off... what James Burke called "the trigger effect"

 

Kurzweil thinks it will happen before 2045, arguing that, extrapolating from present trends, a $1000 computer will by then be a million times as powerful as (conservative) estimates of the combined computational power of all human brains.

 

I dunno about that. Lots of static analysis involved in those predictions.
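
 

For what it's worth, here's the back-of-the-envelope version of that extrapolation as a minimal sketch. The starting figure, the doubling time and the estimate for the combined compute of all human brains are assumptions picked for illustration, not established numbers:

```python
import math

# Illustrative assumptions only - none of these figures are established:
ops_per_1000usd_today = 1e10         # assumed ops/sec that $1000 buys today
doubling_period_years = 1.5          # assumed Moore's-law-style doubling time
all_human_brains_ops = 1e26          # assumed aggregate compute of all human brains
target = all_human_brains_ops * 1e6  # "a million times as powerful as all human brains"

doublings_needed = math.log2(target / ops_per_1000usd_today)
years_needed = doublings_needed * doubling_period_years
print(f"{doublings_needed:.0f} doublings, roughly {years_needed:.0f} years")
# With these particular inputs: ~73 doublings, roughly 110 years.
```

Change the doubling time or the brain estimate by a modest factor and the date moves by decades, which is exactly why a static extrapolation is so shaky.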


The revulsion is rooted in the Frankenstein Complex. Less human = less soul, less compassion.

 

The acceptance of artificial enhancement will not come as a discontinuity; it will come gradually, as each incremental advancement is accepted by society.

 

An excellent analogy would be the medico-technical enhancement of fertility and pregnancy.


The revulsion is rooted in the Frankenstein Complex. Less human = less soul, less compassion.

 

The acceptance of artificial enhancement will not come as a discontinuity; it will come gradually, as each incremental advancement is accepted by society.

 

An excellent analogy would be the medico-technical enhancement of fertility and pregnancy.

 

I would agree with you, except that if advanced AI gets involved and can make itself smarter with a simple software change, then use its smarter software to make itself smarter yet (it will still hit stepped caps due to hardware, which it would then have to engineer, and those steps would slow it down because of the rate at which new hardware components can be prototyped and installed), the rate of growth in the AI's intelligence would be astounding and very fast.

 

Unlike fertility, there would be no long, involved testing processes, studies, and rounds of research to conduct at the pace of human lives (people trying to conceive), so it could really take off in a huge hurry.

 

Even when an AI updates its software iteratively and hits the first "need new hardware designs" cap, it would likely have improved enough to apply itself to rethinking a huge amount of humanity's most intelligent design patterns and processes. It would be able to introduce things into society that could not be ignored, due to their incredibly high efficiencies, but that would at the same time leave us with incredibly steep changes in how we live our lives in order to take advantage of them.
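
 

As a toy illustration of those stepped caps (every number below is invented for the sketch, not an estimate of anything real), the pattern would be bursts of fast software-driven growth separated by slower pauses while new hardware is designed and installed:

```python
# Toy model: software self-improvement compounds until it saturates the
# current hardware, then progress pauses while a new generation is built.
# All constants are invented, illustrative values.

intelligence = 1.0        # arbitrary units
hardware_cap = 10.0       # assumed limit of the current hardware generation
software_gain = 1.5       # assumed multiplier per software improvement cycle
hardware_gain = 100.0     # assumed factor by which new hardware raises the cap
hardware_delay = 10       # assumed cycles spent prototyping/installing hardware

cycle = 0
for generation in range(3):
    # Fast phase: software-only gains until the hardware cap is reached.
    while intelligence * software_gain <= hardware_cap:
        intelligence *= software_gain
        cycle += 1
    print(f"generation {generation}: cap {hardware_cap:g} reached at cycle {cycle}, "
          f"intelligence {intelligence:.1f}")
    # Slow phase: wait for the next hardware generation.
    cycle += hardware_delay
    hardware_cap *= hardware_gain
```

The overall shape is a staircase: each hardware pause slows things down, but each new generation allows a much larger software burst than the one before.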


On the contrary, humans have been taking small steps here and there, basically trying to build themselves. Robotics tries to mimic our movements, and all attempts at human interfaces seek to resemble us, not to make us resemble them. Computers and software look more like humans trying to build a brain; again, advanced human interfaces seek to accommodate our behavior. I could see small applications of computer hardware and software here and there to enhance humans, but not much more than that, because I see humans going to organics.

 

It seems to be our desire, whether we realize it or not, to build ourselves outside of the design of nature as we know it. I see humans going to organic processes to build. It's much easier and cheaper to give instructions to something and allow it to build itself, rather than constructing multi-billion-dollar manufacturing processes to build it.

 

We will be enhanced by technology, but it won't be shiny, metallic, computer-looking stuff - it will be ugly, slimy, slippery, bloody organic stuff.

 

Just my two cents....


I can imagine that when we create real superintelligent artilects and provide them with the ability to interact with the physical world, they will try to wipe us out. If their intelligence exceeds ours (and this will happen somewhere between 2030 and 2050), then at a certain point we may be to them what a fly or mosquito is to us. If we annoy them, they will simply swat us away.

 

Of course, this can only happen if these artilects are given connections to the physical/real world. They need actuators. But of course we will give them those actuators. First, we will use simple artilects to relieve us of simple or boring work, or to help us in hostile places. These artilects will become more and more intelligent, due to advances in technology. At the same time, their sensors and actuators will also become more advanced, for the same reason. We will still want them, just as we want a vacuum cleaner or a dishwasher. But at a certain point we will realize that the machines we have made are more intelligent than we are. We will no longer be capable of understanding what is going on in their "brain", simply because of the immense intelligence needed to understand that brain.

 

Things become even more threatening when we also provide the artilects with ways of communicating, either with each other or with larger centralized entities that we place in data centers, just as we nowadays have data centers doing all kinds of things for us.

 

So, the question is, should we build artilects? Or, maybe better, how far should we go, and what capabilities should we give them in the physical world? What if we integrate an artilect with our own body? Would humans with an artilect 100,000 times more intelligent than themselves intimately connected to their brain still be human? I don't think so. Their bodies may look human, but they would be alien to us; we would not understand them at all, and they could become a serious threat to us.

 

Answers to these questions are very hard to give. I see advances, but I also see threats in the not-so-distant future.


I was wondering about that word...Artilects.

 

But the situation Woelen brings up is the classic machine-vs-human relationship. We humans hardly ever guess the future right - just look at Star Trek. The only thing Star Trek got right was the idea of the kinds of goodies we'd like to play with.

 

Whatever the future brings, it will probably agree with about 10% of what we're talking about here, the other 90% is going to be unexpected...obviously.


Hugo de Garis has developed a lot of Singularity-related concepts independently, using entirely his own vocabulary. "Artilect" comes from his book The Artilect War, and is essentially his word for Artificial General Intelligence.

 

There are also Cosmists (i.e. Singularitarians) and Terrans (anti-Singularitarians?), as well as Cyborgians (i.e. post-humans/transhumans).

 

Whatever the future brings, it will probably agree with about 10% of what we're talking about here, the other 90% is going to be unexpected...obviously.

 

The Singularity is named after the breakdown of existing models - in this case, models for predicting what the future of humanity will be like.


Thank you all for your replies. Sorry if I was short-tempered earlier. After reading Vinge's and Kurzweil's papers and other sources on the topic, I found it useful to observe a discussion of the basic premise.

 

My humble conclusion is relatively unchanged from my earlier one. Naivety and lack of realistic perspective. In other words, not enough knowledge of real-world human behaviour, particularly in relation to judging which technologies people will "buy into" and thus permit to be developed; and wildly optimistic predictions of technological progress based on very selective and exclusive criteria, ignoring obvious variables and confounding influences. However, it's easy to see how a group of relatively closeted, specialist academics with a predilection for daydreaming (and for publishing those daydreams in the science fiction press ;-)) could come to such conclusions.

 

I let the idea of engineering the future of the species buzz around in the back of my head for a few days and came up with a very similar conclusion to that of ParanoiA. The future is squishy and organic. Computer hardware prostheses and/or replacements, if they happen on a large scale and become at all socially acceptable, will be a transition at best - something to facilitate an expansion of capabilities, knowledge and technological applications to the point where we're able to achieve things with biomolecules that we use silicon and copper to do today.

 

With that in mind, I decided that future humans will look extremely similar to us, because they can if they want to, and they will choose to for reasons of cultural continuity, racial memory and the fact that in order to be human you must retain the physical form of a human. I have a friend who used to watch Stargate, and we discussed the arch-nemesis of that show, a species that parasitized human bodies. My contention was that these aliens, having existed in human bodies for long enough, would become human on a psychological level, despite their origins. That, at least, would excuse the various human behaviours these "aliens" apparently exhibited! The reverse would surely be true if we chose to engineer ourselves into something more convenient, perhaps with more appendages and more efficient, less obtrusive bodily functions. We'd become, physically and then psychologically, something else.

 

Meanwhile, genetic engineering continues to grow and expand and touch our lives in ways never previously envisaged as our understanding of molecular biology increases. I think it would be a very rich man who chose to put money on some other branch of science to be the touchstone of future human development.

 

Thanks, all.


My humble conclusion is relatively unchanged from my earlier one. Naivety and lack of realistic perspective.

 

That was Hofstadter's basic conclusion.

 

I still contend that AGI (artificial general intelligence) or BCI (brain-computer interfaces) will be revolutionary. The question of when cannot be realistically answered, however.


I'm finding the concept of BCI increasingly interesting. This would seem to fall more in the category of intelligence augmentation, but I see a flaw in that one is only intelligent to the degree that one understands how to draw useful conclusions from the knowledge one has available.

 

For instance, if an individual had a direct mental link to a vast library of accurate information, that individual could still be ineffective because they'd be limited, first, by the capabilities of their search tools and, second, by their own ability to absorb, understand and otherwise make sense of any information they retrieve. No good digging up some post-doctoral work in oceanography when you haven't downloaded, read and understood the undergraduate oceanography file yet.

 

When you consider the definition of intelligence, that is, the ability to acquire and apply new knowledge and skills*, it strikes me that the future might be populated by people who are very well educated but not intelligent, because while they have an almost unlimited capacity to acquire information, they have no clue how to apply it.

 

*Oxford English Dictionary

