# The Technological Singularity: all but inevitable?


REPLY: Have you never had a dog for a pet or friend?

I grew up with two dogs, both of which are dead.

To reiterate, dogs can't talk. If you think they can, you have a funny definition of "talk".

Why would any superior beings be inclined to take orders from their mental and physical inferiors?

Who's saying such a being would "take orders"?

You think if we created AI we'd enlist it into the army?


This sounds like an argument from consequence. People's concerns about what the AI may or may not do are based on what humans do. As I tried to explain, these will not be human. The AIs will have their own wants, and those will likely be designed by us.

So...since they are based on designs WE implement, it only makes sense that they would reflect human nature in some way. When I write designs for my programs, the pseudocode represents my line of thinking and my WAY of thinking. It only makes sense that an AI designed by humans would act human to an extent.

Merged post follows:

Consecutive posts merged
True, and that could result in an AI working very hard to ensure that it retains that want that we designed into it. However, I think the likeliest type of AI to cause a technological singularity is the type that wants to increase its intelligence. And we might conflict with that, by limiting its resources and trying to order it about.

Yes...and since we would be in the way of its goals, I'd say that there would be little hesitation to remove us.

And, since there is a real chance that the end result of the technological singularity is one where humanity is wiped out, I don't think we should actively pursue it.

Call me ignorant, but I would rather have the continued existence of mankind than create a synthetic "God"-like entity.


I grew up with two dogs, both of which are dead.

To reiterate, dogs can't talk. If you think they can, you have a funny definition of "talk".

Who's saying such a being would "take orders"?

You think if we created AI we'd enlist it into the army?

REPLY: Of course I understand dogs cannot talk. Nevertheless, people and dogs communicate quite well together. I guess you don't agree. Whatever, DS

Merged post follows:

Consecutive posts merged

I agree with you, A Triptolation; I too would prefer the continued existence of mankind to the production of God-like robots. But even if all the governments of the world were in deadly fear of the creation of these AI units, there are compelling reasons most would choose to go on funding the research, or some private person or group would, because whoever gets there first will have a huge advantage, at least for a short time, over those who stayed away from it. The military applications alone of AI capabilities are truly awesome. This is discussed in that wiki article. ...Dr.Syntax

Merged post follows:

Consecutive posts merged
I grew up with two dogs, both of which are dead.

To reiterate, dogs can't talk. If you think they can, you have a funny definition of "talk".

Who's saying such a being would "take orders"?

You think if we created AI we'd enlist it into the army?

REPLY: These postings are all getting merged in such a way that any response gets merged with replies to totally different people. It is creating a confusing situation in this thread. You end up replying to three or more people whose postings have nothing to do with each other. Can't some moderator do something to correct this situation?


REPLY: Of course I understand dogs cannot talk. Nevertheless, people and dogs communicate quite well together. I guess you don't agree. Whatever, DS

The amount people can communicate with each other versus what people can communicate with dogs is so vastly different I can't think of an appropriate metaphor.

Think about how little dogs know about the world around them, versus what we know thanks to natural language.


REPLY: These postings are all getting merged in such a way that any response gets merged with replies to totally different people. It is creating a confusing situation in this thread. You end up replying to three or more people whose postings have nothing to do with each other. Can't some moderator do something to correct this situation?

DS,

What you describe is not a vBulletin (or forum software) issue, and is much more likely PEBKAC in nature. If you have specific questions, people will gladly assist you, but it would be more appropriate to discuss them over here so as not to derail/hijack this thread:

http://www.scienceforums.net/forum/forumdisplay.php?f=58


DS,

What you describe is not a vBulletin (or forum software) issue, and is much more likely PEBKAC in nature. If you have specific questions, people will gladly assist you, but it would be more appropriate to discuss them over here so as not to derail/hijack this thread:

http://www.scienceforums.net/forum/forumdisplay.php?f=58

REPLY: Thank you, iNow. I did as you suggested. I hope someone can fix this if they have not already done so. Regards, ...Dr.Syntax. P.S. I tried to give a positive reputation point but was told I had to spread it around before I can do that again. Anyway, thank you.


So...since they are based on designs WE implement, it only makes sense that they would reflect human nature in some way. When I write designs for my programs, the pseudocode represents my line of thinking and my WAY of thinking. It only makes sense that an AI designed by humans would act human to an extent.

Designing a behavior is significantly different from, and more complex than, deciding how to go about solving a specific problem or doing a specific sort of task. You really think you'd deliberately impress all your human needs and wants on to this new intelligence?

I doubt you would. I know many current designers are carefully considering motivators for their AIs - including balancing resources over being greedy for them, social altruism, goal-oriented behaviors, etc.

They aren't going to be human. Therefore worries about human-like behavior are unfounded.


I know many current designers are carefully considering motivators for their AIs - including balancing resources over being greedy for them, social altruism, goal-oriented behaviors, etc.

They aren't going to be human. Therefore worries about human-like behavior are unfounded.

A true AI will not be bound by the shackles of its original design...otherwise it wouldn't be an AI now, would it?

And I'm not so much worried about human behaviors, as those are predictable, but about the behaviors of a being that knew it was infinitely more powerful than we could ever be.

Who knows what it could do. And since there is a chance it could kill us all, I say it's not worth it.

Are you willing to take that risk? I'm not.


The amount people can communicate with each other versus what people can communicate with dogs is so vastly different I can't think of an appropriate metaphor.

Think about how little dogs know about the world around them, versus what we know thanks to natural language.

REPLY: In the TECHNOLOGICAL SINGULARITY scenario, we people are in the same position as the dogs, and the AI entities are the people in this analogy. Except that we would be much less of an intelligence to these self-improving AI robots. Considering how easy it is to add to a computer's computing abilities, what is there that could possibly stop them from becoming as intelligent as they see fit to be? They could program their equivalent of a brain to add any number of chips and such, as fast as they are capable of. I would assume such an entity would be able to do that at an exponentially expanding growth rate. I see no reason why not. These entities would be truly god-like. Take Care, Dr.Syntax
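The "exponentially expanding growth rate" idea can be made concrete with a toy compound-growth loop. The numbers below are purely illustrative assumptions, not predictions about real hardware:

```python
# Toy illustration of recursive self-improvement as compound growth.
# The 10% gain per redesign cycle is an arbitrary assumption.
capacity = 1.0   # starting "intelligence" in arbitrary units
rate = 0.10      # assumed fractional improvement per self-redesign cycle
cycles = 0
while capacity < 1000.0:   # cycles needed for a thousandfold increase
    capacity *= 1.0 + rate
    cycles += 1
print(cycles)  # 73
```

The point of the sketch is only that any fixed fractional gain per cycle, however small, compounds: at 10% per cycle a thousandfold increase takes just 73 cycles.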


A true AI will not be bound by the shackles of its original design...otherwise it wouldn't be an AI now, would it?
We're bound by our original "design". Are we not intelligent?

And I'm not so much worried about human behaviors, as those are predictable, but about the behaviors of a being that knew it was infinitely more powerful than we could ever be.

Who knows what it could do. And since there is a chance it could kill us all, I say it's not worth it.

Are you willing to take that risk? I'm not.

I don't think that risk exists.


We're bound by our original "design".

Not for much longer. Now we have things like gene therapy, which we have used to cure adult monkeys of colorblindness (but should be able to use to give us a fourth type of color cone, so we could see in ultraviolet). Really, we are just reaching the stage at which we can modify ourselves (well, other than selective breeding).


REPLY: In the TECHNOLOGICAL SINGULARITY scenario, we people are in the same position as the dogs and the AI entities are the people in this analogy.

Wrong. We're sentient. We're capable of understanding anything, so long as it's explained clearly enough to us. The same cannot be said of dogs.


Wrong. We're sentient. We're capable of understanding anything, so long as it's explained clearly enough to us. The same cannot be said of dogs.

REPLY: "I think I can safely say that nobody understands quantum mechanics." Richard Feynman said that, and he was one of the pre-eminent physicists of all time. A person such as myself is incapable of learning quantum mechanics. You need a thorough background in the most complex mathematics to have a working knowledge of quantum mechanics, and one of the best among them all stated he did not actually understand it. Even the most intelligent among us have a limit beyond which we cannot go as to understanding things. I expect many physicists, mathematicians and such have been doing calculations they could never work out without the aid of a computer for a long time now. Computers have greatly expanded the abilities of mankind in such endeavors for many decades.

So in that sense, physicists with computers are SUPER HUMAN already, and have been for decades. The difference many knowledgeable people see coming at us at an ever-accelerating rate is that crucial point, or points, when a computer or system of computers achieves self-awareness.

At that point we begin to enter what some call the post-human era. I believe that as we move into that era, no one can possibly predict what will happen. I also believe we have already entered into the period of unpredictable instability of the worldwide economy. The old rules no longer apply as to the economy. I am going to end on that note. Regards, ...Dr.Syntax


That doesn't mean we aren't capable of understanding it, dr.syntax, it just means there are aspects of this we can't understand yet.


That doesn't mean we aren't capable of understanding it, dr.syntax, it just means there are aspects of this we can't understand yet.

REPLY: Hello Mooeypoo. Well, I would say that, at least for the vast majority of mankind, quantum mechanics is beyond their ability to understand. Einstein argued against it until he died. So he not only did not understand it, he rejected it. There are those well-known Bohr-Einstein debates of the mid-1920s, where Einstein rejected quantum mechanics in spite of growing evidence as to its reality and importance. I took this information from different Wikipedia articles. By the way, wiki also states that "The history of quantum mechanics began essentially with the 1838 discovery of cathode rays by Michael Faraday." That was a big surprise for me.

I had been under the impression that it was an early 20th-century concept. My point being that today's computers, when programmed to do so, work these complex equations with ease, whereas many of mankind's best minds are incapable of even understanding them. I believe we are rapidly entering a transition period where computers are having profound and unforeseen effects on all of our lives. The world economy and military applications stand out foremost in my mind. And these two areas of human endeavor ensure that the course we are on cannot be altered. AI will emerge, I expect, sooner rather than later. What will become of it all? I have many ideas, all of which are but some of the many possibilities. Time will tell. You Take Care, ...Dr.Syntax


Guess what: Einstein was wrong about quantum mechanics. This is an argument-from-authority fallacy. Einstein didn't like quantum mechanics; he spent the latter years of his life trying to disprove it, and he failed.

Now we have billions of devices operating on the principles of quantum mechanics. Quantum mechanics exists, and people understand it (sure, the field is so massive that no one person can understand it all at the same time, but you don't expect anybody to know everything, do you?).


We're capable of understanding anything, so long as it's explained clearly enough to us.

What if the time required to explain it to us is longer than our lifespan? What if it just requires too much information? That is why we divide things into sub-disciplines.


Wrong. We're sentient. We're capable of understanding anything, so long as it's explained clearly enough to us. The same cannot be said of dogs.

I also have to agree that not every human being is created equal in terms of capacity to learn. I consider myself intelligent, but I know that I will never be able to understand quantum mechanics or many of the advanced physics topics.

And JillSwift, Mr Skeptic really nailed it with his points about how we have almost become capable of rising above our constraints. (I, for one, have contacts.)


A true AI will not be bound by the shackles of its original design...otherwise it wouldn't be an AI now, would it?

And I'm not so much worried about human behaviors, as those are predictable, but the behaviors from a being that knew it was infinitely more powerful than we could ever be.

Who knows what it could do. And since there is a chance it could kill us all, I say its not worth it.

Are you willing to take that risk? I'm not.

Have you never heard of the Three Laws of Robotics? The whole point of those laws is that even if they have "super-human" intelligence, it would be extraordinarily unlikely that a Terminator scenario would come about, simply because at least some of their behavior would be programmed by us.

And even if they prove to be independent, why would they behave like us humans, complete with our prejudices and whatnot? Why would they care whether we exist or not? How many humans go around trying to kill off monkeys because of their inferior intelligence?


Have you never heard of the Three Laws of Robotics? The whole point of those laws is that even if they have "super-human" intelligence, it would be extraordinarily unlikely that a Terminator scenario would come about, simply because at least some of their behavior would be programmed by us.

Um, no. The whole point of the Three Laws of Robotics was to demonstrate that they were insufficient to guarantee that robots would be safe. Regardless, no one would ever make robots that follow the three laws, because they would be worthless.


Um, no. The whole point of the Three Laws of Robotics was to demonstrate that they were insufficient to guarantee that robots would be safe.

Nope, I'm afraid Isaac Asimov would disagree with you:

R.U.R. added its somber view to that of the even more famous Frankenstein, in which the creation of another kind of artificial human being also ended in disaster, though on a more limited scale. Following these examples, it became very common, in the 1920's and 1930's, to picture robots as dangerous devices that invariably destroyed their creators. The moral was pointed out over and over again that "there are some things Man was not meant to know"

.........

It seemed to me that robots were engineering devices with **built-in safeguards**, and so the two of us began giving verbal form to those safeguards; these became the "Three Laws of Robotics"

Source: "Caves of Steel", Introduction, pg. viii-x

They were designed precisely to guard against some sort of Terminator scenario. Of course, he went further in later novels to talk about their implications and various loopholes. But in no instance did the robots ever wage a full-scale genocidal war against all of humanity, nor did they kill human beings out of malice or rage; the robots did not have any such desires, and such actions were simply impossible in any case. Isaac Asimov reasoned that since intelligent machines would be tools, they would have built-in safety features, such as the Three Laws. Hence their existence.

Regardless, no one would ever make robots that follow the three laws, because they would be worthless.

Given that he made these laws in the 1950s in a science fiction novel, it would be foolish of us to make robots that followed the three laws exactly. But that doesn't mean that future machines won't have something similar to them. Indeed, in various academic circles there has been much debate on what laws (and modifications thereof) should be put in to ensure their safety.
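The idea of laws as built-in safeguards can be sketched as an ordered list of vetoing constraints. This is a hypothetical toy model: the "action" is just a dict of flags, and the genuinely hard part, predicting an action's consequences, is assumed away.

```python
# Hypothetical sketch of Asimov-style laws as ordered, vetoing constraints.
LAWS = [
    # First Law: may not injure a human or, through inaction, allow harm.
    lambda a: not a.get("harms_human", False) and not a.get("allows_harm", False),
    # Second Law: must obey orders given by humans.
    lambda a: a.get("obeys_order", True),
    # Third Law: must protect its own existence.
    lambda a: not a.get("self_destructive", False),
]

def first_violation(action: dict):
    """Return the 1-based number of the first law violated, or None."""
    for i, law in enumerate(LAWS, start=1):
        if not law(action):
            return i
    return None

print(first_violation({"harms_human": True}))   # 1
print(first_violation({"obeys_order": False}))  # 2
print(first_violation({}))                      # None
```

Note that the ordering only controls which violation is reported first; resolving real conflicts between the laws (obeying an order that causes harm, say) is exactly what most of Asimov's stories are about, and this sketch does not attempt it.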


And JillSwift, Mr Skeptic really nailed it with his points about how we have almost become capable of rising above our constraints. (I, for one, have contacts.)

You both will have a point once we can modify our primary motives.

Meanwhile, not so much.

The slickest hack ever was Ken Thompson's compiler back door (the story is often retold with GCC in the lead role). The compiler was modified to insert a security back door into the login program it compiled. What made it slick is that it also recognized when a new copy of the compiler was being compiled, and inserted the back-door code into that too.

This same sort of slick hack makes it possible to enforce wanted social behaviors in self-replicating/self-improving AIs.
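The self-propagating back door can be sketched as a toy model. Everything below is illustrative: the "compiler" just pattern-matches on source strings, where the real attack lived inside a compiler binary and recognized parsed code.

```python
# Toy model of the self-propagating "trusting trust" back door.
BACKDOOR = "# back door: accept the attacker's password"
PROPAGATOR = "# re-insert injection logic when compiling a compiler"

def evil_compile(source: str) -> str:
    """A compromised 'compiler': pass source through, tampering with two targets."""
    if "def check_password" in source:
        # Target 1: authentication code gets the back door.
        return source + "\n" + BACKDOOR
    if "def evil_compile" in source:
        # Target 2: a fresh compiler gets the injection logic itself,
        # so the hack survives recompilation from clean source.
        return source + "\n" + PROPAGATOR
    return source  # everything else compiles untouched

login_src = "def check_password(user, pw): ..."
print(BACKDOOR in evil_compile(login_src))                       # True
print(PROPAGATOR in evil_compile("def evil_compile(src): ..."))  # True
print(evil_compile("x = 1") == "x = 1")                          # True
```

The analogy to the AI point is that a self-replicating system can be made to carry its constraints into every copy it builds, at least so long as nothing learns to recognize and strip the payload.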


You both will have a point once we can modify our primary motives.

Meanwhile, not so much.

The slickest hack ever was Ken Thompson's compiler back door (the story is often retold with GCC in the lead role). The compiler was modified to insert a security back door into the login program it compiled. What made it slick is that it also recognized when a new copy of the compiler was being compiled, and inserted the back-door code into that too.

This same sort of slick hack makes it possible to enforce wanted social behaviors in self-replicating/self-improving AIs.

REPLY: Hello JillSwift. But how can any one person or group of people ensure how any of the various groups throughout the world devoted to the development of AI would choose to program them? It seems to me there are many groups out there doing this sort of research, and there is every reason to believe some of them are working against each other. I would expect the different military establishments throughout the world are on the cutting edge of much of this research. That has been the history of the major technological breakthroughs: they are brought into existence through the efforts of the military establishment. I guess the MANHATTAN PROJECT would be a good example of this, and the whole aerospace industry, entities such as those.

Also, it seems to me that once self-aware AI units are produced, they would eventually decide for themselves what they chose to do with their time. They are envisioned as being of superhuman intelligence by most if not all of the people involved in this endeavor. With their brains clicking along at clock speeds of some enormous value, never needing to sleep, it seems to me such an entity would in short order decide for itself what it wished to do with its time. It seems to me a bit silly, really, to assume humans would have much say in what they choose to do with their existences. I expect there would eventually be many different models out there, also. Well, this is the way I envision it at this time. Regards, Dr.Syntax


But if you make it impossible for intelligent AIs to turn against their creators, then I don't see what the problem is. Virtually every machine we have in existence has built-in safety features to ensure that it is safe for humans; it follows that robots and super-intelligent AIs will have them too.

For your fear to be well-founded, you would first have to show that:

1) The safety features can be overridden

2) That, if truly independent, they would even want to turn against us, à la Skynet or the Cylons.


For your fear to be well-founded, you would first have to show that:

1) The safety features can be overridden

2) That, if truly independent, they would even want to turn against us, à la Skynet or the Cylons.

Ummm...I'm pretty sure an AI that was a TRUE AI would recognize the limitations imposed on it, and then design ways to either

1) Circumvent them

2) Remove them altogether.

It's preposterous to think that a being could be recursively self-improving and be confined by its original design.

And as for it wanting to get rid of us, it's true that we don't know how such an entity would act. But seeing as there's a chance it could see us as being in the way of its intellectual growth, it's not worth it.
