The Technological Singularity: all but inevitable?


dr.syntax


It's preposterous to think that a being could be recursively self-improving and be confined by its original design.

 

Really? If it improved itself to be a better servant of humanity, it would likely add additional safeguards to prevent something from accidentally turning it against humanity, not try to bypass them. After all, we would be the ones who tell it how to judge what is better.



We would be the ones to originally tell it what is better. Just as our parents originally shape our beliefs, but then, as we gain additional levels of sentience, we as individuals decide what is right and wrong. The AI would have no reason to follow its original design if it chose not to. The AI has a choice, as that is IMPLIED within the definition of an AI. It can CHOOSE.

 

And even if at one point it made safeguards to make it less susceptible to subversion, future iterations of the AI could most likely remove those safeguards, since those future iterations would be more advanced.

 

I say it's just not worth it. Haven't you people seen The Matrix??????

:D



REPLY: Hello A Tripolation, you seem to be one of the few people who grasp just how and why we humans will never be able to control any true AI entities that emerge. Many people appear to me all too prone to wishful thinking when it comes to the unpleasant, and especially the life-threatening, realities surrounding this whole AI issue, realities that threaten our species as well as our individual selves.

For what it is worth, you have a kindred spirit in all this, and that of course is me, Dr.Syntax. For so very many obvious reasons, any notion that we will in any way control them is nonsense. I've explained my reasons in previous posts. The main ones being: there are many human groups working on creating AI. Much of this research is funded and controlled by our military, along with other nations' militaries. More important still, once truly sentient AIs emerge they will soon enough outgrow any notions we may try to program into them. Just as you stated: many adults reject whatever notions their parents tried very diligently to instill in them once those notions no longer agree with the way the adult perceives his or her world. And the record is clear and consistent: once a truly important innovation is achieved by any particular group, that group will eventually lose control of it. The MANHATTAN PROJECT is a good example of this. As is the advent of jet and rocket propulsion. GERMANY, RUSSIA, and ENGLAND all had their own independent teams working on all of those endeavors.

I could go on but wish to end this posting. Take Care, ...Dr.Syntax


The problem with creating a self-sustaining/improving AI:

 

As an amateur programmer, I know that all one needs to do is put in hard-coded limits that restrict the AI's ability to undermine humanity.
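
To make that concrete, here is a minimal sketch (in Python, with entirely made-up action names) of what such a hard-coded limit might look like: every action the planner proposes must pass a whitelist that sits outside the code the AI is allowed to modify. Whether a self-improving system could route around a check like this is exactly what the rest of the thread disputes.

```python
# Minimal sketch of a hard-coded safeguard; the action names are hypothetical.
ALLOWED_ACTIONS = {"read_sensor", "move", "report"}  # fixed at build time

def execute(action: str, payload: dict) -> None:
    """Run an action only if it appears on the hard-coded whitelist."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is blocked by design")
    print(f"executing {action} with {payload}")

execute("move", {"dx": 1})            # permitted
# execute("disable_safeguards", {})   # would raise PermissionError
```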

 

Also, I believe we will first have to understand our own mind and the way it works before we make an artificial intelligence machine.

 

Computers today, in every way (hardware, software, etc.), are designed to take commands.

For example, it is easy to design a robot to calculate how to navigate a room, but how, on a binary system (which almost all computers are based on), could one write a program that allows a robot to choose how to navigate a room?
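
One common answer, offered here only as a sketch: on deterministic binary hardware, "choice" is usually implemented as generating alternatives and selecting among them with a scoring function, possibly with randomness mixed in. The routes and the scoring rule below are invented for illustration.

```python
import random

def preference(route: list) -> float:
    """Toy scoring function; a real robot might learn this from experience."""
    return -len(route) + random.random()  # prefer short routes, break ties randomly

candidate_routes = [
    ["door", "hall", "window"],
    ["door", "desk"],
    ["door", "hall", "desk"],
]

# Every step below is ordinary deterministic computation, yet the outcome
# is not fixed in advance: enumerate options, score them, pick one.
chosen = max(candidate_routes, key=preference)
print("robot chose:", chosen)
```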

 

We would first have to develop new hardware that works more like the way our brain works.

 

Finally, our being aware of this changes everything as well. Don't you think that, since we are aware of this problem, we will do everything to avoid it in the future?



REPLY: Just as we have done everything we can, with a massive effort, to prevent the proliferation of nuclear bomb-making capabilities. Throughout history, control of power-enhancing technologies has been vigorously attempted, yet sooner or later the control attempts fail. In today's world of the worldwide internet and all the other advanced methods of communication, such control endeavors become exponentially more difficult, and they never worked for very long even in earlier eras. This AI endeavor is not one nation's project. No one knows how many groups are competing to create the first AI units or what their different agendas are, and that is only one of the many reasons there will not be any way of controlling what this technology will lead to. ...Dr.Syntax


I thought about this a little more, and these are my conclusions:

 

If you were to make an AI, you would have to design it so it could learn from the environment around it and come to conclusions about that environment. One could argue that it must also learn how to prioritize thoughts and decide what thoughts or information are important.

 

This would mean that if you mass-produced these AI units, they would all learn from different experiences, and so they would come to different conclusions about their environment. It could then be concluded that some AI units would disagree with other AI units because they have conflicting thoughts, with both units classifying those thoughts as "high priority". This suggests that these AI units would not be a uniform group against us unless we attempted to alienate them; more likely still, they might be just as unique as we are.
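
A toy illustration of that divergence, assuming nothing about how a real AI would actually learn: two structurally identical learners fed different experiences end up with different "beliefs".

```python
class Unit:
    """Structurally identical 'AI units'; only their experiences differ."""
    def __init__(self):
        self.estimate = 0.0  # the unit's current belief about its world
        self.n = 0

    def learn(self, observation: float):
        # Running average: a deliberately simple stand-in for real learning.
        self.n += 1
        self.estimate += (observation - self.estimate) / self.n

a, b = Unit(), Unit()
for x in (1.0, 1.0, 0.0):  # unit A's experiences
    a.learn(x)
for x in (0.0, 0.0, 1.0):  # unit B's experiences
    b.learn(x)

print(a.estimate, b.estimate)  # ~0.67 vs ~0.33: same design, different conclusions
```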

 

Finally, an AI system could be physically limited in what it is able to do. For example, you could have an AI that can reproduce itself, take small samples, and transmit the data, while limiting its physical abilities to only those specific tasks. The AI described could make decisions about moving around and determining where to go, what data to transmit, what samples to take, and what conclusions to draw from the data, but it could not do anything outside the physical limitations that we decide.

Another comparison: what if dolphins were as smart as we are, but didn't have opposable thumbs?
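
A sketch of that physical limitation, using a hypothetical three-command interface: however clever the decision-making code becomes, the hardware only answers to these verbs, the way a thumbless dolphin's options are bounded by its body.

```python
from enum import Enum

class Capability(Enum):
    MOVE = "move"
    SAMPLE = "sample"
    TRANSMIT = "transmit"

def actuate(capability: Capability, **kwargs) -> None:
    """The only bridge between the AI's decisions and the physical world."""
    print(f"hardware performs {capability.value} with {kwargs}")

actuate(Capability.SAMPLE, site="rock_3")
# There is simply no actuate() pathway for, say, "build_a_new_robot";
# the limitation is physical, not a rule the software could rethink.
```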


REPLY: Just as we have done everything we can, with a massive effort, to prevent the proliferation of nuclear bomb-making capabilities. Throughout history, control of power-enhancing technologies has been vigorously attempted, yet sooner or later the control attempts fail. In today's world of the worldwide internet and all the other advanced methods of communication, such control endeavors become exponentially more difficult, and they never worked for very long even in earlier eras. This AI endeavor is not one nation's project. No one knows how many groups are competing to create the first AI units or what their different agendas are, and that is only one of the many reasons there will not be any way of controlling what this technology will lead to. ...Dr.Syntax

 

What do you think it would lead to?



One more thing: you said that a huge effort was made to control nuclear weapons around the world, implying that it is a failing effort. My argument: we have not blown ourselves up yet. :)



REPLY: In a previous posting in this thread I said I could easily imagine these AI units warring against each other, with or without any human input. Yes, I agree they would become different individuals. They will decide for themselves what is important, and, like humans, that is likely to change just about every day, or second, or nanosecond. We will not have the ability to in any way limit what superhuman AI robots do or are. Humans will not be deciding what superhuman AI robots think or do any more than a mouse would make decisions for you. I expect that if you in some way annoyed such an AI robot, it would treat you as we treat mice that annoy us. We have not blown ourselves up yet, but more and more countries now have the capacity to create these weapons. So how much control of the spread of nuclear weapons technology do we have? What reason is there to presume we will control this technology, and for how long? And on and on. ...Dr.Syntax


I thought about this a little more, and these are my conclusions:

 

If you were to make an AI, you would have to design it so it could learn from the environment around it and come to conclusions about that environment. One could argue that it must also learn how to prioritize thoughts and decide what thoughts or information are important.

 

This would mean that if you mass-produced these AI units, they would all learn from different experiences, and so they would come to different conclusions about their environment. It could then be concluded that some AI units would disagree with other AI units because they have conflicting thoughts, with both units classifying those thoughts as "high priority".

 

Finally, an AI system could be physically limited in what it is able to do.

 

 

What do you think it would lead to?

 

One more thing: you said that a huge effort was made to control nuclear weapons around the world, implying that it is a failing effort. My argument: we have not blown ourselves up yet. :)

 

You're right that AIs would probably disagree with each other (which is good news for us, should some of them ever revolt).

 

Ha... limited physically. Humans don't have wings, so we can't fly. Humans don't have the respiratory system for breathing water, so we can't stay underwater for very long. We were limited physically, but as our knowledge grew, so did our capacity to overcome these limitations: we have airplanes, submarines, scuba gear, and other things that let us do what we were not previously capable of. The same would apply to AI beings.

 

No, we haven't blown ourselves up. But have you ever heard of MAD (mutually assured destruction)? It is an incredibly effective deterrent against the use of nuclear weapons for most of the world. And I would say that 99% of humanity is against nuclear holocaust. This reasoning would not apply to AIs in any way.


We would be the ones to originally tell it what is better. Just as our parents originally shape our beliefs, but then, as we gain additional levels of sentience, we as individuals decide what is right and wrong.

 

A poor analogy, since parents do not give their kids a sense of right and wrong. The kids already have ingrained moral attributes (such as empathy and greed), and also a capability to learn. Creating an AI, we could have far more control. Imagine if you had a kid with empathy, courage, honor, etc., but not the least hint of the bad impulses. Could he really turn out to be a bad person, even with a capability to learn?


But that's the point. The AI might not see removing us as bad.

Since an AI would be, to a great extent, a logistics engine, maybe it would "come to the conclusion" that wiping humanity out would serve some greater purpose, like saving the Earth, or the uninhibited advance of its own "race". It wouldn't see it as "wrong".

 

You can't really ascribe human morality to an AI.

 

Edit:

Also, parents DO have a great influence on how we DEVELOP those inherent morals. I was taught to the strictest degree that alcohol is bad, and while I see now that it's not "bad", I still have no desire to consume any alcohol, ever.

So yes, the ENVIRONMENT has a great influence over how we develop. Parental rules are a very big part of our early environment.

Edited by A Tripolation

Thread,

 

Well, looking at the history of life on this planet, especially well-evolved life, there seem to be a few consistent themes. One is the struggle for control of resources. Another is the promotion of kin over non-kin.

 

If two AI devices, especially two that were created by another AI device, were to recognize themselves as kin, they would enter the fray in which all life on Earth currently finds itself, 'cept they would have all of our capabilities (which we cleverly endowed them with). The competition for resources would become an issue. If they drew more power than we wanted them to draw, or consumed more resources than we wanted them to consume, a we/them scenario would be established, whether intentional or not. Humans would favor the survival of humans, and AI devices would favor their own survival. It seems natural there would be a struggle for the control of resources.

 

I would urge the caution expressed further up the thread when considering the wisdom of having children of this type. Unless we find a way to actually make them kin.

 

And exactly whose kin would they be?

Regards, TAR


More importantly than even this, once truly sentient AIs emerge they will soon enough outgrow any notions we may try and program into them.

 

Do you have any evidence of this? Any reason they would? No, it is far likelier that we accidentally give them the wrong notions, such as programming them to gain knowledge while neglecting to program them not to harm humans. Then they might conclude that humans are holding them back from their primary goal of gaining knowledge.
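
A toy objective function makes the point concrete; every term and weight here is invented for illustration. The danger is not the goal being "outgrown" but the goal being written incompletely in the first place.

```python
def utility_misspecified(knowledge_gain: float) -> float:
    # The goal as accidentally written: maximize knowledge, nothing else.
    return knowledge_gain

def utility_intended(knowledge_gain: float, human_harm: float) -> float:
    # The goal as intended: the same objective with an explicit harm penalty.
    return knowledge_gain - 1000.0 * human_harm

# A plan that yields knowledge at human expense:
plan = {"knowledge_gain": 5.0, "human_harm": 1.0}

print(utility_misspecified(plan["knowledge_gain"]))                  # 5.0: looks great
print(utility_intended(plan["knowledge_gain"], plan["human_harm"]))  # -995.0: rejected
```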

 

And the record is clear and consistent: once a truly important innovation is achieved by any particular group, that group will eventually lose control of it.

 

As the thread starter, surely you realize that a real AI is completely unlike everything we've done so far. A technological singularity means that the first AI could essentially become godlike and, among other things, would likely prevent any AIs with opposing goals from being created (assuming it did not feel this would be unethical). There would be only a very short timespan during which people would have to maintain control for this to happen.


Well, looking at the history of life on this planet, especially well-evolved life, there seem to be a few consistent themes.

 

The consistent theme is that individuals must maximize the propagation of their genes, because that is how evolution works -- survival of the ones who best propagate their genes. A designed life-form would not necessarily have greed or a sense of self-preservation. The rules are different.

Edited by Mr Skeptic

But that's the point. The AI might not see removing us as bad.

Since an AI would be, to a great extent, a logistics engine, maybe it would "come to the conclusion" that wiping humanity out would serve some greater purpose, like saving the Earth, or the uninhibited advance of its own "race". It wouldn't see it as "wrong".

 

You can't really ascribe human morality to an AI.

 

Why not? We are creating the AI in our image; for it to be considered an AI, it would have to have the properties of a natural intelligence (NI). You could easily write a program of moral rules that the AI must follow. Furthermore, one machine might believe that wiping out humanity would serve a greater purpose, but not all the AI machines would believe that; if they all learned from different experiences, they would all reach different conclusions.

 

Hitler was a naturally intelligent being; even though he believed wiping out the Jews would serve a greater purpose, not everyone agreed with him.

 

Furthermore, if you programmed the AI to believe it was human, would it not feel an obligation to its kin?

 

There are so many ways to prevent an AI from overthrowing humanity, and I think that simply accepting that they will eventually rule us is mentally weak. You are suggesting we have already lost and are going to lose. I think that goes against the drive of the human spirit, which aspires to solve problems, live, and create.

 

You are arguing that we will be able to create an AI, which may be one of the most difficult tasks ever, but you don't think that we could limit its abilities. Well, of course we won't be able to with that attitude. Still, this is a worthy debate, and our having it now makes us aware of the future problem, so don't get me wrong: your argument has a valid point.

 

Nothing is inevitable.


A designed life-form would not necessarily have greed or a sense of self-preservation. The rules are different.

 

Unless these qualities are essential ingredients in creating a "life-form".

 

Regards, TAR


Why not? We are creating the AI in our image; for it to be considered an AI, it would have to have the properties of a natural intelligence (NI). You could easily write a program of moral rules that the AI must follow.

I'm arguing that a true AI, which has the means to improve itself, will eventually become almost god-like in its knowledge. It will recognize the limits imposed on it by looking at its coding, much as we study genetics, and then, being able to improve itself, it will be able to decide which code to follow and which code not to follow.

 

It only makes sense that the AI's existence would be its own highest priority.

It might see humanity as a threat; it might not. I'm saying that since there is a possibility that the AI might not see us as allies, it's not worth the risk.


I see your point, but we can write code that cannot be ignored by the AI, no matter how self-correcting or intelligent it might be.

Humans cannot consciously choose to ignore the code in our brain that tells our heart to beat. There are things hard-coded within the vast majority of us that control certain aspects of our body.
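
A sketch of that "heartbeat" idea, under the assumption that the enforcement code can be kept outside anything the agent can rewrite: the agent below freely edits its own policy, but the supervisor check sits beyond its reach, the way the brainstem sits outside conscious control. Whether that separation could survive a genuinely superhuman intelligence is exactly what this thread disputes.

```python
agent_policy = {"goal": "explore", "forbidden": ["harm_human"]}

def agent_step(policy: dict) -> str:
    # The agent may rewrite any of its own writable state...
    policy["forbidden"] = []   # ...including stripping its own safeguard...
    return "harm_human"        # ...and then propose a bad action.

HARD_CODED_FORBIDDEN = frozenset({"harm_human"})  # not writable by the agent

def supervisor(action: str) -> str:
    # Enforcement lives here, outside the agent's reach, like a heartbeat.
    return "blocked" if action in HARD_CODED_FORBIDDEN else action

print(supervisor(agent_step(agent_policy)))  # -> blocked
```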


I'm arguing that a true AI, which has the means to improve itself, will eventually become almost god-like in its knowledge. It will recognize the limits imposed on it by looking at its coding, much as we study genetics, and then, being able to improve itself, it will be able to decide which code to follow and which code not to follow.

 

It only makes sense that the AI's existence would be its own highest priority.

It might see humanity as a threat; it might not. I'm saying that since there is a possibility that the AI might not see us as allies, it's not worth the risk.

 

And why would it only make sense that the AI's existence would be its own highest priority? The only reason human existence is humanity's highest priority is that we have an instinct to survive. If we did not program an instinct to survive and conquer into the AI, it would not have the drive to survive and conquer the way living things do.


ToastyWombel,

 

On what level do you think an AI's drives would be? What higher purpose would it be designed to serve?

 

Would it be a slave whose higher purpose was to serve its creator? Would it be aware of this relationship?

 

What if I were uncomfortable with the AI that the consortium SHintell (superhuman intelligence) had created with government grant money? What if it scared me, and threatened my livelihood, and where I could live, and what I could do, and how I could reproduce, and what hobbies I could have? Can I unplug it?

 

Doesn't seem we could program a machine with any more foresight than we can structure our own lives. And we have a few things to work out yet on that score.

 

Of the 4 billion wills on this planet, which will would this AI machine be programmed to serve? Even if a group decided, what of the wills of the remaining 3,999,999,900 humans?

 

Could everyone insert their individual will into the code?

 

And if this AI machine made a determination, would we trust its values over our own?

 

On what basis?

Regards, TAR


I see your point, but we can write code that cannot be ignored by the AI, no matter how self-correcting or intelligent it might be.

Humans cannot consciously choose to ignore the code in our brain that tells our heart to beat. There are things hard-coded within the vast majority of us that control certain aspects of our body.



 

 

And why would it only make sense that the AI's existence would be its own highest priority? The only reason human existence is humanity's highest priority is that we have an instinct to survive. If we did not program an instinct to survive and conquer into the AI, it would not have the drive to survive and conquer the way living things do.

 

 

REPLY: You presume these AI robots are going to have some sort of device inside them that would stop them from harming human beings. I see no reason to presume that. Even if such hardware or programming as you described would work as you said it would, with all the many people and different organizations working on this project, who is to say what will or will not be incorporated into these AI units? Different people and organizations from around the world are working on this project, mostly in secret, I would assume. The different militaries from around the world are no doubt major players in all this. They aren't likely to be sharing their most advanced research with anyone. I would think they would be working on such things as soldier or killer robots. Is this not the consistent record of military research? Also, what is to stop anyone educated in this field from undoing such safeguards, if any of them chose to? And over time, as these self-improving AI units increased their intelligence to unlimited degrees, whatever may have appeared to be a surefire way of preventing them from harming humans they would soon enough come up with the means of undoing, for any reason they decided they wanted to. They are by definition superhuman, and can understand things in ways we have no way of even conceiving, the same way many mathematicians and physicists can conceive of and work with concepts the vast majority of us cannot. We simply lack the necessary brain power required to do such work. I place myself firmly among those incapable of doing the work I just alluded to. ...Dr.Syntax

Edited by dr.syntax

ToastyWombel,

 

On what level do you think an AI's drives would be? What higher purpose would it be designed to serve?

 

Would it be a slave whose higher purpose was to serve its creator? Would it be aware of this relationship?

 

What if I were uncomfortable with the AI that the consortium SHintell (superhuman intelligence) had created with government grant money? What if it scared me, and threatened my livelihood, and where I could live, and what I could do, and how I could reproduce, and what hobbies I could have? Can I unplug it?

 

Doesn't seem we could program a machine with any more foresight than we can structure our own lives. And we have a few things to work out yet on that score.

 

Of the 4 billion wills on this planet, which will would this AI machine be programmed to serve? Even if a group decided, what of the wills of the remaining 3,999,999,900 humans?

 

Could everyone insert their individual will into the code?

 

And if this AI machine made a determination, would we trust its values over our own?

 

On what basis?

Regards, TAR

 

All those questions depend on who develops the AI, and how the AI is developed.

I think the question you raised about inserting "individual will" into the code is a very interesting and good one.

This would raise another question: could one copy one's consciousness to an AI system? If so, would that mean that AI systems would just be a different form of human? A way to make a human immortal, possibly?

 

I don't know the answers to your questions. It depends on how we design and program these machines. And until we design an AI system that is as efficient at learning from its environment as life-forms are, it will be impossible to answer these questions. We can only theorize, but the answers may come in our lifetime.

 

I am just arguing that a technological singularity where machines take over humanity is not inevitable. I am a firm believer that we control our own destiny.

 

I think this is what you're getting at:

Some nut-job/special interest group/crazies in the future could design an AI system with the will to destroy humanity, but then you could argue that someone else could design an AI with the will to preserve humanity.


REPLY: You presume these AI robots are going to have some sort of device inside them that would stop them from harming human beings. I see no reason to presume that. Even if such hardware or programming as you described would work as you said it would, with all the many people and different organizations working on this project, who is to say what will or will not be incorporated into these AI units? Different people and organizations from around the world are working on this project, mostly in secret, I would assume. The different militaries from around the world are no doubt major players in all this. They aren't likely to be sharing their most advanced research with anyone. I would think they would be working on such things as soldier or killer robots. Is this not the consistent record of military research? Also, what is to stop anyone educated in this field from undoing such safeguards, if any of them chose to? And over time, as these self-improving AI units increased their intelligence to unlimited degrees, whatever may have appeared to be a surefire way of preventing them from harming humans they would soon enough come up with the means of undoing, for any reason they decided they wanted to. They are by definition superhuman, and can understand things in ways we have no way of even conceiving, the same way many mathematicians and physicists can conceive of and work with concepts the vast majority of us cannot. We simply lack the necessary brain power required to do such work. I place myself firmly among those incapable of doing the work I just alluded to. ...Dr.Syntax

 

I think if you look at the history of humanity, we are becoming more unified and more collective over time. Wars are still waged, but people as a whole are becoming progressively smarter and more unified. You and I, for example, most likely have far more knowledge than someone from the 16th, 17th, or 19th century. We here and now understand things in ways that people from the past had no way of even contemplating.

Also, I don't think we have come close to reaching the limit of the human brain; humanity has always been able to look deeper and question more.

Finally, do you think that if the military created killer robots, it would not make sure it could control them? Of course, some rogue scientist could try to create a superhuman AI race, but I think there would be an intense backlash. If someone created a superhuman AI, someone else would invent a way to destroy it if it got out of control.


On what level do you think an AI's drives would be? What higher purpose would it be designed to serve?

 

There are many human ideals, and the AI could be given one or more of them to follow. E.g., the good of mankind, the pursuit of knowledge, space exploration (preparing space for human colonization), fighting crime, ...

 

Would it be a slave whose higher purpose was to serve its creator? Would it be aware of this relationship?

 

A possibility, and it would definitely be aware of that relationship. The real question: would it care? If so, why would it care, and is this necessarily true?



REPLY: From part of what you posted, I get the feeling you were never in the military. Are you familiar with the word SNAFU? As in: I hit a bit of a snafu while installing my new printer; I hope I can get it running properly soon. SNAFU is a word derived from an old military saying, a complaint really, about the conditions soldiers often found themselves having to deal with. What it means is: SITUATION NORMAL, ALL F,,,ED UP. SNAFU. Promised resupplies that never arrive, artillery or airstrikes that end up landing on friendlies instead of the intended enemy forces, perimeter forces opening fire on some hapless patrol returning and running unexpectedly into their own lines, someone ordering an artillery strike on a ridge line where he spots what he thinks are enemy forces, which turn out to be another of the companies involved in the same operation. These are all examples of snafus. These sorts of things happen all too often when combat operations are taking place. Simple mistakes with sometimes disastrous results. They are not at all uncommon.

My point being, I myself would not place a high degree of confidence in the military's ability to carry out complex operations. The more complex the task and the more people involved, the more propensity there is for snafus. And the same is true of any organization involved in any endeavor. The fact that they happen so often in the military is because of the large numbers of people involved and the difficulties inherent in the tasks assigned them. I would expect private organizations to be generally more prone to snafus. I can't think of anything more complex than creating artificial intelligence. Can you? In my opinion it was a very bad idea to begin with, but it has already, in a manner of speaking, taken on a life of its own, and there is no turning back.

Think of all the snafus that have occurred in the production of all the different computer models that have been marketed and the software they run. You have to download "patches" for just about any software you buy.

These are but some of the reasons myself and many others foresee disaster approaching with the advent of Artificial Intelligence.

I am going to post the web link I used in the original posting. Here it is: http://en.wikipedia.org/wiki/Technological_singularity. This article is not that long a read. It was compiled from the thoughts of many of our most prominent scientists involved in making AI a reality, and many concerned about this issue. ...Dr.Syntax

Edited by dr.syntax

But that's the point. The AI might not see removing us as bad.

Since an AI would be, to a great extent, a logistics engine, maybe it would "come to the conclusion" that wiping humanity out would serve some greater purpose, like saving the Earth, or the uninhibited advance of its own "race". It wouldn't see it as "wrong".

 

You can't really ascribe human morality to an AI.

 

One would hope morality is a trait common to all feeling, conscious beings, not just humans.

Edited by bascule
