The Technological Singularity: all but inevitable?


dr.syntax


The AI might decide to live with zero impact on its homeworld, or adopt Star Trek's Prime Directive. Rather unlikely scenario, though.

 

I'm somewhat reminded of the Eschaton, the AI from Charles Stross's Singularity Sky, which lives out a mysterious existence within its own light cone and only emerges to purge civilizations that choose to violate causality.


I think you are golden.

 

Regards, TAR



Dr. Syntax,

 

Your Vietnam experiences give me reason to add weight to your view on this subject. Not only because I am indebted to you for protecting, with your life, my way of life, but because you have witnessed firsthand, in a real life-and-death way, the clash of ideals.

 

Very pertinent to this discussion, as one of the possible directives that Mr. Skeptic gave was fighting crime. This alone could give an AI device a paradox to deal with. Don't harm a human. Harm humans that break the rules. What rules? The rules I gave you?
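A toy sketch of that conflict in Python (the directives and names here are purely illustrative, not any real system's rules): both directives can fire for the same person, and there is no consistent action.

[code]
# Two directives, both applied to the same person:
#   1. "Don't harm a human."
#   2. "Harm humans that break the rules."

def may_harm(broke_rules):
    directive_1 = False          # never harm a human
    directive_2 = broke_rules    # harm rule-breakers
    if directive_1 != directive_2:
        raise RuntimeError("Directives contradict: no consistent action exists.")
    return directive_1

try:
    may_harm(broke_rules=True)
except RuntimeError as exc:
    print(exc)   # the paradox described above
[/code]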

 

Regards, TAR

 

 

 

REPLY: Thank you for your kind remarks. There is another military acronym that may someday be applied to some of these AI robots. Some may earn the designation FUBAR, which means F...ed Up Beyond All Recognition. Some cop AI robot might pull you over for not coming to a complete stop at a stop sign, or some similarly egregious offense. After he asks you for your license and registration, he proceeds to question you about what you are doing here, where you are going, and why. You might ask: what is this all about, officer? He yanks you out of your car, beats the crap out of you, and arrests you for RESISTING ARREST.

You may think of him as a FUBAR. Malfunctioning police robots might become known by that name. In fact, I recall a very similar thing happening to a young man I once knew, at the hands of an old-fashioned human police officer. One can easily imagine all sorts of FUBAR robots. Regards, ...Dr.Syntax


Perhaps on "narrow AI" applications. The prerequisite work on "strong AI" is happening in research institutions and private non-military corporations.

 

 

REPLY: What reason or reasons can you give me to assume that some private entity, corporation, etc. would be any more inclined toward any sort of moral code, so to speak, than the military defense departments throughout the world when it comes to work on "strong AI"? And can you give me any reason why the military would be focused on "narrow AI" applications and less focused, or unfocused, on "strong AI" than the private sector? As far as I know the two have worked together throughout recorded history, certainly since the American Civil War. Military needs have led the way in technological advancement throughout recorded history. What stronger motivation is there than survival and dominance, for those nations that have dominated throughout history?

What was it that made it possible for Nazi Germany to defeat both the French and British armies in a matter of weeks during the spring of 1940? They almost defeated the Russians in 1941. A few crucial errors on the part of Hitler himself, going against the German high command, may very well be the only reason the Germans lost the war in Russia. Had the Russians not managed to stop the Germans by sacrificing tens of millions of soldiers, it is at least arguable that Germany would have prevailed and won the Second World War. For a brief moment in history they had technological superiority and almost won that war; for a few years their tanks were vastly superior to anyone else's. The Russians bought themselves enough time through the acceptance of vast numbers of casualties, both military and civilian. The tens of millions of Russians who died during those crucial years allowed Russia to build equal and eventually better tanks, rocketry, artillery and the like, which made it possible to eventually defeat the German army. ...Dr.Syntax


can you give me any reason why the military would be focused on "narrow AI" applications and less focused, or unfocused, on "strong AI" than the private sector?

 

Simple. Narrow AI has immediate military applications. Strong AI does not.


Simple. Narrow AI has immediate military applications. Strong AI does not.

 

REPLY: I see and understand the point you are making, and I would expect the relatively LOW LEVEL military personnel involved in AI research are working, as you suggested, on narrow AI applications. The high level military participants would be quite a different matter. For one thing, their ties to the corporate world are well known and have been an ongoing issue for many decades now.

Some of these greedy, power-crazed, cold-hearted bastards have only their own self-interest first and foremost in their minds as they position themselves in any endeavor. Please recall President Eisenhower's parting remarks as he departed the world stage: BEWARE OF THE MILITARY INDUSTRIAL COMPLEX. This man had arguably the best awareness of any person, living or dead, in the MODERN ERA concerning this issue. He knew many of these types personally and dealt with them effectively, both as Supreme Commander during WWII and as President of the United States of America during the very dangerous period that came to be known as THE COLD WAR. He dealt with the likes of JOSEPH STALIN during WWII and as President in the POST WAR ERA. I might be wrong about that last statement; if I am, I will correct it. Either way, President Eisenhower was truly unique in his understanding of this MILITARY INDUSTRIAL COMPLEX, and he saw fit to warn us about it. I think those facts should mean a lot to all of us when contemplating the Technological Singularity. ...Dr.Syntax



Yes, President Eisenhower became president on January 20, 1953, and Joseph Stalin died on March 5, 1953, so Eisenhower had been president for only a short while when Stalin died. Then he had to deal with Nikita Khrushchev, whose very survival in Stalinist Russia is a testament to his political skill. Khrushchev helped direct the defense of STALINGRAD, as its senior political commissar, during that fateful winter of 1942/1943. Had the Russians failed in their defense of Stalingrad, the Germans arguably could have prevailed and won WWII. This is not only my opinion but that of many historians. So Eisenhower faced a determined and able foe when Khrushchev gained control of Russia. For what it's worth, ...Dr.Syntax


Personally I think that there are two main issues here.

 

Creation - Can we actually make an AI that's "more intelligent" than ourselves?

 

I would say that with our current understanding the answer is, quite simply, no. In order to make something of equal or greater intelligence, it seems sensible that we must first be able to define what intelligence is and then replicate it. As it currently stands, there is no conclusive test for intelligence, let alone a description of what it is or how it works.

 

Imagine sending a modern jet back two hundred years: would the people of that time understand the technology and be able to copy it? Doubtful. We are at a stage where we don't understand the concept, and so have no hope of copying it.

 

Control - Can we control an AI once it has been created?

 

The simple answer that I can see is yes. Without going into too much detail about programming, it is possible to build a system in such a way that certain aspects can't be removed even if the code is self-modifying; modularization, for example, achieves this. And let us not forget that any machine has one easy weakness: it can be turned off.
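A minimal sketch of that idea, assuming a Python stand-in (all names here are hypothetical): the core constraints live in a read-only structure that the self-modifying layer can consult but not rewrite.

[code]
from types import MappingProxyType

# Core rules in a read-only mapping: code can consult them but not rewrite them.
_CORE_RULES = MappingProxyType({"may_harm_humans": False})

def core_allows(action):
    # The self-modifying layer must route actions through this check.
    return _CORE_RULES.get(action, False)

# The mutable layer is free to rewrite its own strategies...
strategies = {"patrol": lambda: "patrolling"}
strategies["patrol"] = lambda: "patrolling faster"   # self-modification is fine

# ...but any attempt to alter the core fails:
try:
    _CORE_RULES["may_harm_humans"] = True
except TypeError as exc:
    print("core rules are read-only:", exc)
[/code]

Whether a sufficiently capable system could route around such a barrier is, of course, the crux of the control question.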

 

On a side note, if something were intelligent then it is reasonable to believe it would be able to understand right from wrong. If it were to "learn" from people, and the information fed to it was morally good, wouldn't it be safe to assume that it would base its moral compass on that rather than just picking one at random? People can be evil by choice or by nurture, and so could an AI.


Personally I think that there are two main issues here.

 

Creation - Can we actually make an AI that's "more intelligent" than ourselves?

 

I would say that with our current understanding the answer is, quite simply, no. In order to make something of equal or greater intelligence, it seems sensible that we must first be able to define what intelligence is and then replicate it. As it currently stands, there is no conclusive test for intelligence, let alone a description of what it is or how it works.

 

Imagine sending a modern jet back two hundred years: would the people of that time understand the technology and be able to copy it? Doubtful. We are at a stage where we don't understand the concept, and so have no hope of copying it.

 

Control - Can we control an AI once it has been created?

 

The simple answer that I can see is yes. Without going into too much detail about programming, it is possible to build a system in such a way that certain aspects can't be removed even if the code is self-modifying; modularization, for example, achieves this. And let us not forget that any machine has one easy weakness: it can be turned off.

 

On a side note, if something were intelligent then it is reasonable to believe it would be able to understand right from wrong. If it were to "learn" from people, and the information fed to it was morally good, wouldn't it be safe to assume that it would base its moral compass on that rather than just picking one at random? People can be evil by choice or by nurture, and so could an AI.

 

REPLY: Do a Google search on current and recent advances in AI research, and keep in mind that what is available on the web is by no means representative of the advances being achieved by the most advanced researchers. ...Dr.Syntax


I am (fairly) up to date. One of the things that I specialize in is neural network programming. There is still no clear definition of what intelligence actually is and without such a definition we will be very hard pressed to copy or expand upon it.
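For readers unfamiliar with the term, here is roughly what "neural network programming" means at its most basic: a single artificial neuron trained with the classic perceptron rule (a toy sketch for illustration, not the poster's own code).

[code]
# Train one artificial neuron to compute logical AND via the perceptron rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                          # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                   # 0 when the neuron is right
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print(w, b)   # the learned weights now implement AND
[/code]

Which rather makes the point: nothing in those few numbers tells you what intelligence is.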


Personally I think that there are two main issues here.

 

Creation - Can we actually make an AI that's "more intelligent" than ourselves?

 

I would say that with our current understanding the answer is, quite simply, no. In order to make something of equal or greater intelligence, it seems sensible that we must first be able to define what intelligence is and then replicate it.

 

We know humans are intelligent. One of the most straightforward approaches is to copy the human brain inside a computer. That's what the BlueBrain project is trying to do:

 

http://bluebrain.epfl.ch/
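Blue Brain works with detailed biophysical neuron models; as a much simpler cousin, here is a leaky integrate-and-fire neuron, just to show the flavor of "simulating a brain" as numerical simulation (all parameter values are made up for illustration).

[code]
# Leaky integrate-and-fire neuron: the membrane voltage v is driven by an
# input current, leaks back toward rest, and "spikes" on crossing a threshold.

def simulate_lif(current=1.5, v_rest=0.0, v_thresh=1.0, tau=10.0,
                 dt=0.1, steps=500):
    v, spike_times = v_rest, []
    for step in range(steps):
        v += dt * (-(v - v_rest) + current) / tau   # Euler integration
        if v >= v_thresh:
            spike_times.append(step * dt)           # record the spike...
            v = v_rest                              # ...and reset the voltage
    return spike_times

print(simulate_lif()[:5])   # times of the first few spikes
[/code]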


You are very correct, bascule, and that was down to a mistake in my writing; sorry about that.

 

The issue I meant to point out was this: We are intelligent but where do you draw the line? Are there multiple forms of intelligence or even multiple levels of it? We don't really know.

 

There is also no conclusive evidence that copying the brain into, say, a neural network will reproduce the effects we see in it, such as intelligence. The brain is still very mysterious, though I do admit that the idea is very interesting. It would have wide-ranging applications if it were to succeed. Thanks for the link.


There is also no conclusive evidence that copying the brain into, say, a neural network will reproduce the effects we see in it, such as intelligence.

 

The brain is a physical system, and we can make models of physical systems. So long as the model effectively reproduces the behavior of the physical system they should have the same properties.


That is a fair point, if we can copy something that complex exactly. With systems of that complexity, chaotic effects will probably come into play. Either way, we are probably a while away from answering any of these questions; it looks like their project will take some time to get a fully working model in order.
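The chaos point is easy to make concrete. In the logistic map, a standard minimal chaotic system (far simpler than a brain, so purely illustrative), a "copy" whose starting state is off by one part in a billion diverges completely within a few dozen steps.

[code]
# Logistic map x -> r*x*(1-x), chaotic at r = 4.0.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000000)
b = trajectory(0.400000001)   # the "copy", off by one part in a billion

for step in (0, 10, 20, 30, 40):
    print(step, abs(a[step] - b[step]))   # the gap grows from 1e-9 to order 1
[/code]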


Creation - Can we actually make an AI that's "more intelligent" than ourselves?

 

Well, we can already make a real intelligence be more intelligent (stimulation, better nutrition, fancy new drugs).

 

Control - Can we control an AI once it has been created?

 

What we can do is ensure that the AI "wants" to help us, right from the start, rather than "forcing" it to help us. If it "wants" to escape its position as subservient to humans, it probably will. And don't say it can be turned off or can't move; it could easily become the CEO of a robotics company, owning its own computers and developing its own body.


Well, we can already make a real intelligence be more intelligent (stimulation, better nutrition, fancy new drugs).

 

True but that's a different matter. We aren't talking about building a new intelligence from scratch with that one - we're just expanding upon what we already have. Creating is harder than expanding.

 

What we can do is ensure that the AI "wants" to help us, right from the start, rather than "forcing" it to help us. If it "wants" to escape its position as subservient to humans, it probably will. And don't say it can be turned off or can't move; it could easily become the CEO of a robotics company, owning its own computers and developing its own body.

 

Very true. Although I'm pretty sure that no such AI would ever be made unless someone was sure that they could keep it under control.


True but that's a different matter. We aren't talking about building a new intelligence from scratch with that one - we're just expanding upon what we already have. Creating is harder than expanding.

 

What about creating something less intelligent than us, and then making it more intelligent? Wouldn't that negate your reasoning for why you think it can't be done?

 

Very true. Although I'm pretty sure that no such AI would ever be made unless someone was sure that they could keep it under control.

 

Ha ha, you naive optimist. Not everyone would agree with that, and even if they did they could be quite mistaken.

 

The big problem is that "wanting to help humans/a specific human" is very abstract. It would be very difficult to build that concept into a nascent AI right from the start. On the other hand, if an AI is made without that moral imperative, by the time it is intelligent enough to understand it, it would probably be too late to add. An example of this is the aforementioned Blue Brain project, where we don't necessarily even have a clue what concepts we are inputting.
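One way to see why the abstractness matters: in practice a designer must substitute something measurable for "help humans", and an optimizer then maximizes the proxy rather than the intent. A deliberately silly sketch (every name and number here is made up):

[code]
# Each action's effects: the real goal vs. a measurable stand-in for it.
actions = {
    "cure_disease":       {"humans_helped": 10, "reported_thanks": 3},
    "spam_thank_you_ads": {"humans_helped": 0,  "reported_thanks": 50},
}

def proxy_reward(effects):
    # We cannot measure "helping humans" directly, so we reward the proxy.
    return effects["reported_thanks"]

best = max(actions, key=lambda a: proxy_reward(actions[a]))
print(best)   # picks "spam_thank_you_ads": the proxy wins, not the intent
[/code]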


What about creating something less intelligent than us, and then making it more intelligent? Wouldn't that negate your reasoning for why you think it can't be done?

 

If such a thing could be done then you are correct in that my reasoning there would fall apart.

 

Ha ha, you naive optimist. Not everyone would agree with that, and even if they did they could be quite mistaken.

 

I wouldn't call that being optimistic but realistic. People have a tendency not to trust anything new, and although we would not understand something with greater intelligence than our own, I tend to believe people would err on the side of caution and take things one step at a time to ensure nothing bad happens. Then again, as a species we are rarely careful, so it may not turn out that way.

 

The big problem is that "wanting to help humans/a specific human" is very abstract. It would be very difficult to build that concept into a nascent AI right from the start. On the other hand, if an AI is made without that moral imperative, by the time it is intelligent enough to understand it, it would probably be too late to add. An example of this is the aforementioned Blue Brain project, where we don't necessarily even have a clue what concepts we are inputting.

 

Again, I agree with what you are saying. There is also no guarantee that said AI would care, or that it wouldn't interpret the "programming" in ways other than those intended, as happens in the movie "I, Robot" for example. That would be well within the realm of possibility (and a frightening one at that).


So, if almost everyone recognizes the potential for an AI to get out of control (in an evil robot overlord type way)...then why do people still want to go through with it?

 

Ah, the good old question of why. There are two reasons, really: first, to see whether or not it's possible, and second, because the benefits would outweigh the risks.


Being the eternal optimist that I am, I don't believe it'll ever come to that; if anything, we are far more likely to wipe ourselves out than to have some rogue AI do it.

 

With an understanding of how to build an AI, it's possible we would learn to expand our own intelligence and make ourselves smarter. If we can do that, then it would certainly be worth it.


With an understanding of how to build an AI, it's possible we would learn to expand our own intelligence and make ourselves smarter. If we can do that, then it would certainly be worth it.

 

Not really: an artificial intelligence need not be related to our neural system in any way, nor is there any guarantee that we would even understand how the AI works. On the other hand, a proper AI should be able to figure out how to make us smarter. Then again, we could skip that step and work on making ourselves smarter directly. We would take longer at that than an AI would at making itself smarter, because our generation time is so slow and due to ethical considerations, but in this case we would not need to create an intelligence, only improve one.


So, if almost everyone recognizes the potential for an AI to get out of control (in an evil robot overlord type way)...then why do people still want to go through with it?

 

Well, basically because it has the potential to be a godlike servant. As the name of the thread implies, it would be capable of discovering new technology, including, possibly, the means of making humans essentially immortal. On a practical note, it could be willing to work for free, doing both mental and physical tasks for us.


Analogy-wise, it seems we have already created some AI devices: governments, companies, various organisations that multiply our individual capabilities. And often we have done it by modeling our own bodies; an organisation usually has a head, and different departments responsible for various functions: information gathering, planning, resource acquiring, product manufacturing, purpose fulfillment, etc. All the things we do. Sayings like "the market has a mind of its own", or talk of the lifeblood of an organisation, corporate culture, mission statements, allude to the fact that these organisations we put together are AI devices of a sort. We build them with purpose in mind, with functions they are to fulfill, with rules and structures analogous to our own.

 

Often they fulfill their purpose, but not without wars, layoffs, and red tape getting in the way of what some individual within the organisation wishes.

 

And the will of the designers, the people in control of the apparatus, often prevails, at the expense of someone who might wish it were different.

 

We are constantly tweaking the organisations we build, cycling between regionalization and central control, for instance, since going too far in one direction or the other has disadvantages.

 

We might be able to build an AI, but consider a few things. One, it will be our AI, subject to our whims (and those of its designers and operators). Two, it will not be able to come up with anything that we would listen to, any more than we listen to our government, our company, or our favorite political party. Where its findings and functions are important and valuable to us, we will go along; where it fails to suit our purposes, we will ignore it, change it, or seek to unplug it. That is what I would expect, anyway, if we go by our history.

 

Regards, TAR

