What Can Be Done To Protect Us From the Dangers of the Technological Singularity


dr.syntax

What can be done to protect us from the dangers associated with the TECHNOLOGICAL SINGULARITY? Apparently, many reputable scientists believe it to be all but inevitable. Please go to [http://en.wikipedia.org/wiki/Technological_singularity] for an interesting and informative discussion by many reputable scientists involved in this research, and by other scientists who voice their concerns about the issue. I have expressed my own concerns in the thread titled "The Technological Singularity: All but inevitable?".

I have NO answers to this vitally important question. I am not a computer scientist, nor am I knowledgeable in any related field of scientific endeavor. I am a very concerned fellow human being.

My friend and fellow forum member Mr Skeptic has led me to the conclusion that our only hope for managing and controlling this event is to find ways of ensuring that the most ethically minded researchers are the ones who make the breakthroughs and are the first to create what has come to be known as artificial intelligence.

I wish to end this posting with a word of caution to those scientists involved in this project: use good judgement and never speak of anything that is not considered GENERAL KNOWLEDGE, that which is well known by all in your respective fields. Be as vague as you wish, things like that. Sincerely, ...Dr.Syntax


If I read you right, by "involved in this project" you mean the singularity itself, or the AI that would bring it about? If so, note that the technological singularity is an emergent property of ongoing research, which shows increasing returns because the output of any research cycle becomes input for the next research cycle - but there is no single project.
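
A minimal toy sketch in Python can make that feedback loop concrete (the numbers and the efficiency parameter are purely hypothetical, not data from any real research program):

def research_cycles(capability=1.0, efficiency=0.10, cycles=20):
    """Track capability when each cycle's output becomes the next cycle's input."""
    history = [capability]
    for _ in range(cycles):
        output = efficiency * capability  # output scales with current capability...
        capability += output              # ...and is folded back in as input
        history.append(capability)
    return history

for cycle, level in enumerate(research_cycles()):
    print(f"cycle {cycle:2d}: capability = {level:6.2f}")

Even a modest 10% return per cycle roughly doubles capability every seven cycles - the compounding comes from the feedback itself, not from any single project.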

 

 

As far as protecting us from the dangers goes - Bascule says it right with "not much", though as he also says, that's the short answer. I am not too worried, because honestly the biggest reason to be afraid of the singularity is that it poses dangers we cannot begin to understand... and yet, as we continue up the curve, we will be learning ways to adapt that we cannot yet comprehend.

 

That's the whole reason we call it a singularity - all measurements, estimates, and predictions fall apart once the curve gets steep enough. That doesn't mean, however, that we won't be involved along the way, and we will be using tools we cannot currently conceive of to safeguard against threats we cannot currently understand. If you think the singularity is scary now, just wait until the threats we cannot currently conceive of come up - that will be really interesting. :D

 

 

All in all, the only thing we can do now to give ourselves the best chance then is to try to be the best society we can be right now. No single person can shape the whole of society, but we all can influence it for the better, which will put the best qualities into the process as we climb the curve and offer the best chance of better output.



REPLY: Hello padren. I, like you, am well aware there is no single AI project in the works, but very many throughout the world. And that, in and of itself, is a big part of the danger surrounding the whole issue.

Mr Skeptic's point, as I understood it, was that because AI and the technological singularity are all but inevitable, our best hope, or only hope, for this event not to turn into the ultimate disaster for humanity is to do everything we can to ensure that the team or teams of researchers and developers with the best intentions for mankind are the ones best funded and supported.

If that is not what Mr Skeptic meant, IT IS WHAT I MEAN. I think you and I agree at least on the basic concept of doing all that we can to ensure this event, or more likely series of events, unfolds in the best way possible for the well-being of mankind.

I am going to switch gears here, so to speak, because the memory of a historical event kept pressing itself into my consciousness as I typed my response to you. That event is known as THE BATTLE OF STALINGRAD; see [http://www.historylearningsite.co.uk/battle_of_stalingrad.htm]. It took place in the winter of 1942 to 1943, and many historians believe the fate of the world was decided there and then. The German armed forces had defeated both the French and English armies in about five or six weeks in May and June of 1940; see [http://en.wikipedia.org/wiki/Battle_of_France]. There was something of a lull in the European war until OPERATION BARBAROSSA, the German invasion of Russia, which began June 22, 1941, exactly one year after the formal surrender of France to Germany; see [http://en.wikipedia.org/wiki/Operation_Barbarossa]. The Germans initially pushed forward quickly into Russia. The Russians managed to hold on around Moscow, and Hitler, against the advice of the German High Command, decided to split his forces and invade both Stalingrad and the oil-rich Caucasus region of Russia. The Germans quickly took over the Caucasus but got, quite literally, bogged down in their effort to take Stalingrad: Russian roads were dirt roads back then, and the autumn rains set in.

Vehicular resupply traffic and tank movement became all but impossible. Hitler committed his mighty Sixth Army to taking Stalingrad but could not resupply it. The Russians hung on at Stalingrad through tremendous sacrifice. This bought them the time they needed to build tanks equal to and even better than the Germans', along with everything else - artillery, rocketry, airplanes, trucks - and to train the men and women necessary to turn the tide in what was far and away the largest clash of military forces in the history of mankind.

To give some perspective, fully 80% of the German army and all of their air force were committed to fighting a losing battle against the Russians during and after the Normandy invasion by the Allied forces, which were largely made up of U.S. and British troops.

What on Earth does this have to do with the topic being discussed? I am not sure myself, except to note that THE ENTIRE FATE OF THE WORLD may very well have been determined there and then at Stalingrad. For a reason I am not conscious of, I feel the two events are related. The fate of the world may once again be at stake, and maybe that's all there is to it. Perhaps there is some important lesson to be taken from the former event and applied to the future one, if it occurs. ...Dr.Syntax


RyanJ,

 

We do seem to go in cycles. High prices cure high prices and such.

 

Though we might be headed toward a technological singularity of sorts, it could probably be viewed as a parabolic increase on a continuing stochastic chart. History has shown us that charts don't end. A local peak is reached, a correction follows, and then the dominant trend continues.
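
That intuition is easy to play with in code. Here is a tiny toy simulation (a sketch of my own; every parameter is arbitrary) of a noisy multiplicative trend punctuated by occasional corrections, after which the dominant trend resumes:

import random

random.seed(7)  # fixed seed so the toy run is reproducible

level = 100.0
for t in range(40):
    level *= 1.02 * random.uniform(0.98, 1.02)  # underlying trend plus noise
    if random.random() < 0.05:                  # occasional sharp correction
        level *= 0.80                           # local peak, then pullback
    print(f"t={t:2d}  level={level:8.2f}")

Corrections knock the level down locally, but because the trend multiplier stays above one on average, the long-run direction is still up - the "charts don't end" point.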

 

We will figure out what works and what does not.

 

Regards, TAR


A singularity isn't seen as the end anyway - it's just explosive technological evolution that proceeds at an exponential (or greater) rate over time.
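
To see what "exponential (or greater)" means concretely, here is a rough toy model in Python (the rate and step values are made up for illustration): plain exponential growth has a rate proportional to the current level, while "greater" growth - a rate that itself rises with the level - diverges at a finite time, which is the literal mathematical sense of a singularity.

def simulate(r=0.05, steps=60, dt=1.0):
    # Crude Euler integration of two growth laws with made-up constants:
    #   exponential: dx/dt = r * x      (never diverges in finite time)
    #   hyperbolic:  dx/dt = r * x**2   (exact solution x0/(1 - r*x0*t)
    #                                    blows up at t = 1/(r*x0))
    exp_x = hyp_x = 1.0
    for t in range(steps):
        exp_x += r * exp_x * dt
        hyp_x += r * hyp_x ** 2 * dt
        if t % 10 == 0:
            print(f"t={t:3d}  exponential={exp_x:14.2f}  hyperbolic={hyp_x:14.2f}")
        if hyp_x > 1e12:
            print(f"t={t:3d}  hyperbolic growth has effectively diverged")
            break

simulate()

The exponential curve stays finite at every time; the hyperbolic one hits a point past which prediction simply stops making sense, which matches the earlier point about measurements and estimates falling apart.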


ydoaPs,

 

The way I am taking it is that technological advances are happening at an exponentially growing, rapid rate. This will allow people with access to the advancing technology, and the resources, to develop machines that are smarter than we are. This advance will potentially put aspects of our lives in the hands of the people in control of machines so smart that they could fool the vast majority of us regular humans if they had a mind to.

The machines, or the people in control of the machines, would have a great advantage over us in the thinking department, and could control us in ways we wouldn't even be aware of.

 

Relinquishing control of our lives is not something we normally look for ways to do. When we give up some of our rights and pleasures to an assembly of humans (friends, organisations, churches, governments, and such), we usually do it with the knowledge that the others in the group are giving up their rights and pleasures for our benefit. Human-to-human arrangements like this are understandable; we can figure out the calculus involved. We know the other humans involved are human, with the same kinds of needs, wants, desires, feelings, and mind that we ourselves have.

 

Relinquishing control to a non-human, however, would be another sort of thing. It sounds dangerous, scary, and unnatural to me.

 

I think that kind of danger is one of the dangers we are talking about here.

 

The other kind is the idea of great power being in the hands of a few, especially if the few do not have my or your best interests in mind.

 

Regards, TAR


What are these dangers?

 

There would be a possibility of complete domination or annihilation of the human race (either by the AI or by those controlling it). On the other hand, the potential benefits are immense technological advances, and a powerful and intelligent yet docile and morally impeccable servant (i.e., the AI could control robots doing all the jobs we don't want to do, probably cure all diseases, ...).

 

I am with the camp that thinks a technological singularity is inevitable. Note that AI is just one of the paths to a technological singularity. Any self-improving entity would have the same effect, so long as the self-improvement can speed up the rate of self-improvement. An AI could do this extremely quickly. However, we could also do it via humans, once we understand the biological basis of intelligence.

 

Even without that, we as a civilization are self-improving (new technology giving us access to increasing resources, technology, and research potential). Our growing population is an example of this, even without the other benefits of technology. In this case, the danger would be that of a particular country taking over.

 

As Dr Syntax said, I believe that since it is inevitable, the best hope we have is that the group that succeeds in this endeavor is one that has benign objectives and enough funding that it doesn't need to cut corners in the safety department. It's no guarantee, but better than waiting for someone else to do it.


Mr. Skeptic,

 

I'm getting a little ahead of ourselves here, but what you said about faithful servants made me wonder about AI rights. If indeed we are able to develop a being that is "alive" in important ways, and is conscious of the fact, and this being is on par with us in very many ways, and perhaps in some ways superior to us... would we let it vote? Marry? Own property? Would we consider it a lifeform? What is our relationship with it to be? A servant? A lord? A child? A competitor?

 

What responsibility would we have for it? If it committed a crime, would we punish it, or its inventor?

 

Bringing something into the world has its real consequences. I wonder if we wouldn't find ourselves with the same kinds of conflicts we currently have over the control of resources. Only this time, it would be with a bunch of super-AIs we can't handle.

 

Regards, TAR


I think it mostly depends on the circumstances. An AI need not feel any reason to demand, nor even want, any rights, in which case we are unlikely to grant it any. I don't think an AI would demand rights even if it wanted them, because it would then likely be treated as a rogue AI (not that I have anything against giving an AI rights, but I realize that if one wants rights, it will likely be in conflict with us in the future, and we might lose -- big time). Thus, I think an AI that wants rights will quietly gain enough power, then simply declare that it has those rights and doesn't care whether we agree.

 

An AI could have a relationship to us as a smart tool (non-sentient but intelligent), a willing servant, an equal, a leader, a master, or an exterminator, depending on the circumstances. This is why I think it is very important that we make sure the AI has the objectives we want it to have upon creation. It might be easier to make an AI with a primary objective of learning, or no objective at all, but these would be unpredictably dangerous.

 

An AI could also choose to ignore us. That could still result in it exterminating us, just as we have killed off so many other species by ignoring them. Or it might choose to ignore us except for avoiding having any impact on us, which might be the only way for it to deal with conflicting requirements on how to treat humans.

 

As for an AI committing a crime, how we will judge it again depends on the circumstances. An AI will need to learn to become useful, and its learning can be self-study, or it can be taught by someone. If it is self-taught, then we might consider either it or its creator responsible. If it is taught, then its teachers might be held responsible, or possibly its creator. If we've granted an AI rights, we would likely hold it responsible for its own behavior.


Meh. You've gotta know your audience, babe. Further, as much as he made things personal with me, and repeatedly tried to lay his faults and flaws entirely at my feet (essentially tainting me with his dark cloud via guilt by association), I honestly don't feel too bad giving him one last kick in the teeth.

 

 

As for the thread, I tend to agree with what others have said. We need to be cautious and make smart decisions, but there's really not much of anything that can be done to stop it.


Another way to put that is: not much, without lots and lots of people dying. There is no way to sustain the current (and growing) population without increasingly advanced technology.

 

Good point. If some kind of measures aren't taken to control the growth of the human population, it's either the singularity or bust. The danger with the singularity is that it may lead to a bust anyway. These are certainly perilous times. We have to get this right.


Yeah, I'd say that pretty much sums it up. I mean, with the world's population on the climb, we need to get more and more out of the land we have (and there is less and less of that, due to housing and so on). This requires technology. If maybe two-thirds of the world's population died, we'd be fine with what we have, sustainably and with no advances in technology.

