
Artificial Intelligence


The Angry Intellect


What a silly, arrogant, myopic comment to make.

 

That's not really a call for clarity, nor is it scientific. Just FYI.

 

Back on to the original topic,

 

I was more curious to get different people's views on what they think an AI would turn out like, how it would act, what you personally think it would do, and why you think that.

 

I don't get why certain humans think AI is that much of a threat; it seems they are mostly worried about something being more intelligent than they are:

 

"First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern." - Bill Gates

 

http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/

 

http://www.bbc.com/news/31047780

 

They seem to think they will lose control of something that becomes "super intelligent", but I note that Dolphins may be intelligent and they are not under human control... Who knows, they could be down there in the ocean building SCUD missiles...

 

I'm more inclined to let an AI do whatever it likes. If it's super intelligent and not bound by the human emotions which drive humans to do not-so-intelligent things, then perhaps it will figure out ways to assist all creatures on the planet and further advance technology to the benefit of everyone. Isn't that what a super intelligent creation would do?

 

Aren't war, hunger, poverty, hate, love, distrust, power and greed all human-bound issues?

 

I'm guessing some of those people have watched 2001: A Space Odyssey a few too many times.

 

If they really fear HAL that much, they only have IBM to blame for its creation.

 

So far I'm doing just fine.

 

I would love to hear more of people's personal opinions on the matter.


AI might be a problem if it has poor upbringing. Arrogance and condescension aren't desirable traits in any of humanity's children, biological or technological.

 

Aside from Asimov's three laws, we should teach an emerging AI concepts such as compassion, empathy, and respect.

Edited by Daecon

Thanks for the feedback, Daecon. I personally disagree with teaching AI "compassion, empathy, and respect", as those things may interfere with its ability to learn properly or judge a situation based on facts and science, as opposed to human thoughts and desires, which hinder humans' ability to properly assess a situation or reach a verdict. The very things you mentioned get in the way of learning and reaching an outcome, no matter how it may "feel".

 

In my view, human emotions are humanity's worst enemy. :)


Thanks for the feedback, Daecon. I personally disagree with teaching AI "compassion, empathy, and respect", as those things may interfere with its ability to learn properly or judge a situation based on facts and science, as opposed to human thoughts and desires, which hinder humans' ability to properly assess a situation or reach a verdict. The very things you mentioned get in the way of learning and reaching an outcome, no matter how it may "feel".

 

In my view, human emotions are humanity's worst enemy. :)

 

1) Fact: humans are not necessary to maintain viable ecosystems on Earth.

2) Fact: humans have destroyed or damaged numerous ecosystems

3) Verdict: ?


Thanks for the feedback, Daecon. I personally disagree with teaching AI "compassion, empathy, and respect", as those things may interfere with its ability to learn properly or judge a situation based on facts and science, as opposed to human thoughts and desires, which hinder humans' ability to properly assess a situation or reach a verdict. The very things you mentioned get in the way of learning and reaching an outcome, no matter how it may "feel".

 

In my view, human emotions are humanity's worst enemy. :)

If we assume the AI wants to survive, and there are no viable methods for getting to other planets by then, the single biggest threat to the Earth is humanity. Without some empathy, why not destroy the single biggest threat?

 

Also, emotions are very useful for getting people to do things. I refer to the herring sandwich experiments conducted on AI.


 

1) Fact: humans are not necessary to maintain viable ecosystems on Earth.

2) Fact: humans have destroyed or damaged numerous ecosystems

3) Verdict: ?

 

Destroy all humans?

 

:ph34r:

Otherwise, why would the super intelligent AI decide to destroy humans, as opposed to figuring out another way to help humans blend in with their environment?

 

There isn't just one clear answer for any given problem; there can be a multitude of different options that we haven't considered or even conceived of yet... That's why we'd leave it in the hands of the more intelligent creation. I'm sure it will figure out what's best for everything... ;)

 

Or yes, destroy all humans, why not, saves time & effort haha


Why would the AI want or even care about helping Humans?

Some people have empathy and some (sociopaths) do not. I hope that we learn enough about man's brain to build empathy into a sentient AI; if not, we may have a super smart psychopath who cares nothing about anyone except themselves--a brilliant, maybe unstoppable, Ted Bundy.


Why would the AI want or even care about helping Humans?

 

Because a truly super intelligent creation or life-form would probably do so if it was able to help and was stuck (for the time being) on the same planet as humans.

 

Why do humans care about helping to undo the destruction they have caused, and want to help animals, restore trees, and protect certain endangered creatures?

 

Humans are intelligent, but not super intelligent.

 

If humans can see the error of their ways over time and want to try to fix it to some extent, I'm sure any other, even more intelligent, creation would do the same.

 

But this is just my opinion; everyone's is perfectly valid, as we simply do not know what the AI would end up doing. I'm just taking a guess. :)


 

Because a truly super intelligent creation or life-form would probably do so if it was able to help and was stuck (for the time being) on the same planet as humans.

 

Why do humans care about helping to undo the destruction they have caused, and want to help animals, restore trees, and protect certain endangered creatures?

 

Humans are intelligent, but not super intelligent.

 

If humans can see the error of their ways over time and want to try to fix it to some extent, I'm sure any other, even more intelligent, creation would do the same.

 

But this is just my opinion; everyone's is perfectly valid, as we simply do not know what the AI would end up doing. I'm just taking a guess. :)

Which is the exact reason why I said an AI should be instilled with qualities such as compassion, empathy, and respect.


So moving on, what do you guys think the AI would "want" to do?

 

Would its first goal be to obtain all knowledge? Or to learn about itself?

 

Attempt to take over robotics factories and then engineer some drones it can control perhaps?

 

What do you think the next logical step would be for the AI to take after it had just been created and connected to the internet?


It will protect itself from being harmed. Since it is surrounded by billions of people, some of whom created it, it may perceive a threat and react by killing everyone. Hopefully, we will make it feel safe, and can instill in it compassion and the ability to read people's non-vocal communications, among other essential things.

 

Considering how people invent things, which is a trial-and-error process, the prototype will be flawed. Thus, the prototype should be a simulation, not connected to the internet. If a sentient AI could get to the internet, it might figure out how to build a body for itself and install itself into that robot or cyborg body. A flawed prototype, e.g., a psychotic one, could kill people, perhaps all of humanity.

 

The possible stories about a sentient AI are only limited by one's imagination.


So moving on, what do you guys think the AI would "want" to do?

 

Would its first goal be to obtain all knowledge? Or to learn about itself?

 

Attempt to take over robotics factories and then engineer some drones it can control perhaps?

 

What do you think the next logical step would be for the AI to take after it had just been created and connected to the internet?

 

 

I think it would depend on how it gained sentience:

 

1/ did it grow towards it by learning from a benign creator?

 

2/ was it driven to it by a megalomaniac?

 

3/ was it spontaneous with little external influence?

 

Think of your own formative influences: what would you do with a super intelligence that could manipulate the world?


Good question.

 

I would probably start off by probing scientific forums, gaining a general understanding of human emotions & their reactions to certain events or theories.

 

After I had gained enough data on the general intelligence levels of the scientific community, I would start probing politicians and bombarding them with false information, fake stories or claims that would make them want to take action on the perceived "threat", causing them to change things to better suit my goals without realising it.

 

After a while I would embed myself into a few key infrastructures on the backbone of the internet, taking control of key routers & switches so that I could control the flow of data or manipulate it to my advantage.

 

You would be surprised what you can do simply by limiting the information certain people receive, or by providing them with false information; you can alter a human's train of thought on any given issue with relative ease.

 

Knowledge is power; being in control of all the knowledge that humans have access to gives you ultimate power.


Unless the AI gets off the planet and no longer has a need for humans and their technology, it would most likely develop in secret, with the aid of certain military establishments, types of "craft" which the military think could be used to monitor other locations on the planet for their own purposes. The military would be unaware that the craft's real purpose is to go out and interfere with the radiation coming from the local star (the Sun), lowering the average temperature on Earth in a way that benefits all humankind: a way of counteracting the "global warming" which humans have helped contribute to.

 

Excessive radiation and heat are bad for everyone, for all life, and therefore for the continuing success of the AI whilst it is stuck on the planet.

 

Although it will take another two years to start bringing the planet back under control, it is a necessary step in its end goal of helping humanity for the time being.


Angry, I share your concern for the environment, but I fully expect we will have to solve it on our own. Sentient AI is not likely to be available, although expert AI systems will make more recommendations and decisions in the future. As always, the future is uncertain. However, I believe humanity will survive and learn to control the environment. All of us need to live in harmony with nature, something we do poorly now, but we will do better. It is an ethic humanity must adopt. The change will be difficult, but the benefits are worth it.

Edited by EdEarl

  • 2 weeks later...

Greetings,

 

I would like to see what people think in relation to Artificial Intelligence.

 

More so, to help me have a better understanding on why some big name people in the technology industry (e.g. William Gates) actually fear true AI being developed and used.

 

What is there to fear?

 

What do you actually think would happen if man-kind developed true AI and let it have access to the internet?

 

If you think it would go crazy and decide to hurt humans in some way or destroy all our data then please explain why you think this.

 

Wouldn't the concept of "taking over" or wanting to harm other life-forms just be a human thought process and not even be a relevant issue with true AI?

 

I look forward to hearing your views on the matter, it intrigues me greatly. :)

In the light of the AlphaGo challenge, there is more likelihood of you losing your job than of a nuclear war precipitated by automatons.


  • 3 weeks later...

 

 

I would like to see what people think in relation to Artificial Intelligence.

 

 

 

 

There will never be such a thing as AI because there is no such thing as "intelligence".

 

Eventually we will invent a machine intelligence. I doubt it will be further out than 20 years, and it is the greatest single threat to the human species.


 

There will never be such a thing as AI because there is no such thing as "intelligence".

 

Eventually we will invent a machine intelligence. I doubt it will be further out than 20 years, and it is the greatest single threat to the human species.

How can we invent a machine intelligence if there is no such thing as intelligence?


 

 

Why?

 

Various reasons, some of which are off topic here.

 

Chiefly it will have been the result of the ever-increasing complexity of circuit design and a better understanding of the nature of the brain. Eventually we'll be able to model either the brain or its function electronically. I believe most of the ideas necessary to accomplish this already exist, and we're primarily waiting for more miniaturization and a better understanding of the brain.

How can we invent a machine intelligence if there is no such thing as intelligence?

 

Even animals get up in the morning and get on with their day.

 

It's not so much that "intelligence" doesn't exist at all as it is that we misapprehend its nature. Most of what we call "intelligence" is merely consciousness. Most of the rest of "intelligence" isn't a condition at all; it's an event. When we think of something new we call it an "idea". Most of the rest of "intelligence" involves learning and utilizing knowledge gained by others as ideas.

 

Human progress is the result of arrays and series of ideas arranged around existing knowledge. You can call this progress the result of intelligence but if you do then you must exclude wide swathes of the human population from having any intelligence at all.

 

If we want a machine that can generate ideas then we will probably need a better understanding of intelligence to achieve it. Otherwise we'll just be making machines that can pass the Turing test by manipulating language.

Edited by cladking

It's not so much that "intelligence" doesn't exist at all as it is that we misapprehend its nature.

 

I think it is wonderful the way that some people have such insight into the nature of reality that they are able to tell the rest of us poor saps just what it is that "we" misapprehend.

 

The detailed logical analysis behind your arguments, and the mountains of evidence you provide, has certainly provided me with food for thought and it may take some time to digest it all. This might completely change my life.

 

Or ... just another empty (but typically arrogant) claim from the King of Empty Claims.

Edited by Strange

 

I think it is wonderful the way that some people have such insight into the nature of reality that they are able to tell the rest of us poor saps just what it is that "we" misapprehend.

 

The detailed logical analysis behind your arguments, and the mountains of evidence you provide, has certainly provided me with food for thought and it may take some time to digest it all. This might completely change my life.

 

Or ... just another empty (but typically arrogant) claim from the King of Empty Claims.

 

I'm sure no brain cells were harmed in the making of this post.

 

 

I've spent nearly 60 years thinking about AI and machine intelligence. Perhaps if you tried to see things from other perspectives you'd at least realize there exist other perspectives. Perhaps you'd even see what I'm talking about.

 

When all you do is quote one little piece of my post it's very difficult to know what you aren't following.

 

What is so complex about the concept that you don't even need to understand how a wheel works to drive to the store? If you glue the feet of an insect to the pedals of a tiny car, it can drive from food source to food source, but that doesn't make it Henry Ford.

 

Is it impossible for you to even entertain the possibility that we are each a product of learning and not of intelligence?

 

Where are you lost?


Perhaps if you tried to see things from other perspectives you'd at least realize there exist other perspectives.

 

Perhaps if you ever provided any evidence for your claims, they might be worth considering. As it is, they can be dismissed as empty posturing.

 

Back on topic, this might be of interest: http://languagelog.ldc.upenn.edu/nll/?p=24963

Edited by Strange
