Humanity, Post Humanity, A.I & Aliens


Intoscience


8 minutes ago, Genady said:

My point is that it is not a question of a wrong program but of a wrong use of it.

Is there a potential that, regardless of all mitigation to ensure a good program and very close control and management of its use, the program could become intelligent enough to rewrite its original program, whether through self-awareness, self-learning, or some other mechanism arising from advanced A.I. intelligence?

Edited by Intoscience
spelling

1 minute ago, Intoscience said:

Is there a potential that, regardless of all mitigation to ensure a good program and very close control and management of its use, the program could become intelligent enough to rewrite its original program, whether through self-awareness, self-learning, or some other mechanism arising from advanced A.I. intelligence?

AI isn't intelligent...

Any more than a lawn mower is intelligent because it knows how long to cut the grass...


5 minutes ago, Intoscience said:

Is there a potential that, regardless of all mitigation to ensure a good program and very close control and management of its use, the program could become intelligent enough to rewrite its original program, whether through self-awareness, self-learning, or some other mechanism arising from advanced A.I. intelligence?

Let's put the question straight. I guess you don't worry about the 'intelligent' aspect, but about the outcome. In this case, the question is:

"Is there a potential that, regardless of all mitigation to ensure a good program and very close control and management of its use, the program could rewrite its original program?"

It can be prevented.


9 minutes ago, Genady said:

Let's put the question straight. I guess you don't worry about the 'intelligent' aspect, but about the outcome. In this case, the question is:

"Is there a potential that, regardless of all mitigation to ensure a good program and very close control and management of its use, the program could rewrite its original program?"

It can be prevented.

Ok, thanks for clarifying. This is somewhat reassuring.

14 minutes ago, dimreepr said:

The ability to think beyond one's programming. 

Self-awareness then?


On 3/31/2023 at 1:31 PM, Intoscience said:

Ok, thanks for clarifying. This is somewhat reassuring.

Self-awareness then?

AI is far more likely to benefit humanity than defeat it; for example, there are 10 to the power of 80 atoms in the universe, but 10 to the power of 300 possible proteins available from those atoms, and AI is very good at sorting the wheat from the chaff.
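The arithmetic behind that comparison is easy to check. A minimal sketch (the 20-amino-acid alphabet and the 230-residue chain length are illustrative assumptions of mine, not figures from the post above):

```python
# Rough combinatorics behind "more possible proteins than atoms":
# with an alphabet of 20 standard amino acids, a chain of just 230
# residues already has about 10^299 possible sequences, dwarfing the
# roughly 10^80 atoms in the observable universe.
import math

AMINO_ACIDS = 20
CHAIN_LENGTH = 230  # illustrative protein length

sequences = AMINO_ACIDS ** CHAIN_LENGTH
print(f"possible sequences ~ 10^{int(math.log10(sequences))}")  # 10^299
```

The point stands whatever the exact numbers: the space of candidate sequences is astronomically larger than anything that could be enumerated by brute force, which is why machine-learned search through it is attractive.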


14 hours ago, Endy0816 said:

Issue is a program can also be made partly random.

Can you explain exactly how that is thinking beyond its programming?

My lawnmower isn't intelligent because it knows how short to cut the grass and it's next-door to useless if it does so randomly, because it would take me ages to mow the lawn at the right height...

23 hours ago, Genady said:

If it is programmed / trained to throw away the wheat and to keep the chaff, it will do it as well.

Absolutely, but we will still be left with two piles, one of chaff and one of wheat, sorted incredibly quickly and accurately; and it will be the human that decides to eat from the wrong pile...

Edited by dimreepr

3 hours ago, dimreepr said:

Can you explain exactly how that is thinking beyond its programming?

My lawnmower isn't intelligent because it knows how short to cut the grass and it's next-door to useless if it does so randomly, because it would take me ages to mow the lawn at the right height...

Absolutely, but we will still be left with two piles, one of chaff and one of wheat, sorted incredibly quickly and accurately; and it will be the human that decides to eat from the wrong pile...

We normally use randomness only in very limited ways in a program. That's merely a choice, though.

There may not be something it'll need to think outside of.


19 hours ago, Endy0816 said:

We normally use randomness only in very limited ways in a program. That's merely a choice, though.

There may not be something it'll need to think outside of.

That's also true of an ant hill, which is also not intelligent; humans think outside of the box every day.


6 hours ago, dimreepr said:

That's also true of an ant hill, which is also not intelligent; humans think outside of the box every day.

True, not enough by itself, but it allows the equivalent of mutations. A program can start as one piece of code and end up as another.
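A toy sketch of that idea (every name here is invented for illustration): if a program is represented as data, random mutation can leave you, genetic-programming style, with different code at the end of a run than you started with.

```python
# Toy "program as data" with random mutation: the instruction list
# that finishes the loop need not be the one it started with.
import random

OPS = ["+1", "-1", "*2"]  # tiny invented instruction set

def run(program, x=0):
    """Interpret the program on an integer accumulator."""
    for op in program:
        if op == "+1":
            x += 1
        elif op == "-1":
            x -= 1
        elif op == "*2":
            x *= 2
    return x

def mutate(program, rng):
    """Replace one randomly chosen instruction with another."""
    program = list(program)
    program[rng.randrange(len(program))] = rng.choice(OPS)
    return program

rng = random.Random(42)  # seeded for repeatability
prog = ["+1", "+1", "*2"]
for _ in range(5):
    prog = mutate(prog, rng)
print(prog, "->", run(prog))
```

Whether this kind of drift could ever amount to "thinking beyond its programming" is, of course, exactly the question being debated in this thread.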


17 hours ago, Endy0816 said:

True, not enough by itself, but it allows the equivalent of mutations. A program can start as one piece of code and end up as another.

That's certainly the most likely way to a machine consciousness, but how would we know? 


1 hour ago, dimreepr said:

That's certainly the most likely way to a machine consciousness, but how would we know? 

Yes, we don't know how we would know if a machine were conscious. We also don't know how we would know if a machine were intelligent. Is there a connection between these two? Does intelligence require consciousness? Does consciousness require intelligence?


39 minutes ago, Genady said:

Yes, we don't know how we would know if a machine were conscious. We also don't know how we would know if a machine were intelligent. Is there a connection between these two? Does intelligence require consciousness? Does consciousness require intelligence?

A machine is conscious when we can't discern the difference functionally.


1 hour ago, StringJunky said:

A machine is conscious when we can't discern the difference functionally.

I have many questions about this statement. Here are some:

1. Does it apply only to "a machine"? If so, what is "a machine"? If not, what else is it applicable to?

2. Is being conscious necessary, sufficient, or both for us being unable to discern the difference functionally? IOW:

2a. If we can't discern the difference functionally, then it is conscious? = If it is not conscious, then we can discern the difference functionally?

2b. If it is conscious, then we can't discern the difference functionally? = If we can discern the difference functionally, then it is not conscious?

2c. Both 2a and 2b?

3. What is "the difference functionally"?


1 hour ago, Genady said:

I have many questions about this statement. Here are some:

1. Does it apply only to "a machine"? If so, what is "a machine"? If not, what else is it applicable to?

2. Is being conscious necessary, sufficient, or both for us being unable to discern the difference functionally? IOW:

2a. If we can't discern the difference functionally, then it is conscious? = If it is not conscious, then we can discern the difference functionally?

2b. If it is conscious, then we can't discern the difference functionally? = If we can discern the difference functionally, then it is not conscious?

2c. Both 2a and 2b?

3. What is "the difference functionally"?

1. A machine is an apparatus, which may be of physical or virtual construction, that can perform useful work.

2c. Once it passes that test, then we know what is necessary for consciousness.

3. I wanted to keep to the operative/process side, rather than the nuts and bolts... obviously, such a device won't be the same in construction and likely won't pass sight differentiation.

We cannot know a priori what we need to know. This seems to me to be a feature of emergent phenomena.

Edited by StringJunky

2 hours ago, StringJunky said:

1. A machine is an apparatus, which may be of physical or virtual construction, that can perform useful work.

2c. Once it passes that test, then we know what is necessary for consciousness.

3. I wanted to keep to the operative/process side, rather than the nuts and bolts... obviously, such a device won't be the same in construction and likely won't pass sight differentiation.

We cannot know a priori what we need to know. This seems to me to be a feature of emergent phenomena.

I see a problem with 2c. If we find a difference, then it failed the test, and we know that it is not conscious. But as long as we don't find a difference, we don't know if there is a difference or there is not. How do we decide that it passed the test?


36 minutes ago, Genady said:

I see a problem with 2c. If we find a difference, then it failed the test, and we know that it is not conscious. But as long as we don't find a difference, we don't know if there is a difference or there is not. How do we decide that it passed the test?

When a posse of expert people say it has. Confirmation has to come from that 'system' it is trying to emulate. That system is our consciousness, which is needed as the reference point. 

Edited by StringJunky

4 minutes ago, StringJunky said:

When a posse of expert people say it has. Confirmation has to come from that 'system' it is trying to emulate. That system is our consciousness, which is needed as the reference point.

Well, expert people cannot agree if bees are or are not conscious, for example.


3 minutes ago, Genady said:

Well, expert people cannot agree if bees are or are not conscious, for example.

Using anything other than ourselves will get you nowhere, because what other reference do you have? We don't know the subjective experience of bees, but we do of ourselves.


2 minutes ago, StringJunky said:

Using anything other than ourselves will get you nowhere, because what other reference do you have? We don't know the subjective experience of bees, but we do of ourselves.

In both cases, a machine and the bees, we use ourselves as a reference. In both cases, we don't know the subjective experience of the object, a machine or the bees.

While using ourselves as a reference, the expert people cannot decide whether bees are conscious or not.


13 minutes ago, Genady said:

In both cases, a machine and the bees, we use ourselves as a reference. In both cases, we don't know the subjective experience of the object, a machine or the bees.

While using ourselves as a reference, the expert people cannot decide whether bees are conscious or not.

Because we don't know the language/sensory model they use, how can we know? Using other organisms is a non-starter because there is no intrinsic familiarity between bees and humans. With humans as familiar models, we can collate and correlate subjective experiences and objective observations to bring us closer to a useful description.

Edited by StringJunky
