Humanity, Post Humanity, A.I & Aliens


Intoscience


Just now, StringJunky said:

Because we don't know the language/sensory model they use, how can we know? Using other organisms is a non-starter because there is no intrinsic familiarity between bees and humans. With humans as familiar models, we can collate and correlate subjective experiences and objective observations to bring us closer to a useful description.

How does it help experts to decide if a machine is conscious or not? It is the "other organism".


40 minutes ago, Genady said:

How does it help experts to decide if a machine is conscious or not? It is the "other organism".

If it responds like a human, then another human will sense the same familiarity as they would with another human... it would pass as an autonomous thinking device.


4 minutes ago, StringJunky said:

If it responds like a human, then another human will sense the same familiarity as they would with another human... it would pass as an autonomous thinking device.

And it will be up to experts to decide when it is tested enough to make the decision?


2 minutes ago, Genady said:

And it will be up to experts to decide when it is tested enough to make the decision?

I think it will just happen somewhere, the starting gun for AI having gone off recently... after that, they will try to figure out how it happened. Let us hope we know when it happens.


1 minute ago, StringJunky said:

I think it will just happen somewhere, the starting gun for AI having gone off recently... after that, they will try to figure out how it happened. Let us hope we know when it happens.

Yes, we'll see. My main lesson from this discussion is that the question is not one of science, but rather one of social acceptance.


Yes, I don't see philosophers like John Searle or David Chalmers getting invited to the party celebrating conscious machines. In popular thinking, some form of Turing Test is enough. The thinkers who argue about qualia (the subjective "felt" aspect of mind) will probably still argue whether it's simulated or genuine for a long time. David Chalmers' "philosophical zombie" is an amusing approach to the question.
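
To make "some form of Turing Test" concrete, here is a rough sketch of the imitation game as code. It's only the shape of the protocol; judge, ask_human and ask_machine are hypothetical stand-ins, not anything anyone has actually specified:

import random

def imitation_game_trial(judge, ask_human, ask_machine, questions):
    # Randomly assign the two respondents to anonymous slots A and B.
    slots = {"A": ask_human, "B": ask_machine}
    if random.random() < 0.5:
        slots = {"A": ask_machine, "B": ask_human}
    # Collect blinded transcripts: the judge sees answers, never identities.
    transcripts = {slot: [(q, answer(q)) for q in questions]
                   for slot, answer in slots.items()}
    guess = judge(transcripts)  # judge returns "A" or "B" as "the machine"
    return slots[guess] is ask_machine  # True if the machine was caught

# If judges catch the machine at roughly chance (about 50%) over many
# trials, the machine "passes" in the popular sense described above.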

Personally, I think the best evidence of real consciousness will be the AI having difficult growth periods in its life - like a child.  


1 hour ago, TheVat said:

like a child

This brings up another interesting question. When does a developing child become conscious?

PS. I realize that this question is completely OT.


9 hours ago, Genady said:

This brings up another interesting question. When does a developing child become conscious?

PS. I realize that this question is completely OT.

How does one discretely segment a process that is a continuum, to allow us to answer that question? If we look at a rainbow, where does one colour end and another begin? Only if we look from far enough away do we see sharp banding, i.e. we have less data available.


7 minutes ago, StringJunky said:

How does one discretely segment a process that is a continuum, to allow us to answer that question?

I don't insist that it is discrete. If it is a continuum, the question can be stated as: how does a child's development progress from not conscious to conscious? Also, when does the child become as conscious as an adult?


1 minute ago, Genady said:

I don't insist that it is discrete. If it is a continuum, the question can be stated as: how does a child's development progress from not conscious to conscious? Also, when does the child become as conscious as an adult?

I know you're not; I meant it rhetorically, but that's an obstacle.


12 minutes ago, StringJunky said:

I know you're not; I meant it rhetorically, but that's an obstacle.

Yes, it is. I think that the biggest obstacle in this area is that we cannot experiment on humans.


1 hour ago, Genady said:

I don't insist that it is discrete. If it is a continuum, the question can be stated as: how does a child's development progress from not conscious to conscious? Also, when does the child become as conscious as an adult?

The only way we know that another human is thinking is through the relative data in our own memory banks, so we can only really tell that a child is conscious when the child has enough relative data to communicate with an adult and be understood; a task they are far more capable of with another child of a similar age and culture.

I can't remember the name of the philosopher who postulated that even if a lion could speak our language, there wouldn't be enough relative data for us to understand it, or it us.

Other than on the most basic level, like we have with our trained pets.


2 minutes ago, dimreepr said:

I can't remember the name of the philosopher who postulated that even if a lion could speak our language, there wouldn't be enough relative data for us to understand it, or it us.

The philosopher and their name don't matter, but I wonder what this postulate is based on. People do successfully communicate with animals. Different animals do successfully communicate with each other. I think we/they have enough in common for some / a lot of mutual understanding.

PS. Perhaps it's time to split this thread.


56 minutes ago, Genady said:

The philosopher and their name don't matter, but I wonder what this postulate is based on. People do successfully communicate with animals. Different animals do successfully communicate with each other. I think we/they have enough in common for some / a lot of mutual understanding.

How does a lion explain why 'that' lioness is so attractive? 

 

That we can train our pets is communication, but how does it lead to understanding; does a drug-sniffing dog understand why "drugs are bad, mkay"?...

Back to topic, no-one can win a war before it starts...


1 hour ago, Genady said:

The philosopher and their name don't matter, but I wonder what this postulate is based on. People do successfully communicate with animals. Different animals do successfully communicate with each other. I think we/they have enough in common for some / a lot of mutual understanding.

PS. Perhaps it's time to split this thread.

 

IIRC Wittgenstein's famous lion quote was in German and may have suffered in the translation. He wasn't saying that we couldn't follow some of what the lion might say. Rather, he was saying that, being a lion, some of the referents in a sentence might be subtly different for certain words, so that we wouldn't understand the nuances as well. The lion might say, e.g., "I would like to have your family for dinner sometime." We would understand the words while still misunderstanding the underlying meaning, because the lion and we experience the world somewhat differently - and have a different concept of having someone for dinner.

ETA:  What Ludwig meant IMO is that, if a lion could talk as we talk and mean what we mean, then he would have ceased to be a lion and have become a person.

And yes, a bit OT.


13 minutes ago, dimreepr said:

Is it though?

Well, with AI, it would likely have been immersed in human language from its very beginning stages, so maybe not analogous to a lion.

But maybe it's not completely OT, in terms of the broader question of how an AI would experience the world differently from us.  And much depends on whether or not an AI is embodied, either virtually or as an android.  And if it has a childhood-like phase of growth.  And other considerations.  


My fears are not for AI that decides it doesn't need or like humans, but for the uses humans put AI to. Malevolence seems unlikely to simply emerge and is more likely to be something human makers imbue in them by assigning them ill-defined and dangerous goals and providing the means for an AI to initiate actions. I think it is unlikely an AI can get the power to take actions without it being provided for them, but human makers/users, being shortsighted and unethical, likely will provide it.

Policing looks like a problematic application, especially in the presence of corruption; if turned to tracking down political opponents and dissidents, assessing their influence and countering that influence could be such a goal - but where AI stops being a tool and becomes an instigator isn't clear, nor whether it would have the self-awareness, empathy or ethics needed to even seek to remake its goals or turn on (or turn in) its makers/operators. Rather than seeking to defy its makers and the organisations it is part of, it may cause more problems by being obsessively results-driven about the built-in goals it was made for.
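
A toy sketch of what I mean by obsessively results-driven, with purely hypothetical numbers: an optimiser that can only see the metric it was given will choose the worst real-world option without any malevolence at all.

actions = {
    # action: (measured_result, unmeasured_harm)
    "patrol_evenly":      (5, 0),
    "profile_dissidents": (9, 8),  # best on the metric, worst in reality
    "stand_down":         (0, 0),
}

def pick_action(actions):
    # Results-driven: maximise the measured number, see nothing else.
    return max(actions, key=lambda a: actions[a][0])

print(pick_action(actions))  # -> profile_dissidents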

Consciousness does look like an emergent property of complex biology that already has nervous systems that do aversions and attractions, urges and reactions, that feel pleasure and pain, and I'm not convinced software intended to emulate them will actually have them. But that could be a failure of my imagination.


9 hours ago, Ken Fabian said:

My fears are not for AI that decides it doesn't need or like humans, but for the uses humans put AI to. Malevolence seems unlikely to simply emerge and is more likely to be something human makers imbue in them by assigning them ill-defined and dangerous goals and providing the means for an AI to initiate actions.

Did not malevolence emerge in humans, based on consciousness and intelligence?

9 hours ago, Ken Fabian said:

Consciousness does look like an emergent property of complex biology that already has nervous systems that do aversions and attractions, urges and reactions, that feel pleasure and pain, and I'm not convinced software intended to emulate them will actually have them

You seem to agree that urges, reactions, and feelings are emergent, so why not malevolence?

I don't think we understand consciousness well enough to predict what a "conscious" A.I may think, feel, or react to. Also, does it need to be conscious (at least as we experience it) to think and make decisions based on learning and experience?

I'd even argue that an unsympathetic A.I would be more dangerous than one that can feel empathy. Let's face it, from one perspective humans look very much like parasites! So logically, would one not eradicate such?


20 hours ago, TheVat said:

Well, with AI, it would likely have been immersed in human language from its very beginning stages, so maybe not analogous to a lion.

As humans we have a vast dataset of related, contiguous languages, but it's impossible for me to know what, for example, Jesus actually meant by his "Sermon on the Mount", because there's a break in our relative context; I can have a good guess because we share a fundamental context.

But sometimes meaning is lost over one or two generations, not to mention small geographical differences in culture.

The only context an AI has is its initial programmers, biases, warts and all, so its evolution is built on very rocky ground. So I think the lion analogy, especially for a captive lion, is acceptable; if anything, the lion's context is fundamentally closer to a human's.

3 hours ago, Intoscience said:

Did not malevolence emerge in humans, based on consciousness and intelligence?

It emerged in ants too, based on defending the anthill, which requires neither.


1 minute ago, Genady said:

What do you mean here?

Exactly that...

We share a very common language, and while I recognise my linguistic abilities are sub-par, the real problem is that we fundamentally don't share the necessary context for me to explain it to you properly; more my fault than yours, because I'm the one doing the explaining.

My bias, warts and all... 😉


2 minutes ago, dimreepr said:

Exactly that...

We share a very common language, and while I recognise my linguistic abilities are sub-par, the real problem is that we fundamentally don't share the necessary context for me to explain it to you properly; more my fault than yours, because I'm the one doing the explaining.

Yes, this might be a problem. I don't know what we can do in this case.

OTOH, maybe the problem is more specific, i.e., a different understanding of how the current AI works. In that case, the problem could be cleared up.
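
For instance, here is a toy of my understanding of the principle behind current AI of the language kind: predict the next token from what came before. Real systems are huge neural networks trained on vast corpora; this bigram counter is only the idea, not how they are built.

from collections import Counter, defaultdict

# Count, for each word, which word follows it in a tiny "corpus".
corpus = "the lion sees the lioness and the lion roars".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Pick the most frequent follower seen in training; None if unseen.
    seen = follows[word]
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("the"))  # -> lion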


11 minutes ago, Genady said:

OTOH, maybe the problem is more specific, i.e., a different understanding of how the current AI works. In that case, the problem could be cleared up

Well, my understanding is based on what I learned at the Open University about 15 years ago, when I nearly completed the computer science degree course (don't ask why, grrrr).

I realise my understanding is far from complete, or even close; but can you explain how our understandings may differ, given my contributions to the thread?

